MathGroup Archive 2007

Re: Dependence of precision on execution speed of Inverse

  • To: mathgroup at
  • Subject: [mg81714] Re: Dependence of precision on execution speed of Inverse
  • From: Daniel Lichtblau <danl at>
  • Date: Tue, 2 Oct 2007 05:34:06 -0400 (EDT)
  • References: <002f01c80444$97d4ebe0$8f01a8c0@moose>

Andrew Moylan wrote:
> [...]
> Ah I see, thanks for your reply Daniel. So the arbitrary precision 
> numbers in Mathematica are each something like a struct with {a pointer 
> to the arbitrary precision mantissa data, the exponent, the length of 
> the mantissa data};

Correct.

> and because each element in a matrix could in 
> principle have a *different* arbitrary precision, there's no way to pack 
> the array into a contiguous lump of memory. So there's no way around 
> de-referencing a lot of pointers.

Also correct.
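
In rough C terms, each bignum might look something like the layout
below (a hypothetical sketch; the names and fields are illustrative,
not Mathematica's actual internals):

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical arbitrary-precision number; fields illustrative. */
    typedef struct {
        uint64_t *mantissa;   /* heap-allocated mantissa limbs */
        int64_t   exponent;   /* binary exponent */
        size_t    nlimbs;     /* length of the mantissa data */
    } BigFloat;

    /* A matrix is then an array of BigFloat (or of pointers to them),
       so a dot product chases one pointer per element rather than
       streaming through one contiguous block of memory. */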

> But Daniel, would you agree that for (hypothetical) *fixed* precision 
> (across the whole matrix) non-machine-precision matrices of numbers, it 
> *would* be possible to create the analogue of packed arrays and 
> therefore make optimised routines to run on them, analogous to the 
> routines that currently operate on packed arrays of machine-precision 
> numbers?

Possibly. But the expected performance improvements might or might not
actually materialize. More below.
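
For a uniform, fixed precision one could in principle pack everything
into contiguous storage, along these lines (again a hypothetical
sketch, not an existing Mathematica structure):

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical packed array at one fixed precision: all mantissa
       limbs in one contiguous block, exponents in another. */
    typedef struct {
        size_t    n;          /* number of entries */
        size_t    limbs_per;  /* limbs per mantissa, same for all */
        uint64_t *mantissas;  /* n * limbs_per limbs, contiguous */
        int64_t  *exponents;  /* n exponents, contiguous */
    } PackedBigArray;

    /* Entry i's mantissa starts at mantissas + i*limbs_per, so a
       traversal is a linear scan with cache-friendly access,
       analogous to packed arrays of machine doubles. */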

> I guess one can write code (in C++ or such) that can invert
> non-machine-precision matrices (and do other operations for which 
> Mathematica employs packed arrays only for machine precision numbers) 
> tens of times faster than Mathematica can, by combining something like 
> the GNU multiple precision library (GMP) with BLAS-like 
> linear algebra code.
> I don't know whether the efficiency of linear algebra at 
> higher-than-machine-precision affects many users, but it has come up in 
> my application so I am quite interested in the possibilities!

The question amounts to this. How much of the relative time difference 
(vs. machine arithmetic linear algebra using LAPACK/BLAS) is due to 
locality of reference for, say, dot products, and how much to software 
implementation of the underlying arithmetic? I do not know the answer. 
If a large part is from the arithmetic, then you gain but little from 
emulation of packing for bignums.
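
One way to get a rough feel for the arithmetic cost in isolation is to
time something like a plain GMP dot product (a sketch assuming a fixed
256-bit precision; compile with -lgmp):

    #include <gmp.h>
    #include <stdio.h>

    int main(void)
    {
        enum { N = 1000 };
        mpf_set_default_prec(256);   /* fixed software precision */

        mpf_t x[N], y[N], acc, tmp;
        mpf_init(acc);
        mpf_init(tmp);
        for (int i = 0; i < N; i++) {
            mpf_init(x[i]);
            mpf_init(y[i]);
            mpf_set_d(x[i], 1.0 / (i + 1));
            mpf_set_d(y[i], (double)(i + 1));
        }

        /* acc = Sum x[i]*y[i]; every step is a software multiply
           and add, regardless of how the operands are laid out. */
        for (int i = 0; i < N; i++) {
            mpf_mul(tmp, x[i], y[i]);
            mpf_add(acc, acc, tmp);
        }

        gmp_printf("dot = %.30Ff\n", acc);   /* about 1000 */

        for (int i = 0; i < N; i++) {
            mpf_clear(x[i]);
            mpf_clear(y[i]);
        }
        mpf_clear(acc);
        mpf_clear(tmp);
        return 0;
    }

Comparing that against a double-precision loop of the same shape gives
some idea of how much is arithmetic and how much is memory layout.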

Daniel Lichtblau
Wolfram Research
