RE: Dependence of precision on execution speed of Inverse
*To*: mathgroup at smc.vnet.net
*Subject*: [mg81715] RE: Dependence of precision on execution speed of Inverse
*From*: "Andrew Moylan" <andrew.j.moylan at gmail.com>
*Date*: Tue, 2 Oct 2007 05:34:37 -0400 (EDT)
*References*: <002f01c80444$97d4ebe0$8f01a8c0@moose> <47012632.6050307@wolfram.com>
Yep, I agree, Daniel; that's the crucial question. My implicit assumption has
been that the underlying arbitrary-precision arithmetic isn't the main
contributor to the factor of ~150, but perhaps it is.
-----Original Message-----
From: Daniel Lichtblau [mailto:danl at wolfram.com]
Sent: Tuesday, 2 October 2007 2:54 AM
To: Andrew Moylan
Cc: Mathgroup
Subject: [mg81715] Re: Dependence of precision on execution speed of Inverse
Andrew Moylan wrote:
> [...]
>
> Ah I see, thanks for your reply Daniel. So the arbitrary precision
> numbers in Mathematica are each something like a struct with {a
> pointer to the arbitrary precision mantissa data, the exponent, the
> length of the mantissa data};
Correct.
> and because each element in a matrix could in principle have a
> *different* arbitrary precision, there's no way to pack the array into
> a contiguous lump of memory. So there's no way around de-referencing a
> lot of pointers.
Also correct.
> But Daniel, would you agree that for (hypothetical) *fixed* precision
> (across the whole matrix) non-machine-precision matrices of numbers,
> it
> *would* be possible to create the analogue of packed arrays and
> therefore make optimised routines to run on them, analogous to the
> routines that currently operate on packed arrays of machine-precision
> numbers?
Possibly. But the performance improvements might or might not actually
materialize. More below.
> I guess one could write code (in C++ or such) that can invert
> non-machine-precision matrices (and do other operations for which
> Mathematica employs packed arrays only for machine-precision numbers)
> tens of times faster than Mathematica can, by combining something like
> the GNU multiple precision library (http://gmplib.org) with BLAS-like
> linear algebra code.
>
> I don't know whether the efficiency of linear algebra at
> higher-than-machine precision affects many users, but it has come up
> in my application, so I am quite interested in the possibilities!
The question amounts to this. How much of the relative time difference
(vs. machine arithmetic linear algebra using Lapack/BLAS) is due to
locality of reference for, say, dot products, and how much to software
implementation of the underlying arithmetic? I do not know the answer.
If a large part is from the arithmetic, then you gain but little from
emulation of packing for bignums.
Daniel Lichtblau
Wolfram Research