MathGroup Archive 1998

why use ValueBox? (was: saving notebook styles)


  • To: mathgroup@smc.vnet.net
  • Subject: [mg11018] why use ValueBox? (was: saving notebook styles)
  • From: "P.J. Hinton" <paulh@wolfram.com>
  • Date: Mon, 16 Feb 1998 18:15:23 -0500
  • Organization: Wolfram Research, Inc.

On Mon, 16 Feb 1998, Paul Abbott wrote:

> P.J. Hinton wrote:
> 
> > $TopDirectory and $PreferencesDirectory may be determined by pasting
> > value display objects in a notebook text cell:
> 
> Why not just paste $TopDirectory or $PreferencesDirectory into an Input
> cell and evaluate it?  On my Macintosh I get
> 
> In[1]:= $TopDirectory
> Out[1]= Tigger:Applications:Mathematica 3.0.1
> In[2]:= $PreferencesDirectory
> Out[2]= Tigger:System:Preferences:Mathematica:3.0
> 
> which is equivalent to the information obtained via Create Value Display Object.

For most cases, this approach will suffice.  But I'm trying to be as
accurate and general as possible, and that means taking into account
the ever-so-distant possibility that the user is running a remote
kernel.  In that case, the global session variables in question will
have two completely different values for the front end and the kernel.
Since we're dealing with files that are relevant to the front end's
filesystem, the use of ValueBoxes is probably the safer (albeit
awkward) way to go.
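For readers unfamiliar with these objects: a value display object in a
text cell is represented by a ValueBox in the underlying cell
expression. Roughly what the Create Value Display Object command
produces (a sketch; the exact box structure may differ slightly):

```mathematica
(* a text cell containing a value display object for $TopDirectory *)
Cell[TextData[{
  "The front end's installation directory is ",
  Cell[BoxData[ValueBox["$TopDirectory"]]]
}], "Text"]
```

Because the front end itself fills in the ValueBox, the displayed path
reflects the front end's filesystem even when the kernel is remote.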

--
P.J. Hinton <paulh@wolfram.com>
Mathematica Programming Group
Wolfram Research, Inc.
http://www.wolfram.com/~paulh/


Re: Is there a lowest Eigenvalues function around?


  • To: mathgroup@smc.vnet.net
  • Subject: [mg11018] Re: Is there a lowest Eigenvalues function around?
  • From: Daniel Lichtblau <danl@wolfram.com>
  • Date: Mon, 16 Feb 1998 10:36:44 -0600 (CST)
  • Organization: Wolfram Research, Inc.

Christopher R. Carlen wrote:
> 
> I have to find the eigenvalues of very large matrices, i.e. 1024x1024 up
> to over 10000x10000.  I run out of memory when trying to do more than
> about 1600x1600.  I know there are algorithms to find the lowest or
> highest eigenvalues of a matrix, but the Mathematica function
> Eigenvalues[] finds all of them.
> 
> Does anyone know if there is an implementation of a lowest-eigenvalues
> function anywhere?  I have looked around at www.wolfram.com but didn't
> find anything.
> 
> Also, I wonder about memory usage in storing matrices.  If a matrix is
> sparse, does it require less memory to store and manipulate than a
> not-sparse matrix?  Does a matrix with only real values require less
> memory than a complex matrix?
> 
> Thanks.
> --
> _______________________
> Christopher R. Carlen
> crobc@epix.net  <--- Reply here, please.
> carlenc@cs.moravian.edu
> My OS is Linux v2


I do not know of such an implementation. Let me address the questions in
the last paragraph, then I'll come back to this sparse eigenvalue
problem.

Our vanilla matrix representation is a list of lists. Zeroes are
represented explicitly, hence there is no savings in storage. A
real-only matrix will be somewhat smaller than a generically
complex-valued matrix. In our next release there will be general
improvements in our memory requirements for matrices of machine
numbers (real or complex). All the same, zeroes will occupy space in
matrices. There will also be some support for solving sparse systems
of equations, not using matrices but rather a sparse input, which
brings us back to finding the smallest eigenvalues.

For anything really useful, you need to consult the numerical analysis
experts. I'll outline a standard quick-and-dirty method for
approximating the smallest eigenvalue of a nonsingular matrix A. This
smallest eigenvalue is the reciprocal of the largest eigenvalue of
Inverse[A]. To approximate the largest eigenvalue of a matrix B, one
can use the power method, which works as follows. Take a random
vector, vec, and form

	oldvec = vec; vec = B.oldvec;

Do this repeatedly. The component in the direction of the largest
eigenspace will come to dominate after a few iterations. Now take the
ratio of, say, the largest component of vec to the same component of
oldvec to get the largest eigenvalue. (I truly hope no numerical
analysts are reading this because, except for the fact that it is easy
to code, this is a quite naive method compared to its more
sophisticated cousins, with whom I do not pretend to be acquainted.)
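A minimal Mathematica sketch of this power iteration (the function and
variable names are mine, not standard; the vector is rescaled each
step to avoid overflow):

```mathematica
(* estimate the largest-magnitude eigenvalue of matrix b *)
powerLargest[b_, iters_:50] :=
  Module[{vec = Table[Random[] - 0.5, {Length[b]}], w, p},
    Do[w = b . vec; vec = w/Max[Abs[w]], {iters}];
    w = b . vec;
    (* ratio of the largest component of b.vec to that of vec *)
    p = Position[Abs[vec], Max[Abs[vec]]][[1, 1]];
    w[[p]]/vec[[p]]]
```

For the smallest eigenvalue of a nonsingular A, apply this with
B = Inverse[A] and take the reciprocal, or solve A.vec == oldvec at
each step instead of forming the inverse, as the next paragraph
describes.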

But we need the largest eigenvalue of Inverse[A], and we have only A,
not its inverse. So how do we form vec = Inverse[A].oldvec? This
amounts to solving a linear system: an equivalent formulation is to
solve for vec in the equation A.vec == oldvec. Our next release of
Mathematica will provide some support
for this, given sparse A, but for now you might want to look into a
quick-and-dirty implementation of, say, the conjugate-gradient method
for solving a sparse system of linear equations. You could represent
the matrix as a set of triples of the form {row,col,val} and then write
a sparseDot routine to handle matrix.vector operations (as would be
needed to implement CG). If you like I can send you a primitive CG
implementation I once wrote. You'll first need to promise on something
sufficiently sacred that you will not hold it against me if the code
fails miserably for your task at hand; as you might guess, I have but
little confidence that my CG is bug-free or terribly robust, and I've
no idea what size problem it might handle.
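The triples representation and a sparseDot along the lines described
might look like this (a sketch; the names and argument order are
illustrative, and this is not the CG implementation mentioned above):

```mathematica
(* triples is a list of {row, col, val} entries; vec has length n *)
sparseDot[triples_, vec_, n_] :=
  Module[{result = Table[0., {n}]},
    Do[result[[triples[[k, 1]]]] +=
        triples[[k, 3]]*vec[[triples[[k, 2]]]],
      {k, Length[triples]}];
    result]
```

With such a matrix.vector routine in hand, a conjugate-gradient solver
never needs the full dense matrix, only repeated calls to sparseDot.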

If you get outrageously large eigenvalues for Inverse[A] then you might
well suspect that the smallest eigenvalue of A is simply zero. In other
words, even if A is singular you can expect the method to give
more-or-less the correct smallest eigenvalue.

To get better approximations of more than one small eigenvalue, using
faster code, you might want to look into the Lanczos algorithm.
Possibly there is canned software available that could be called via
MathLink. Below are some URLs that may be of help.

http://www.netlib.org/liblist.html
http://www.caam.rice.edu/~kristyn/parpack_home.html
http://www.netlib.org/scalapack/readme.arpack
http://www.netlib.org/sparse/index.html


Daniel Lichtblau
Wolfram Research


