MathGroup Archive 2000


Re: Parallel programming.

  • To: mathgroup at smc.vnet.net
  • Subject: [mg22962] Re: Parallel programming.
  • From: dek at adsl-63-193-244-224.dsl.snfc21.pacbell.net (David Konerding)
  • Date: Fri, 7 Apr 2000 02:54:43 -0400 (EDT)
  • Organization: SBC Internet Services
  • References: <8cha2f$975@smc.vnet.net>
  • Sender: owner-wri-mathgroup at wolfram.com

On 6 Apr 2000 02:15:43 -0400, Dr. David Kirkby <davek at medphys.ucl.ac.uk> wrote:
>I use Mathematica at work on several fast machines, and at home on a
>rather old Sun SPARC 20. The latter machine does have the advantage of
>multiple CPUs, although none are exactly fast (quad 125 MHz
>HyperSPARCs). Mathematica does not use these multiple CPUs, but I see
>there is a parallel computing toolkit available now.
>http://www.wolfram.com/news/pct.html
>Does anyone know how this works? I can't justify the cost for a home
>machine, but given it is all written in Mathematica code (I believe), it
>would seem that it would not take a lot of effort to implement some of
>this for simple matrix problems. My licence allows me to run multiple
>kernels on this quad-CPU SPARC, suggesting it would be possible to use
>the power of my 4 CPUs.

I just took a look at the page.  It looks pretty obvious to me:
they have a method for launching Mathematica kernels on remote machines,
then passing out work units to the slave kernels.  This sort of
distributed computing based on message passing is nothing new,
but the convenience of having it hooked into the Mathematica
front end is really nice.

There is a "Parallel" package of code.  One function is ParallelEvaluate,
which wraps around most ordinary Mathematica expressions and "automagically"
runs your code in parallel (where it can be parallelized).  For example,
matrix multiplication could be farmed out in pieces to slave kernels,
then reassembled on the main kernel when complete.
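
As a rough sketch of that idea (I'm assuming a ParallelMap-style function
that hands list elements out to the slave kernels; the exact names in the
shipped toolkit may differ):

```mathematica
(* Hypothetical sketch: split A into blocks of 25 rows, multiply each
   block by B on a slave kernel, then Join the partial products back
   together on the main kernel.  ParallelMap is an assumption here,
   not the toolkit's confirmed interface. *)
a = Table[Random[], {100}, {100}];
b = Table[Random[], {100}, {100}];

product = Join @@ ParallelMap[(#.b) &, Partition[a, 25]];
```

Each slave only ever sees a 25 x 100 slice of A plus a copy of B, so the
work units are independent and the main kernel just concatenates the
results.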

On the other hand, they also provide support for making your own programs
using scheduling of your own design.  This is typically necessary for
"hard" problems where the computer can't automagically determine
the optimal parallelism.  
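
For the do-it-yourself case, I'd imagine something along these lines
(Send/Receive-style primitives and the `slaves` kernel list are my
assumptions about the low-level interface, not confirmed from the docs):

```mathematica
(* Hypothetical sketch of hand-rolled scheduling: deal independent
   work units out to slave kernels round-robin, then collect the
   partial results and combine them.  Send, Receive, and slaves are
   assumed names. *)
tasks = Partition[Range[1000], 100];   (* ten independent work units *)
Do[
  Send[slaves[[Mod[i - 1, Length[slaves]] + 1]],
       Total[tasks[[i]]]],
  {i, Length[tasks]}];
partials = Table[Receive[], {Length[tasks]}];
Total[partials]   (* the sum 1 + 2 + ... + 1000 *)
```

The point is that you, not the toolkit, decide how the work is chunked
and which kernel gets which chunk, which is what you need when the
pieces take wildly different amounts of time.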

It sounds to me like your quad SPARC might work pretty well for this,
but I cannot make any guarantees.  Alternatively, a good fast new
PC costing about $2,000 might actually give you better performance than
the quad SPARC.  Furthermore, if you were really into it,
you could get parallel Mathematica running on a cluster of PC boxes using
100BT interconnect.


