Re: Parallel programming.
- To: mathgroup at smc.vnet.net
- Subject: [mg22938] Re: Parallel programming.
- From: Jens-Peer Kuska <kuska at informatik.uni-leipzig.de>
- Date: Fri, 7 Apr 2000 02:54:25 -0400 (EDT)
- Organization: Universitaet Leipzig
- References: <8cha2f$975@smc.vnet.net>
- Sender: owner-wri-mathgroup at wolfram.com
Hi,

it works fine. It is pure Mathematica and connects the kernels via MathLink. To run it on a 4-CPU machine you need 4 kernel licenses. We use it here with 10 site licenses on an Origin with 4 CPUs and on a Linux cluster.

The speed gain with Mathematica is large -- on my test problems it was easy to obtain the "ideal" speed-up of a factor of N when N CPUs are used. But it should be clear that you waste CPU power and memory if you run only numerical simulations. MathLink itself can be used to do this (without a kernel license), and MPI is a free and (from my point of view) more robust protocol for doing numerics only. The main advantage in that case is that you get the speed boost from the C program *and* a possible speed gain from parallel execution. For 4 CPUs this can mean 120x faster with C and parallel computing, instead of a factor of 4 from the parallel computing toolkit alone.

The parallel toolkit implements a simple master/worker model, and you will need some additional programming to set up other models or topologies (a minimal sketch of the master/worker pattern follows at the end of this message). For simple matrix problems I can't recommend any parallel programming: the overhead (for shared-memory implementations as well as for distributed-memory applications) is too large. The only pure matrix operations that look good are LU solves and iterative solvers, but especially LU factorisations will be a bit hard to implement (pivoting). Only a few parallel algorithms for symbolic computing are known -- but this is the topic where the parallel toolkit on top of Mathematica shows its true power.

You can have two examples of a parallel Runge-Kutta code for 2 CPUs: one as a pure C/MathLink version and one as a Mathematica/Parallel Computing Toolkit version. The pure C/MathLink version is called from Mathematica and sends the results back to one kernel when the parallel computation is finished.

Hope that helps

Jens

"Dr. David Kirkby" wrote:
>
> I use Mathematica at work on several fast machines, and at home on a
> rather old Sun SPARC 20. The latter machine does have the advantage of
> multiple CPUs, although none are exactly fast (quad 125 MHz
> HyperSPARCs). Mathematica does not use these multiple CPUs, but I see
> there is a parallel computing toolkit available now.
> http://www.wolfram.com/news/pct.html
> Does anyone know how this works? I can't justify the cost for a home
> machine, but given it is all written in Mathematica code (I believe), it
> would seem that it would not take a lot of effort to implement some of
> this, for simple matrix problems. My licence allows me to run multiple
> kernels on this quad-CPU SPARC, suggesting it would be possible to use
> the power of my 4 CPUs.
>
> Any suggestions or thoughts?
>
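
P.S. Here is the minimal master/worker sketch mentioned above. The function names (LaunchKernels, ParallelMap, CloseKernels) are those of the parallel tools shipped with recent Mathematica versions, not necessarily the commands of the Parallel Computing Toolkit itself, so take them as an assumption -- the pattern is the same either way: the master kernel hands independent tasks to idle worker kernels and collects the results.

    (* master/worker sketch -- function names assumed from the
       built-in parallel tools, not from the toolkit of that time *)
    LaunchKernels[4];                         (* one worker kernel per CPU *)
    polys = Table[Expand[(x + k)^20 - 1], {k, 1, 40}];
    (* each polynomial is factored on whichever worker is free;
       a symbolic, coarse-grained task is where the toolkit pays off *)
    factored = ParallelMap[Factor, polys];
    CloseKernels[];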
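
To check the factor-of-N claim on your own machine, a wall-clock comparison of the serial and parallel runs of the same task is enough. Again only a sketch, assuming the same built-in functions as above plus AbsoluteTiming:

    (* wall-clock time, serial vs. parallel, for the same task *)
    serial   = First @ AbsoluteTiming[ Map[Factor, polys]; ];
    parallel = First @ AbsoluteTiming[ ParallelMap[Factor, polys]; ];
    serial/parallel   (* close to the number of worker kernels for
                         coarse-grained, independent tasks *)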