MathGroup Archive 2001


Re: ParallelMap inefficient?

  • To: mathgroup at
  • Subject: [mg27336] Re: ParallelMap inefficient?
  • From: Mike Yukish <may106 at>
  • Date: Wed, 21 Feb 2001 03:17:14 -0500 (EST)
  • Organization: Penn State University, Center for Academic Computing
  • References: <96irro$> <XPIj6.5844$> <96q2ha$> <96t8op$>
  • Sender: owner-wri-mathgroup at

Hello again,

Jens-Peer Kuska wrote:

> In the simple examples you gave, the overhead for exchanging data
> will dominate almost everything.

I've traded email with the Mathematica support folks since, and here is the nugget of my findings.

Make a list of roughly 100 elements:

list = Range[0, 100];

Start one remote processor. Compare these calls...

ParallelMap[Sin, list];

Map[RemoteEvaluate[Sin[#]] &, list];
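
To measure the wall-clock time of each call, including the communication wait, you can
bracket it with SessionTime[ ] (a sketch; Timing[ ] counts only the master kernel's CPU
time, so it misses most of the overhead):

t0 = SessionTime[]; ParallelMap[Sin, list]; SessionTime[] - t0

t0 = SessionTime[]; Map[RemoteEvaluate[Sin[#]] &, list]; SessionTime[] - t0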

Both calls do the same thing, and in theory they involve exactly the same amount of
communication and computation. Yet there is a six-fold difference between them in the
time spent on anything other than calculating: the first call takes 200 seconds, the
second takes 30 seconds. With two processors it is still better to Map[ ] across a
single remote kernel than to ParallelMap[ ] across two remotes, and so on up to six
or so remote processors. These processors are PCs communicating across a 10 Mbit
Ethernet hub. An even faster call is:


RemoteEvaluate[Map[Sin, list]];

Which is almost instantaneous. This points out that it is much better to split a
list up into N segments for N processors and farm them out, than it is to let
ParallelMap[ ] handle the comms for you. The advantages of using ParallelMap[ ]
are that it takes care of the admin details, and that it copes with cases where the
task time differs from element to element.
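
For completeness, here is a sketch of the split-and-farm idea, assuming the toolkit's
Send[ ] and Receive[ ] and a list kernels of already-launched remote kernels, and
assuming the list splits into exactly one chunk per kernel (untested, so treat the
details as approximate):

size = Ceiling[Length[list]/Length[kernels]];
chunks = Partition[list, size, size, 1, {}];
MapThread[Send[#1, Map[Sin, #2]] &, {kernels, chunks}];
results = Join @@ Map[Receive, kernels];

Each kernel gets one chunk and one round trip, instead of one round trip per element.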
