MathGroup Archive 2003

Re: Re: Parallel Kit Question: ParallelDot is much more slow than Dot

  • To: mathgroup at smc.vnet.net
  • Subject: [mg40511] Re: Re: Parallel Kit Question: ParallelDot is much more slow than Dot
  • From: "Michal Kvasnicka" <michal.kvasnicka at NoSpam.quick.cz>
  • Date: Wed, 9 Apr 2003 01:29:47 -0400 (EDT)
  • Sender: owner-wri-mathgroup at wolfram.com

Yes, yes and once again yes!!!
Bobby simply and clearly expresses the basic dilemma a scientist must solve
before choosing the right tool to get an answer to his question in an
acceptable time.

What is better:
- spend a few months writing suitable, efficient and reliable C or Fortran
(with MPI!!!) code, and then get results in minutes? or
- write the same algorithm in Mathematica, etc., in hours or days, and then
get results in days or weeks?

Of course, the situation is completely different for routine, long-running
number-crunching computations, but I think that case is outside the scope of
this discussion.

Michal
"Dr Bob" <majort at cox-internet.com> pí¹e v diskusním pøíspìvku
news:b6tt37$n28$1 at smc.vnet.net...
> >> It is a bit strange to use an interpreter like Mathematica for high
> >> performance computing
>
> True, but it's just a matter of time.  It will happen, more and more.
>
> At http://www.aceshardware.com/read.jsp?id=50000333, somebody says the 3.06
> GHz Pentium might reach 12 gigaflops, and Cray reached 16 gigaflops in
> 1991.  Look at the Alienware Area 51 machine highlighted in the recent
> Computer Power User -- DUAL 3.06 GHz chips with hyperthreading and ALL the
> extras, for $3,800.  How much did the fastest Cray cost in 1991?  A
> thousand times that much?  Ten thousand times that much?  (I don't know.)
>
> I'm guessing there's a 12 year lag between supercomputer performance and my
> desktop, and it's well worth the wait.
>
> Sure, there are people who "need" that extra power today, not 12 years from
> now.  But, even 2 years ago, nobody needed it because nobody could get it,
> and the world didn't exactly implode, as I recall.
>
> Anyway, it's nice having the equivalent of a 1990 Cray on my desk at home,
> for only $3,000.
>
> As for the "interpreter" issue itself, high-level languages are WORTH the
> performance penalty.
>
> C code with a lousy algorithm is slower than Mathematica code with a good
> algorithm.  If both use the right algorithm (and how likely is that?), the
> C code may be twice as fast, but that just adds 18 months to the time lag.
> Again, the wait is well worth it, to avoid writing C (rather than thinking
> about math) until the brain cells burn out.
>
> If C code is really a hundred times faster, so I have to wait longer for
> increases in processor speed to wipe out the difference, all the better!
> It's even more time that I could have wasted on a low-level language, but
> didn't, thank goodness.
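
For concreteness: Bobby's "18 months" figure follows from the usual rule of
thumb that commodity performance doubles roughly every 18 months. A quick
sketch of that arithmetic in Mathematica (the numbers are illustrative
assumptions, not measurements):

    doublingTime = 18;                       (* months; rule-of-thumb assumption *)
    extraLag[speedRatio_] := doublingTime*Log[2, speedRatio]

    extraLag[2]         (* 18 months if C is twice as fast          *)
    extraLag[100] // N  (* roughly 120 months, ~10 years, if 100x   *)

So a constant speed ratio between C and Mathematica only shifts the waiting
time by a fixed amount.
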
>
> Bobby
>
> On Mon, 7 Apr 2003 04:56:24 -0400 (EDT), Jens-Peer Kuska
> <kuska at informatik.uni-leipzig.de> wrote:
>
> > Hi,
> >
> > I usually use MPI on a Cray or on an SGI cluster.
> > It is a bit strange to use an interpreter like Mathematica
> > for high performance computing ...
> >
> > Regards
> > Jens
> >
> > nafod40 wrote:
> >>
> >> Jens-Peer Kuska wrote:
> >> > Hi,
> >> >
> >> > parallel commands are usually slower than serial ones,
> >> > because you have the overhead for process communication.
> >>
> >> Have you used the Parallel Toolkit? My experience shows there are some
> >> gross inefficiencies in their implementation of the ParallelMap[ ]
> >> functions.
> >>
> >> In general, the toolkit is useful if you can decompose your problem
> >> coarsely. Don't use the ParallelMap[ ] function or similar; develop your
> >> own analogs based on RemoteEvaluate[ ].
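
For concreteness, here is a minimal sketch of the coarse decomposition nafod40
describes: split a Dot into one row block per remote kernel and ship each block
with RemoteEvaluate[ ]. It assumes the Parallel Toolkit is loaded and the
remote kernels are already launched; kernels, localA and localB are names made
up for this illustration, and With[ ] is used to insert the local matrices into
the expressions before they are sent to the remote kernels.

    (* sketch only: assumes "kernels" holds the remote kernel objects
       obtained when the kernels were launched *)
    n  = 1200;
    a  = Table[Random[], {n}, {n}];
    b  = Table[Random[], {n}, {n}];
    nk = Length[kernels];

    (* one contiguous row block of a per kernel *)
    blocks = Table[Take[a, {Floor[(i - 1) n/nk] + 1, Floor[i n/nk]}], {i, nk}];

    (* step 1: give kernel i its block and a full copy of b, once; the second
       argument of RemoteEvaluate is assumed to target that single kernel *)
    Do[
      With[{blk = blocks[[i]], m = b},
        RemoteEvaluate[(localA = blk; localB = m;), kernels[[i]]]],
      {i, nk}];

    (* step 2: evaluate the partial products on all remote kernels and
       collect them (assumed to come back in kernel order) *)
    partial = RemoteEvaluate[localA . localB];

    result = Join @@ partial;   (* should equal a . b *)

The point is that each kernel gets a few large messages and one large chunk of
work, instead of the many small messages per element that make ParallelDot and
ParallelMap[ ] so much slower than plain Dot.
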
> >
> >
>
>
>
> --
> majort at cox-internet.com
> Bobby R. Treat
>
>



