MathGroup Archive 2010

Comments on CUDA and Compile

  • To: mathgroup at smc.vnet.net
  • Subject: [mg114051] Comments on CUDA and Compile
  • From: Mark McClure <mcmcclur at unca.edu>
  • Date: Mon, 22 Nov 2010 07:36:48 -0500 (EST)

There's been a tremendous amount of hype and excitement concerning
CUDA on this list.  While this is certainly well deserved, I think it
has overshadowed the tremendous advances in V8's Compile command.
In my experiments, compiled code in V8 (using CompilationTarget ->
"C") can run 10 to 20 times faster than compiled code in V7.  Code
written entirely in Mathematica can now closely approach the speed of
pure C.  I set up a
little blog post demonstrating this:
http://facstaff.unca.edu/mcmcclur/blog/CompileForComplexDynamics.html
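
To give the flavor of what Compile can do in V8, here is a minimal
sketch (not the code from the blog post) of an escape-time iteration
for the Mandelbrot set, compiled to C and made listable and parallel:

mandel = Compile[{{c, _Complex}},
   Module[{z = 0. + 0. I, n = 0},
    While[Abs[z] < 2. && n < 100, z = z*z + c; n++];
    n],
   CompilationTarget -> "C",
   RuntimeAttributes -> {Listable}, Parallelization -> True];

(* time it over a grid of complex numbers *)
AbsoluteTiming[
 mandel[Table[x + I y, {y, -1.5, 1.5, 0.005}, {x, -2., 1., 0.005}]];]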

Consider the following limitations of GPU programming (CUDA or OpenCL):
  * Anyone you want to share your code with must also have a
CUDA-enabled system,
  * CUDA works via massive parallelization, but not all problems
parallelize well,
  * CUDA works in single precision (except on the highest-end GPUs),
  * Unless you just want to use a few of the built-in CUDA* commands,
you really need to write at least snippets of CUDA code to get the
full benefit (see the sketch after this list).
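
As an illustration of that last point, using your own kernel (rather
than a built-in CUDA* command) means writing a snippet of CUDA C and
loading it with CUDAFunctionLoad.  A minimal sketch, closely modeled
on the CUDALink documentation (the kernel here is just a toy example):

Needs["CUDALink`"]
kernel = "
  __global__ void addTwo(mint *arr, mint len) {
    int i = threadIdx.x + blockIdx.x*blockDim.x;
    if (i < len) arr[i] += 2;
  }";
addTwo = CUDAFunctionLoad[kernel, "addTwo",
   {{_Integer, _, "InputOutput"}, _Integer}, 256];
addTwo[Range[10], 10]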

Using Compile, by contrast:
  * Is much easier to share with others (a C compiler is the only
additional tool needed),
  * Works well in serial or in parallel,
  * Works in double precision,
  * Allows you to program directly in Mathematica.

Of course, these limitations of GPU programming are likely to ease.
Within a few years, more people will have GPUs that run in
double precision.  Also, CUDA and/or OpenCL will be more tightly
integrated into Mathematica.  I notice that there are already several
functions of the form SymbolicCUDA*, as well as SymbolicOpenCL*.  This
suggests that work has already been done to set up an abstraction
layer easing translation from Mathematica to CUDA or OpenCL.  Thus, I
wonder whether it really makes sense for folks to learn to program in
CUDA for V8 when V9 is likely to offer tighter integration.  My guess
is that there will ultimately be a CompilationTarget -> "CUDA" option
for Compile.


Nonetheless, I am a bit excited about CUDA myself and I can see why
others might be.  Thus, I thought I'd share the difficulties I had
getting CUDA running and how I overcame those difficulties.  These
observations apply directly to my 1.5-year-old MacBook Pro, but
similar issues are likely applicable to other systems.

As is well known, CUDA runs only on computers with an NVIDIA GPU.
There are a number of software requirements as well.  In particular,
if CUDAQ[] returns False even though you have an NVIDIA GPU, you might
still be able to get CUDA running by satisfying one of the following
requirements (a quick check is sketched after the list):
  * A recent copy of OS X (at least 10.6.3 is required, I think)
  * A C compiler (included with Xcode on the Macintosh)
  * An NVIDIA driver (free download from the NVIDIA site)
  * A CUDAResources Paclet
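
For reference, the setup can be checked from within Mathematica using
CUDALink itself; a minimal sketch (CUDAQ and CUDAInformation are
CUDALink functions):

Needs["CUDALink`"]
CUDAQ[]            (* True only if CUDA is ready to use *)
CUDAInformation[]  (* details about the detected GPU(s) *)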

I beta tested V8 and was never able to get CUDA running until getting
some assistance at the recent Wolfram Tech Conference.  The missing
ingredient was the CUDAResources Paclet, which one of the Wolfram
folks kindly installed manually for me.  When I received the final V8
release, however, CUDA was no longer working.  Evidently, the
CUDAResources Paclet that worked with the beta was not compatible with
the final version.  I was able to get CUDA working again by running
CUDAResourcesUninstall[] followed by CUDAResourcesInstall[].  I'm not
sure if this sequence would be required for someone just trying V8,
but it might be worth a try.
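
In code, the sequence that fixed things for me was essentially this
(CUDAResourcesUninstall and CUDAResourcesInstall live in CUDALink, so
the package must be loaded first):

Needs["CUDALink`"]
CUDAResourcesUninstall[]
CUDAResourcesInstall[]
CUDAQ[]  (* should now return True *)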

Mark McClure

