Re: Controlling # Processors Used by MathKernel?
- To: mathgroup at smc.vnet.net
- Subject: [mg125068] Re: Controlling # Processors Used by MathKernel?
- From: "Oleksandr Rasputinov" <oleksandr_rasputinov at ymail.com>
- Date: Mon, 20 Feb 2012 02:49:51 -0500 (EST)
- Delivered-to: firstname.lastname@example.org
- References: <email@example.com>
On Fri, 17 Feb 2012 11:25:49 -0000, Mark Suhovecky <suhovecky at nd.edu> wrote:

> We're running Mathematica 8.0.1 on Red Hat Enterprise Linux 5.7. This
> is an HPC cluster environment, so most folks run Mathematica scripts
> in batch using the format
>
> math -script example.m
>
> My users are not doing explicit parallel programming yet, but I've
> noticed that some Mathematica scripts, when run, take up all of the
> cores available on a machine, i.e. a top command will show MathKernel
> using 800% on an 8-core machine.
>
> Is it possible to tell Mathematica how many cores it's restricted to
> on a machine, rather than having it grab them all?
>
> I've tried adding
>
> Unprotect[$ProcessorCount]; $ProcessorCount = 2;
>
> to my init file, with no difference. I've also tried adding that to
> the start of the script, to no effect.
>
> If anyone's been able to do this, I'd appreciate knowing how.
>
> Thanks,
>
> Mark

The reason for this behaviour is threading within the Intel Math Kernel Library (MKL) used by Mathematica, so you will see it most readily in code that relies on numerical linear algebra or FFT routines. The number of threads can be adjusted using:

SetSystemOptions["MKLThreads" -> 1];

or (I have not tested this, but presumably it works) with the environment variable MKL_NUM_THREADS. Note, however, that Mathematica detects the number of available processor cores when it starts and uses this information to set the number of MKL threads automatically. If you are seeing overallocation of cores in your environment, it is likely that your queueing system is not set up correctly and is starting processes with erroneous affinity masks (or no affinity mask at all).

Best,

O. R.
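As a minimal sketch of the first approach, something like the following could go at the top of a batch script. The option name SetSystemOptions["MKLThreads" -> ...] is from the reply above; the value 2 and the SystemOptions inspection call are illustrative, and the exact option name may differ between Mathematica versions:

```mathematica
(* Inspect the MKL thread count the kernel chose at startup. *)
SystemOptions["MKLThreads"]

(* Cap MKL at two threads for the remainder of this script;
   this limits implicit threading in linear algebra and FFT
   routines, not explicit Parallel* functionality. *)
SetSystemOptions["MKLThreads" -> 2];
```

This only constrains the in-process MKL threading; it does not change the process's CPU affinity mask, which is the queueing system's job.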