MathGroup Archive 2011


Re: Compilation: Avoiding inlining

  • To: mathgroup at smc.vnet.net
  • Subject: [mg121891] Re: Compilation: Avoiding inlining
  • From: Oliver Ruebenkoenig <ruebenko at wolfram.com>
  • Date: Thu, 6 Oct 2011 04:20:25 -0400 (EDT)
  • Delivered-to: l-mathgroup@mail-archive0.wolfram.com
  • References: <201110050803.EAA07111@smc.vnet.net>

On Wed, 5 Oct 2011, Oleksandr Rasputinov wrote:

> On Tue, 04 Oct 2011 07:45:30 +0100, DmitryG <einschlag at gmail.com> wrote:
>
>> On Sep 27, 6:24 am, Oliver Ruebenkoenig <ruebe... at wolfram.com> wrote:
>>> On Sat, 24 Sep 2011, DmitryG wrote:
>>>
>>>> A potentially very important question: I have noticed that the program
>>>> we are discussing, when compiled in C, runs on both cores of my
>>>
>>> Only when compiled to C? You could try to set Parallelization->False
>>> and/or it might be that MKL runs some stuff in parallel.
>>>
>>> Try
>>>
>>> SetSystemOptions["MKLThreads" -> 1] and see if that helps.
>>>
>>>> processor. No parallelization options have been set, so what is it?
>>>> Automatic parallelization by the C compiler (I have Microsoft Visual
>>>> under Windows 7)?  Do you have this effect on your computer?
>>>
>>> I cannot test that since I use Linux/gcc.
>>>
>>>> However, the programs of a different type, such as my research
>>>> program, still run on one core of the processor. I don't see what
>>>> makes the compiled program run in different ways, because they are
>>>> written similarly.
>>>
>>> I understand that you'd want to compare the generated code with the
>>> handwritten code on the same number of threads, but I cannot resist
>>> pointing out that parallelization of the C++ code is something that
>>> needs to be developed, whereas parallelization via Mathematica comes
>>> at almost no additional cost.
>>>
>>> On a completely different note, here is another approach that could be
>>> taken.
>>>
>>> CCodeGenerator/tutorial/CodeGeneration
>>>
>>> Oliver
>>
>> This behavior is not new to me. Calculating matrix exponentials also
>> leads to a 100% processor usage on multiprocessor computers without
>> any parallelization. I have observed it on my Windows 7 laptop and on
>> a Mac Pro at work. The system monitor shows that only one Mathematica
>> kernel is working but the load of this kernel is much greater than
>> 100%, especially on the Mac Pro that has 8 cores. My laptop may switch
>> off (because of overheating?) during such calculations while the Mac
>> is OK.
>>
>> I have also seen such behavior when solving PDEs with NDSolve in some
>> cases.
>>
>> I wonder what is happening, and I do not know whether this effect is
>> good or bad. As I cannot control it, I cannot measure whether such
>> extensive processor usage leads to a speed-up.
>>
>> I am going to get Mathematica for Linux and test it there, too.
>>
>> Best,
>>
>> Dmitry
>>
>
> In the case of the matrix exponentials (or really any numerical linear
> algebra), this behaviour is undoubtedly due to MKL threading and can be
> controlled by the option Oliver gives above. Obviously it is not good if
> your laptop switches off due to overheating, but this is not so much a
> problem with Mathematica as with badly designed cooling in the laptop. MKL's
> threading is carefully done and scales well for moderate numbers of cores,
> so you should be seeing considerably increased performance as a result of
> it on an 8-core machine. In regard to NDSolve, I don't know how this is
> implemented internally and so can't comment on any parallelization that
> might exist.
>
>

Exactly the same - LinearSolve and friends call MKL routines.
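[Editor's note: the MKL thread control suggested earlier in the thread can be tried directly. A minimal sketch, assuming a Mathematica version in which the "MKLThreads" system option exists (as it did at the time of this thread); the matrix size is arbitrary:]

```
(* Dense linear algebra such as LinearSolve, Dot, and MatrixExp is
   dispatched to MKL, which threads automatically.  Restricting MKL
   to one thread pins these operations to a single core. *)
SystemOptions["MKLThreads"]           (* inspect the current setting *)
SetSystemOptions["MKLThreads" -> 1];  (* force single-threaded MKL *)

m = RandomReal[1, {2000, 2000}];
b = RandomReal[1, 2000];
First[AbsoluteTiming[LinearSolve[m, b]]]
```

Comparing the timing before and after the SetSystemOptions call indicates how much of the multi-core load (and speed-up) was due to MKL threading.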

Oliver
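[Editor's note: the Parallelization -> False suggestion quoted above applies to Listable compiled functions. A minimal sketch; the function body is a hypothetical stand-in, not code from this thread:]

```
(* A Listable compiled function may evaluate over a list argument using
   multiple threads; Parallelization -> False disables that, so any
   remaining multi-core load must come from elsewhere (e.g. MKL). *)
cf = Compile[{{x, _Real}}, Sin[x] + x^2,
   RuntimeAttributes -> {Listable},
   Parallelization -> False];

cf[Range[0., 10., 0.01]]  (* evaluated on a single thread *)
```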


