Re: ParallelDo and C-compiled routines
*To*: mathgroup at smc.vnet.net
*Subject*: [mg121796] Re: ParallelDo and C-compiled routines
*From*: Patrick Scheibe <pscheibe at trm.uni-leipzig.de>
*Date*: Mon, 3 Oct 2011 04:20:27 -0400 (EDT)
*Delivered-to*: l-mathgroup@mail-archive0.wolfram.com
*References*: <j5ug1a$7r5$1@smc.vnet.net> <201109290605.CAA22485@smc.vnet.net>
If you are a bit experienced with C/C++, it is really easy. In the
simplest case, the only thing you do is write a small (10 lines of
code) wrapper for your C++ function.
In this wrapper function you:
1. catch the numeric parameters for your function call from Mathematica
2. call your C++ function
3. return the result to Mathematica
The C-source-file where you put this function looks like the example
which can be found here:
Needs["CCompilerDriver`"];
demoFile =
FileNameJoin[ {$CCompilerDirectory, "SystemFiles", "CSource",
"createDLL_demo.c"}];
FilePrint[demoFile]
The important part is the incrementInteger function. The other
initialize, uninitialize and version functions can just be copied for
the moment.
In the incrementInteger function you can see that parameters are caught
from Mathematica the same way as you would catch them in your main for
command-line parameters: you have an argument count and a list of args.
The line I1=I0+1 is where you would put your C++ function call; the
result can then be stored in Res.
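Putting the three steps together, such a wrapper might look like the
following sketch. It is modeled on the createDLL_demo.c example above;
my_cpp_function is a hypothetical placeholder for your own routine, and
the file only compiles against the WolframLibrary.h header that ships
with Mathematica:

```c
/* Sketch of a minimal LibraryLink wrapper (modeled on createDLL_demo.c).
   Requires Mathematica's WolframLibrary.h; my_cpp_function is a
   placeholder for your own C/C++ routine. */
#include "WolframLibrary.h"

extern mint my_cpp_function(mint x);  /* hypothetical: your real code */

/* Boilerplate: copy from the demo file. */
DLLEXPORT mint WolframLibrary_getVersion(void) {
    return WolframLibraryVersion;
}

DLLEXPORT int WolframLibrary_initialize(WolframLibraryData libData) {
    return LIBRARY_NO_ERROR;
}

DLLEXPORT void WolframLibrary_uninitialize(WolframLibraryData libData) {
    return;
}

/* The wrapper itself. */
DLLEXPORT int incrementInteger(WolframLibraryData libData, mint Argc,
                               MArgument *Args, MArgument Res) {
    mint I0 = MArgument_getInteger(Args[0]);  /* 1. catch the parameter */
    mint I1 = my_cpp_function(I0);            /* 2. call your function  */
    MArgument_setInteger(Res, I1);            /* 3. return the result   */
    return LIBRARY_NO_ERROR;
}
```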
Once you have written this wrapper for your function, you compile it
together with your other source code with CreateLibrary (you could of
course use a Makefile too). Please note that CreateLibrary has a host
of additional options: you can add defines, link other libraries, print
the command line, print the output of the compiler, and more.
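As a sketch, compiling the wrapper together with your own sources might
look like this; the file names, include path and library names are made
up for illustration, while the options shown are documented
CreateLibrary options:

```mathematica
Needs["CCompilerDriver`"];
lib = CreateLibrary[{"wrapper.c", "solver.cpp"}, "solver",
  "Defines" -> {"NDEBUG"},                    (* add preprocessor defines *)
  "IncludeDirectories" -> {"/path/to/headers"},
  "Libraries" -> {"m"},                       (* link other libraries *)
  "ShellCommandFunction" -> Print,            (* print the command line *)
  "ShellOutputFunction" -> Print]             (* print the compiler output *)
```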
Once you have successfully created your dll|so|dylib, you can load the
library function with something like
fun = LibraryFunctionLoad[lib, "incrementInteger", {Integer}, Integer]
Here you define what kind of input parameters this function takes and
what type of result it has.
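For the incrementInteger demo this could look like the following; since
the demo function just adds one, fun[41] should give 42, and
LibraryFunctionUnload releases the function again when you are done:

```mathematica
fun = LibraryFunctionLoad[lib, "incrementInteger", {Integer}, Integer];
fun[41]  (* gives 42 *)
LibraryFunctionUnload[fun]
```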
There are several important sources of documentation. The first one,
LibraryLink/tutorial/Overview
is the tutorial for "Wolfram LibraryLink". There you find what you
need to know about how arguments are passed to the library, error
handling, types and library functions. Once you have read this, you
have all the knowledge you need to create your own lib.
If you want to compile the library from within Mathematica, you should
read
CCompilerDriver/tutorial/Overview
Everything you need to know about how libs are found, how you load
functions from them, and the LibraryLink data types is collected here:
guide/LibraryLink
Hope this helps to get you started.
Cheers
Patrick
On Sun, 2011-10-02 at 09:49 -0300, Gabriel Landi wrote:
> Yes, you are absolutely right Patrick.
>
> The only thing I disagree with is the "as easily available".
> I have a code that solves stochastic differential equations and I have tried to implement it in Mathematica.
> So far I have had a pretty hard time. Can you point me in the right direction? It is written in C++.
> Is there an easy way to add the wolfram libraries so that it can be accessed inside Mathematica?
>
> Thanks in advance,
>
> Gabriel
>
>
> On Oct 2, 2011, at 6:43 AM, Patrick Scheibe wrote:
>
> > The communication overhead is beyond good and evil. I assumed someone
> > who asked a question about ParallelDo is kind of concerned about
> > *speed*:
> >
> > In[19]:= First@
> > AbsoluteTiming@
> > Table[ReadList["! tmp/square.exe " <> ToString[i], Number], {i,
> > 10000}]
> >
> > Out[19]= 48.957941
> >
> > the equivalent library function:
> >
> > In[18]:= First@AbsoluteTiming@Table[cfun, {i, 10000}]
> >
> > Out[18]= 0.005934
> >
> > I admit that a function like "square" is a bit too short for a
> > "longRoutine", but why use this kind of communication when the faster
> > solution is as easily available?
> > Maybe sometimes the "old school" is for some purposes just outdated.
> >
> > Cheers
> > Patrick
> >
> > On Sun, 2011-10-02 at 02:36 -0400, Gabriel Landi wrote:
> >> You can always try the 'old school' way. Use Mathematica as a command prompt:
> >>
> >> ParallelDo[result[i] =ReadList[ "! ./c_code.exe", Number], {i,number}]
> >>
> >> Works perfectly.
> >>
> >>
> >>
> >>
> >> On Sep 30, 2011, at 5:03 AM, Patrick Scheibe wrote:
> >>
> >>> On Thu, 2011-09-29 at 02:05 -0400, DmitryG wrote:
> >>>> On Sep 28, 2:49 am, DmitryG <einsch... at gmail.com> wrote:
> >>>>> Hi All,
> >>>>>
> >>>>> I am going to run several instances of a long calculation on different
> >>>>> cores of my computer and then average the results. The program looks
> >>>>> like this:
> >>>>>
> >>>>> SetSharedVariable[Res];
> >>>>> ParallelDo[
> >>>>> Res[[iKer]] = LongRoutine;
> >>>>> , {iKer, 1, NKer}]
> >>>>>
> >>>>> LongRoutine is compiled. When compiled in C, it is two times faster
> >>>>> than when compiled in Mathematica. In the case of a Do cycle, this
> >>>>> speed difference can be seen, However, in the case of ParallelDo I
> >>>>> have the speed of the Mathematica-compiled routine independently of
> >>>>> the CompilationTarget in LongRoutine, even if I set NKer=1.
> >>>>>
> >>>>> What does it mean? Are routines compiled in C incapable of parallel
> >>>>> computing? Or is there a magic option to make them work? I tried
> >>>>> Parallelization->True but there is no result, and it seems this option
> >>>>> is for applying the routine to lists.
> >>>>>
> >>>>> Here is an example:
> >>>>> ************************************************************
> >>>>> NKer = 1;
> >>>>>
> >>>>> (* Subroutine compiled in Mathematica *)
> >>>>> m = Compile[ {{x, _Real}, {n, _Integer}},
> >>>>> Module[ {sum, inc}, sum = 1.0; inc = 1.0;
> >>>>> Do[inc = inc*x/i; sum = sum + inc, {i, n}]; sum]];
> >>>>>
> >>>>> (* Subroutine compiled in C *)
> >>>>> c = Compile[ {{x, _Real}, {n, _Integer}},
> >>>>> Module[ {sum, inc}, sum = 1.0; inc = 1.0;
> >>>>> Do[inc = inc*x/i; sum = sum + inc, {i, n}]; sum],
> >>>>> CompilationTarget -> "C"];
> >>>>>
> >>>>> (* There is a difference between Mathematica and C *)
> >>>>> Do[
> >>>>> Print[AbsoluteTiming[m[1.5, 10000000]][[1]]];
> >>>>> Print[AbsoluteTiming[c[1.5, 10000000]][[1]]];
> >>>>> , {iKer, 1, NKer}]
> >>>>> Print[];
> >>>>>
> >>>>> (* With ParallelDo there is no difference *)
> >>>>> ParallelDo[
> >>>>> Print[AbsoluteTiming[m[1.5, 10000000]][[1]]];
> >>>>> Print[AbsoluteTiming[c[1.5, 10000000]][[1]]];
> >>>>> , {iKer, 1, NKer}]
> >>>>> **************************************************************
> >>>>>
> >>>>> Any help?
> >>>>>
> >>>>> Best,
> >>>>>
> >>>>> Dmitry
> >>>>
> >>>> My theory is the following. C compiler creates an executable that is
> >>>> saved somewhere on the hard drive and then run by Mathematica Kernel.
> >>>> Windows may not allow different applications (such as different
> >>>> Mathematica kernels in parallel computation) to access a file at the
> >>>> same time.
> >>>>
> >>>> If this is true, the solution would be to create copies of this
> >>>> executable on the hard drive, so that each kernel could run its copy.
> >>>>
> >>>> Dmitry
> >>>>
> >>>
> >>> No, not exactly. The compiler creates a library which is a dll in your
> >>> (Microsoft Windows) case or a shared object on Linux or a dylib on
> >>> MacOSX.
> >>>
> >>> When you compile a function into "C", then a library is created and the
> >>> library function of this dll|so|dylib is accessed when you call the
> >>> compiled function in your Mathematica session.
> >>>
> >>> On my Linux box these created C-libraries are stored in my
> >>> $UserBaseDirectory under
> >>>
> >>> $UserBaseDirectory/ApplicationData/CCompilerDriver/BuildFolder
> >>>
> >>> and then every unique MathKernel (with which you compile the function)
> >>> gets its own subdirectory. This means, if my currently running
> >>> MathKernel has a process id of, say, 2088, I get a subdirectory
> >>>
> >>> warp-2088
> >>>
> >>> under the above-mentioned folder. "warp" is here the name of my machine.
> >>> This information is available in your "CompiledFunction" object too.
> >>> Look at
> >>>
> >>> c // InputForm
> >>>
> >>> of your function and notice how Oleksandr shows in his mail how to
> >>> access this information to load the compiled function separately for
> >>> each kernel.
> >>>
> >>> Besides the explanation of Oleksandr, which describes your behavior in
> >>> detail, I just want to add that you don't have to recompile a function
> >>> every time you restart the kernel. You could use LibraryGenerate to
> >>> create a library which is permanently available (it seems that the
> >>> libraries created with Compile[...,CompilationTarget->"C"] are deleted
> >>> when the kernel quits). So with your MVM CompiledFunction you could
> >>> create your lib with:
> >>>
> >>> << CCodeGenerator`
> >>>
> >>> m = Compile[{{x, _Real}, {n, _Integer}},
> >>> Module[{sum, inc}, sum = 1.0; inc = 1.0;
> >>> Do[inc = inc*x/i; sum = sum + inc, {i, n}]; sum]];
> >>> LibraryGenerate[m, "longRoutine"]
> >>>
> >>>
> >>> loadLib[] :=
> >>> LibraryFunctionLoad["longRoutine",
> >>> "longRoutine", {{Real, 0, "Constant"}, {Integer, 0, "Constant"}},
> >>> Real] ;
> >>>
> >>> brandNewC = loadLib[];
> >>> NKer = 1;
> >>> ParallelDo[
> >>> brandNewC = loadLib[];
> >>> Print[AbsoluteTiming[brandNewC[1.5, 10000000]]],
> >>> {iKer, 1, NKer}
> >>> ]
> >>>
> >>>
> >>> Cheers
> >>> Patrick
> >>>
> >>>
> >>
> >>
> >
> >
>