Re: Re: Permanent Computation Efficiency
- To: mathgroup at smc.vnet.net
- Subject: [mg105458] Re: [mg105436] Re: Permanent Computation Efficiency
- From: Daniel Lichtblau <danl at wolfram.com>
- Date: Thu, 3 Dec 2009 06:15:53 -0500 (EST)
- References: <200911181200.HAA04436@smc.vnet.net> <he372o$err$1@smc.vnet.net> <200912021128.GAA26826@smc.vnet.net>
g.resta at iit.cnr.it wrote:
> The best algorithm known for the computation of the permanent
> in the general case is the Ryser algorithm, which has a cost of
> O(n 2^n) or O(n^2 2^n) operations, depending on the implementation.
>
> http://mathworld.wolfram.com/RyserFormula.html
>
> It is quite simple to implement and it is much faster than the
> recursive algorithm you are discussing, unless the matrix is so
> sparse that there are relatively few submatrices whose permanents
> have to be computed.
>
> Several years ago I used it for computing the permanent of 0-1
> matrices, but I implemented it in C, where some binary tricks
> can be used.
>
> On current hardware, the computation of the permanent of a random
> NxN 0-1 matrix (with about one half of the entries equal to 1)
> takes less than 0.01 sec. for N=16 and about 1 second for N=24.
> I do not know how efficient Mathematica can be in comparison.
>
> giovanni resta
> --
> http://anagram.it : anagrams, alphametics, arithmogriphs,...

I think the memoized version of the recursive method also shows
O(n*2^n) complexity for the class of inputs you indicate. Below is an
experiment to this effect.

In[15]:= permanent2[m_] /; Length[m] == 1 := m[[1, 1]]

permanent2[m_] := permanent2[m] =
  With[{mp = Drop[m, None, 1]},
   Apply[Plus,
    Table[m[[j, 1]]*permanent2[Drop[mp, {j}]], {j, Length[m]}]]]

We time examples from dimensions 8 through 20.

In[3]:= data =
 Table[Timing[permanent2[RandomInteger[1, {n, n}]]], {n, 8, 20, 2}]

Out[3]= {{0.004999, 12}, {0.029996, 2062}, {0.12898, 8746},
 {0.609907, 1823066}, {2.77258, 194056418}, {13.9019, 6047256830},
 {63.5183, 347782811770}}

Now fit to the logs of the timings. This may or may not be an ideal
thing to do; I did it in order to decrease the potential for dominance
by the larger terms.

In[4]:= timings = data[[All, 1]];
ldata = Log[2, timings];
ld2 = Transpose[{Range[8, 20, 2], ldata}];
ff = FindFit[ld2, n + Log[2, n] + b, b, n]

Out[7]= {b -> -18.4743}

We check the resulting fit.

In[8]:= timings - Table[2^(n + Log[2, n] + b) /. ff, {n, 8, 20, 2}]

Out[8]= {-0.000624668, 0.00187766, -0.00598802, -0.0199438,
 -0.10674, 0.944957, 5.93199}

Seems not too bad, as fits go. So the absolute speed appears to be
about two orders of magnitude slower than what you were able to do in
C, but the overall complexity looks to be as advertised. Moreover, if
I let the power of n be a parameter to be fitted, I typically get
values just a hair larger than 1.

I also coded the Ryser formula. I was able to get a version that was
on speaking terms with Compile. It is still slower than the above,
though not by too much. It also seems to show roughly the right
complexity, and it has the added benefit of not running up the memory
tab. I will mention that I did not try the Gray code optimization, so
the expected complexity in this case is O(n^2*2^n).

permanent3[m_] := With[{len = Length[m]},
  (-1)^len*Module[{s, t},
    Sum[
     s = IntegerDigits[n, 2, len];
     t = Total[s];
     (-1)^t*
      Product[Sum[m[[i, j]], {j, Flatten[Position[s, 1]]}], {i, len}],
     {n, 2^len - 1}]
    ]]

The timings below use permanent3C, the Compile'd counterpart of this
definition.

In[46]:= data3 =
 Table[Timing[permanent3C[RandomInteger[1, {n, n}]]], {n, 8, 18, 2}]

Out[46]= {{0.009998, 125.}, {0.057991, 4066.}, {0.293956, 43379.},
 {1.60576, 2.62758*10^6}, {8.77567, 7.19182*10^7},
 {44.9152, 5.06598*10^10}}

My tentative conclusions are (i) the complexity seems to be as
claimed, and (ii) I did not find a particularly fast way to compute
this in Mathematica.

Daniel Lichtblau
Wolfram Research
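For reference, the Ryser formula that permanent3 implements is

  perm(A) = (-1)^n Sum_{S subset of {1,...,n}}
            (-1)^|S| Prod_{i=1}^n Sum_{j in S} a[[i, j]]

(the empty subset contributes 0, so the loop above starts at 1).
The definition of permanent3C is not shown, so the following is only a
rough sketch of how a Compile-friendly Ryser routine might look; the
name permanentRyserC and the explicit bit-extraction loop are
illustrative choices, and this version was not used for the timings
reported above.

permanentRyserC = Compile[{{m, _Integer, 2}},
   Module[{len = Length[m], tot = 0, sign = 1, prod = 1, rowsum = 0,
     k = 0, t = 0, nmax = 1, bits = Table[0, {Length[m]}]},
    Do[nmax *= 2, {len}];                  (* nmax = 2^len *)
    Do[
     (* bits of the subset index n select a subset of the columns *)
     k = n; t = 0;
     Do[bits[[j]] = Mod[k, 2]; t += bits[[j]]; k = Quotient[k, 2],
      {j, len}];
     sign = 1 - 2*Mod[t, 2];               (* (-1)^|S| *)
     prod = 1;
     Do[
      rowsum = 0;
      Do[rowsum += bits[[j]]*m[[i, j]], {j, len}];
      prod *= rowsum,
      {i, len}];
     tot += sign*prod,
     {n, 1, nmax - 1}];
    If[Mod[len, 2] == 0, tot, -tot]        (* overall (-1)^len factor *)
    ]];

It works in machine integers throughout, so it will overflow once the
permanent exceeds roughly 9*10^18; within that range it should agree
with permanent2, e.g.

mat = RandomInteger[1, {12, 12}];
{permanent2[mat], permanentRyserC[mat]}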
- References:
  - Re: Permanent Computation Efficiency
    - From: "g.resta@iit.cnr.it" <g.resta@iit.cnr.it>
  - Re: Permanent Computation Efficiency