MathGroup Archive 2005


Re: Re: Maximising a sum of matrix elements


Daniel Lichtblau wrote:
> Colin Pratt wrote:
> 
>>Hi
>>
>>Could anyone suggest how I might use Mathematica to tackle the following?
>>
>>Given an n x n matrix with positive entries, find the maximum sum involving one 
>>and only one entry from each row and column. Is there a simple way to 
>>enumerate the n! possible sums?
>>
>>Regards
>>Colin
> 
> 
> I'd not recommend that approach as it can be quite slow for more than 10 
> or so rows.
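
(For what it's worth, the brute-force enumeration is easy enough to write; a quick 
sketch, with a name of my own choosing, is

bruteForceMax[mat_] := Module[{n = Length[mat]},
   (* form the sum mat[[i, p[[i]]]] for every permutation p and keep the largest *)
   Max[Map[Sum[mat[[i, #[[i]]]], {i, n}] &, Permutations[Range[n]]]]
   ]

It walks through all n! permutations, which is exactly why it bogs down beyond ten 
or so rows.)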
> 
> This is an example of the Assignment Problem from network programming, 
> discussed e.g. in Schrijver's "Theory of Linear and Integer Programming" 
> or Chvatal's "Linear Programming".
> 
> Basically one sets it up as an integer linear program to maximize the 
> sum over {i,j} of x[i,j]*mat[i,j] where all x[i,j] are 0 or 1, for each 
> fixed i we have Sum[x[i,j],{j,n}]==1, and likewise for each fixed j we 
> have Sum[x[i,j],{i,n}]==1.
> 
> The nice feature is that one may solve the relaxed linear program 
> ignoring integrality constraints (that is, just constrain 0<=x[i,j]<=1), 
> and obtain a solution to the problem at hand. This follows from the 
> observation that our matrix of x[i,j]s is doubly stochastic (row sums 
> and column sums are all 1, all elements nonnegative), and permutation 
> matrices are the extremal "points" in the space of such matrices 
> (Birkhoff-von Neumann theorem).
> 
> Here is simple code for the problem at hand.
> 
> maxPermutationSum[mat_] := Module[
>    {n=Length[mat], vars, x, c1, c2, c3, constraints, fvars, maxsum, res},
>    vars = Array[x,{n,n}];
>    fvars = Flatten[vars];
>    c1 = Thread[Apply[Plus,vars,{1}]==1];              (* row sums are 1 *)
>    c2 = Thread[Apply[Plus,Transpose[vars],{1}]==1];   (* column sums are 1 *)
>    c3 = Map[0<=#<=1&, fvars];                         (* relaxed 0-1 bounds *)
>    constraints = Join[c1,c2,c3];
>    {maxsum,res} =
>      NMaximize[{Apply[Plus,Flatten[vars*mat]],constraints}, fvars];
>    Round[{maxsum, vars/.res}]
>    ]
> 
> Quick example:
> 
> mat[dim_,max_] := Table[Random[Integer,max], {dim}, {dim}]
> 
> In[30]:= InputForm[mm = mat[10,20]]
> Out[30]//InputForm=
> {{12, 6, 18, 6, 11, 8, 8, 4, 10, 12},
>   {8, 14, 3, 7, 18, 20, 10, 7, 17, 19},
>   {7, 7, 6, 19, 3, 19, 1, 0, 12, 18},
>   {3, 10, 3, 0, 15, 3, 16, 1, 19, 3},
>   {9, 14, 20, 14, 3, 8, 11, 1, 7, 12},
>   {11, 10, 9, 2, 11, 14, 2, 2, 18, 20},
>   {17, 11, 18, 9, 2, 0, 18, 11, 14, 16},
>   {20, 2, 16, 3, 14, 7, 13, 13, 7, 16},
>   {15, 7, 15, 2, 3, 19, 11, 7, 13, 8},
>   {13, 16, 3, 15, 8, 18, 4, 14, 16, 16}}
> 
> In[33]:= InputForm[Timing[{m1,p1} = maxPermutationSum[mm]]]
> Out[33]//InputForm=
> {0.21000000000000568*Second, {179, {{0, 0, 1, 0, 0, 0, 0, 0, 0, 0},
>     {0, 0, 0, 0, 1, 0, 0, 0, 0, 0}, {0, 0, 0, 1, 0, 0, 0, 0, 0, 0},
>     {0, 0, 0, 0, 0, 0, 0, 0, 1, 0}, {0, 1, 0, 0, 0, 0, 0, 0, 0, 0},
>     {0, 0, 0, 0, 0, 0, 0, 0, 0, 1}, {0, 0, 0, 0, 0, 0, 1, 0, 0, 0},
>     {1, 0, 0, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0},
>     {0, 0, 0, 0, 0, 0, 0, 1, 0, 0}}}}
> 
> 
> Daniel Lichtblau
> Wolfram Research

I neglected some finer points, both theoretical and practical.

First, the theory does not guarantee that solving a relaxed LP problem 
will give a solution to the ILP (integer linear program). This is 
because there may be a tie, in which case LP is not guaranteed to find 
an extremal solution. I'll come back to this in a moment.

The main practical issue I neglected to address is that the default LP 
method is not optimally fast for this class of problems. Let's look at 
the 20x20 case.

In[4]:= mm = mat[20,20];

In[5]:= Timing[{m1,p1} = maxPermutationSum[mm];]
Out[5]= {3.80842 Second, Null}

If we use the interior point method we get substantial improvement.

In[6]:= SetOptions[LinearProgramming,Method->InteriorPoint];

In[7]:= Timing[{m2,p2} = maxPermutationSum[mm];]
Out[7]= {0.064991 Second, Null}

In[8]:= m1
Out[8]= 378

In[9]:= m1===m2
Out[9]= True

It is interesting to observe that the solution was not unique in this 
case, and in fact the two methods found different permutations.

In[10]:= p1===p2
Out[10]= False

One moral to glean from this is that even when the LP has multiple 
optimal points, the methods seem to find solutions at extremal ones. For 
the simplex method this is no surprise, because it works with extremal 
points. I do not know why the interior point method is finding an 
extremal solution.
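
One can at least test whether a given relaxed solution is essentially 0-1 before 
the final Round is applied. A small helper along these lines (the name and the 
tolerance are my own choices) suffices:

integralQ[vals_, tol_:10^-6] :=
   Apply[And, Map[Abs[# - Round[#]] <= tol &, Flatten[vals]]]

Applied to vars/.res inside maxPermutationSum, a result of False would indicate 
that the solver stopped at a fractional, hence non-extremal, optimum.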

Were the LP not finding extremal points, the next thing to try would be a 
branch-and-bound loop to enforce the 0-1 conditions. This can be done 
with a modicum of code (I know this because that's how I first did it, 
before I realized the relaxed LP solutions would also solve the ILP). 
Another potential problem is that machine arithmetic is used behind the 
scenes. This can in particular be an issue if the matrix entries are 
large (or approximate), so that the LP code perceives "ties" due to 
truncation error. One can attempt to control for this using the 
Tolerance option of LinearProgramming.
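
For anyone who wants to experiment with that route, a bare-bones branch-and-bound 
might look like the following. This is only a rough sketch: relaxedMax reuses the 
NMaximize formulation above with some variables pinned by equality constraints, 
and bbMax branches on a fractional variable, using the LP value as the bound for 
pruning.

relaxedMax[mat_, fixed_List] := Module[
   {n = Length[mat], vars, x, c1, c2, c3, constraints, fvars, val, res},
   vars = Array[x, {n,n}];
   fvars = Flatten[vars];
   c1 = Thread[Apply[Plus, vars, {1}] == 1];
   c2 = Thread[Apply[Plus, Transpose[vars], {1}] == 1];
   c3 = Map[0 <= # <= 1 &, fvars];
   (* fixed is a list of {position, 0 or 1} pairs pinning entries of fvars *)
   constraints = Join[c1, c2, c3, Map[fvars[[First[#]]] == Last[#] &, fixed]];
   {val, res} = NMaximize[{Apply[Plus, Flatten[vars*mat]], constraints}, fvars];
   {val, fvars /. res}
   ]

bbMax[mat_] := Module[
   {best = {-1, $Failed}, search},  (* -1 is a safe sentinel for nonnegative entries *)
   search[fixed_] := Module[{val, sol, k},
     {val, sol} = relaxedMax[mat, fixed];
     If[val <= First[best], Return[]];  (* bound: this subtree cannot beat the incumbent *)
     k = Position[sol, s_ /; Abs[s - Round[s]] > 10^-6, {1}, 1, Heads -> False];
     If[k === {},
       best = {Round[val], Partition[Round[sol], Length[mat]]},  (* integral: new incumbent *)
       search[Append[fixed, {k[[1,1]], 1}]];  (* branch: pin the fractional variable to 1, *)
       search[Append[fixed, {k[[1,1]], 0}]]   (* then to 0 *)
       ]
     ];
   search[{}];
   best
   ]

It returns {maxsum, permutation matrix} in the same shape as maxPermutationSum, and 
it responds to the LinearProgramming options (Method, Tolerance) in the same way, 
since the linear NMaximize calls are handled by LinearProgramming behind the scenes.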

I'll note that the interior point method gives a solution to the 100x100 
case in under 10 seconds. I think that's not bad for a problem with 
10000 variables.
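
The setup for that run is just the pieces above put together, for example (big, m3, 
and p3 are merely names for this test):

SetOptions[LinearProgramming, Method -> InteriorPoint];
big = mat[100, 20];
Timing[{m3, p3} = maxPermutationSum[big];]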

Daniel Lichtblau
Wolfram Research

