MathGroup Archive 2010


Re: How can I solve semidefinite programming problems? Can any one with

On Dec 19, 4:11 am, fmingu <fmi... at> wrote:
> Hello.
>  I am a student learning signal processing. I need my data to be calculated which needs optimization. And I am using Mathematica for programming. But till now I do not find any functions in mathematica for semidefinte programming. The problem is
>  min C'XC
> s.t. B>=0, W>=0 ,t>=0, X'=X
> where C' ,X' are the transpose of C and X, respectively. C,B,W are  matrices where B is a block matrix containing X. t is a number to be calculated. X is the unknown symmetic matrix to be found. I asked Nathan Brixius, the author of SDPMATH package which is used in Mathematica for semidefinite programming in 90's. But the link to SDPMATH package is missing ( And after I wrote a letter to the author, he said the package is lost.
> How can I solve semidefinite programming problems in mathematica? Are there any innard or build-in methods in mathematica? Can any one with kindness help me?
> Thanks a lot.

You might be able to finesse the positive definiteness constraints, at
least for smallish problems. This can be done by defining a function
that operates only on explicitly numeric matrices and returns the
minimal eigenvalue.

In the code and simple example below I do not check that the matrix
inputs are square and symmetric, but I enforce that in the way the
function is called. This might need adjusting for other usages. I also
do not know whether Eigenvalues will always return real values when
given a symmetric input, since it is a numeric algorithm and subject to
numeric error, so I take the real part of the eigenvalues it computes.
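If one does want the input checks just described, they can be folded
into the function's pattern. The sketch below (the name mineigChecked
is mine, not from the original session) rejects anything that is not
an explicitly numeric, square, symmetric matrix, assuming a version of
Mathematica that provides SymmetricMatrixQ:

```mathematica
(* Variant of mineig that also validates its input: the pattern
   accepts only explicitly numeric matrices, and SymmetricMatrixQ
   additionally requires the matrix to be square and symmetric. *)
mineigChecked[mat : ({{_?NumericQ ..} ..}?SymmetricMatrixQ)] :=
 Min[Re[Eigenvalues[mat]]]
```

With this variant a nonsymmetric argument simply leaves the call
unevaluated rather than producing a possibly complex eigenvalue.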

In[1]:= n = 5;
cc = RandomInteger[{0, 10}, n];

In[3]:= xvars = Array[x, {n, n}];
Do[x[i, j] = x[j, i], {i, 2, n}, {j, i - 1}]
fvars = Union[Flatten[xvars]];

In[6]:= mineig[mat : {{_?NumericQ ..} ..}] :=
 With[{eigvals = Eigenvalues[mat]}, Min[Re[eigvals]]]

In[7]:= {min, vals} =
 FindMinimum[{cc.xvars.cc, {mineig[xvars] >= 0}}, fvars]

During evaluation of In[7]:= FindMinimum::eit: The algorithm does not
converge to the tolerance of 4.806217383937354`*^-6 in 500 iterations.
The best estimated solution, with feasibility residual, KKT residual,
or complementary residual of
is returned. >>

Out[7]= {-0.000102810847622932, {x[1, 1] -> 0.4745617099801545,
  x[1, 2] -> -0.2366811680109855, x[1, 3] -> -0.2132627776342026,
  x[1, 4] -> -0.2702226619552426, x[1, 5] -> -0.2453269610921312,
  x[2, 2] -> 0.4759415776673994, x[2, 3] -> -0.03653531695016651,
  x[2, 4] -> 0.103983724598686, x[2, 5] -> -0.01760774267839494,
  x[3, 3] -> 0.4587401126362137, x[3, 4] -> -0.0003398923289131131,
  x[3, 5] -> -0.07268674827647269, x[4, 4] -> 0.7191478545392413,
  x[4, 5] -> 0.004481278729767096, x[5, 5] -> 0.4939968048471287}}

We check by how much the constraint is violated. The minimal
eigenvalue is, one rather suspects, a fuzzy zero.

In[10]:= mineig[xvars /. vals]

Out[10]= -4.990836973695068*10^-7

This is very much a "Your mileage may vary" approach. I would not
expect it to be terribly fast. I would not be surprised if it has
difficulty in enforcing the constraint, for some (perhaps most?)
problems. You might try alterations such as adding an explicit penalty
term in the objective function, perhaps altering it based on
StepMonitor option setting, etc. Below is a very simple form of this
idea, probably not so useful in practice.

FindMinimum[{cc.xvars.cc -
   10^5*mineig[xvars], {mineig[xvars] >= 0}}, fvars]
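One way to see how well a penalized version enforces the constraint is
to monitor the minimal eigenvalue as the search proceeds. This is only
a sketch of the StepMonitor idea mentioned above, assuming the same
cc, xvars, fvars, and mineig definitions as in the session:

```mathematica
(* Print the current minimal eigenvalue at each accepted step, to
   see whether the 10^5 penalty keeps the iterates near the
   positive semidefinite region. *)
FindMinimum[{cc.xvars.cc - 10^5*mineig[xvars],
  {mineig[xvars] >= 0}}, fvars,
 StepMonitor :> Print[mineig[xvars]]]
```

If the printed values hover around a small negative number, the
penalty weight could be raised, at the likely cost of slower or less
stable convergence.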

Daniel Lichtblau
Wolfram Research
