Re: optimization routine using conjugate gradient (or other) method?
- To: mathgroup at smc.vnet.net
- Subject: [mg83754] Re: [mg83642] optimization routine using conjugate gradient (or other) method?
- From: DrMajorBob <drmajorbob at bigfoot.com>
- Date: Thu, 29 Nov 2007 06:40:32 -0500 (EST)
- References: <9889841.1196176137975.JavaMail.root@m35>
- Reply-to: drmajorbob at bigfoot.com
My result seems close to Daniel Lichtblau's, although I've rewritten the
code:
Clear[a, h, ahmodT, sqSum, func]

(* Dan's time-shift function h[a, t], curried as h[t][x]: the piecewise
   expression is simplified once at definition time, with a as the slot
   that gets replaced by the actual argument *)
h[t_][x_List] := h[t] /@ x
h[t_][x_] = (1/33 UnitStep[a, 64.5 - a] Piecewise[{
       {UnitStep[32.25 - a], t < 127.75},
       {0.5 (UnitStep[32.25 - a] + UnitStep[32.25 - (t - 127.75) - a] +
           UnitStep[a - 64.5 + (t - 127.75)]), 127.75 <= t < 160}},
      0.5 (UnitStep[32.25 - a] + UnitStep[a - 32.25])] //
     Rationalize // PiecewiseExpand // FullSimplify) /. a -> x;

(* model value at time t, and the sum of squared residuals against m *)
ahmodT[a_List][t_] := a.h[t]@Mod[t - Range@160, 160]
sqSum[a_List] := #.# &[m - ahmodT[a] /@ time]

(* entropy term plus half the squared residuals *)
func[a_] :=
 Module[{an = Normalize[a, Total]}, 50 an.Log[an] + 1/2 sqSum[a]]
time = {0, 12, 24, 36, 48, 60, 84, 96, 108, 120, 132, 144, 156};
m = {0.149, 0.223, 0.721, 2.366, 2.580, 5.904, 7.613, 11.936, 11.943,
8.477, 6.683, 4.641, 4.238};
a = Array[f, 160];
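As a sanity check that this rewrite matches Dan's original formulation, one can compare the two objectives at a random point (a sketch; hOrig and funcOrig are my names for the definitions quoted below):

```mathematica
(* Dan's original two-argument time-shift function, as quoted below *)
hOrig[aa_, t_] := 1/33 UnitStep[aa] UnitStep[64.5 - aa] Piecewise[{
    {UnitStep[32.25 - aa], t < 127.75},
    {0.5 (UnitStep[32.25 - aa] + UnitStep[32.25 - (t - 127.75) - aa] +
        UnitStep[aa - 64.5 + (t - 127.75)]), 127.75 <= t < 160}},
   0.5 (UnitStep[32.25 - aa] + UnitStep[aa - 32.25])]

(* Dan's original objective, written as explicit sums *)
funcOrig[av_List] := With[{atot = Total[av]},
  50 Sum[(av[[j]]/atot) Log[av[[j]]/atot], {j, 160}] +
   1/2 Sum[(m[[i]] -
        Sum[av[[j]] hOrig[Mod[time[[i]] - j, 160], time[[i]]], {j, 160}])^2,
     {i, Length[time]}]]

test = RandomReal[{.001, 16}, 160];
func[test] - funcOrig[test] (* should be ~0, up to rounding *)
```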
The objective function is VERY complicated:
func[a] // LeafCount
107403
But NMinimize finishes in under 81 seconds:
constraints = Thread[a > .0001];
Timing[{obj1, rule1} = NMinimize[{func[a], constraints}, a];]
obj1
a1 = a /. rule1;
{lo, hi} = Through[{Min, Max}@a1]
{80.89, Null}
-237.938
{0.811749, 15.0187}
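Eyeballing the fit at the optimum against the data (a quick sketch):

```mathematica
(* model values at the sampled times, side by side with the data m *)
fit = ahmodT[a1] /@ time;
TableForm[Transpose[{time, m, fit}],
 TableHeadings -> {None, {"t", "m", "model"}}]
```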
A random search doesn't find anything better (or even nearly as good):
randomStart := RandomReal[{.001, 16}, Length@a]
Timing[Table[func[randomStart], {5000}] // Min]
{142.891, -158.832}
But that's a tiny sample considering the dimensions!!
I also tried FindMinimum, but it seemed to take FOREVER, so I aborted.
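If FindMinimum is worth another shot, warm-starting it from the NMinimize result and naming a gradient method explicitly might help (an untested sketch; it may still be slow on an objective this large):

```mathematica
(* polish the NMinimize answer a1 with a conjugate-gradient local search *)
FindMinimum[func[a], Transpose[{a, a1}], Method -> "ConjugateGradient"]
```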
Bobby
On Tue, 27 Nov 2007 05:08:26 -0600, dantimatter <dantimatter at gmail.com> wrote:
> Hello all,
>
> I have a system that I'd like to optimize in 160 dimensions. I have
> all the equations I need to set up the problem, but I have no idea how
> to use Mathematica to actually do the optimization. You've all been
> so helpful in the past, I'm hoping maybe some of you have suggestions
> as to how I should proceed. My problem is as follows:
>
> I have an array of 160 numbers a = Array[f, 160]. I don't know any of
> the a[[j]] but I'd like to determine them through the optimization
> process. The sum of all elements in the array is atot = Sum[a[[j]],
> {j, 160}].
>
> I also have two fixed arrays, time and m, where time = {0, 12, 24, 36,
> 48, 60, 84, 96, 108, 120, 132, 144, 156} and m = {0.149, 0.223, 0.721,
> 2.366, 2.580, 5.904, 7.613, 11.936, 11.943, 8.477, 6.683, 4.641,
> 4.238}.
>
> Lastly, I have a somewhat-complicated "time shift function" h[a_,
> t_] :=
> 1/33 UnitStep[a] UnitStep[
> 64.5 - a] Piecewise[{{UnitStep[32.25 - a] ,
> t < 127.75}, {0.5 (UnitStep[32.25 - a] +
> UnitStep[32.25 - (t - 127.75) - a] +
> UnitStep[a - 64.5 + (t - 127.75)]), 127.75 <= t < 160}},
> 0.5 (UnitStep[32.25 - a] + UnitStep[a - 32.25])]
>
> Now with all this stuff set up, the function I'd like to minimize is
>
> func = 50 Sum[(a[[j]]/atot) Log[a[[j]]/atot], {j, 160}] + 1/2
> Sum[(m[[i]] -
> Sum[a[[j]] h[Mod[time[[i]] - j, 160], time[[i]]], {j, 160}])^2,
> {i, Length[time]}]
>
> The problem is complicated by the fact that the conjugate gradient
> optimization routine may go through a point where the a[[j]] have
> negative values, which makes it impossible to compute Log[a[[j]]].
> So, an alternative formula is used instead of (a[[j]]/atot)
> Log[a[[j]]/atot]:
>
> 1/atotp (eps Log[eps/atotp] - (eps - a[[j]]) (1 + Log[eps/atotp]) +
> (eps - a[[j]])^2/(2 eps))
>
> in this case, 'eps' is a predefined small positive number and 'atotp'
> is the sum of a[[j]] such that a[[j]] is equal to or greater than
> 'eps'.
>
> So, does anyone have any suggestions as to how I could code this in
> Mathematica? Is it something that's even possible? Please let me
> know if you have any ideas.
>
> Many thanks,
> Dan
>
>
--
DrMajorBob at bigfoot.com