MathGroup Archive 2004

Re: Re: FindMinimum and the minimum-radius circle


That frequently works, and it's almost as fast as the Fred Simons method.

But it fails on this data:

{{-0.694112, -1.47331}, {-2.26344, 0.220021}, {-0.8867, -1.85288},
 {0.527656, -2.81915}, {-1.75879, -0.28898}, {0.728263, 1.20724},
 {-1.34788, -0.799066}, {-0.676629, -1.03418}, {1.45111, 0.961224},
 {0.914209, -0.539339}}

and this data:

{{1.60034, 0.0176019}, {-0.355331, -1.62872}, {-0.2654, -0.000635396},
 {-0.777536, 0.890349}}

In both cases the resulting circle is too large: it passes through only two of the points when it should pass through three.
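
A quick way to see this (onCircle is just an ad hoc helper name, and the tolerance is arbitrary): count the points lying on the returned circle.

onCircle[data_, {x0_, y0_}, r_] :=
  Count[data, p_ /; Chop[Norm[p - {x0, y0}] - r, 10^-6] == 0]

A correct minimum-radius circle that isn't determined by a diametrically opposite pair should give at least 3; these results give only 2.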

I'm using Mean@data as the starting point (as you did), but I'm using the ConvexHull both to define goal[x_, y_] and as the third argument of grad.
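
In other words, roughly this (a sketch, with data bound to one of the point lists above and grad as defined in your message below):

Needs["DiscreteMath`ComputationalGeometry`"]
hull = data[[ConvexHull[data]]];
goal[x_, y_] = Max[Norm[{x, y} - #] & /@ hull];
FindMinimum[goal[x, y],
  {x, Mean[data][[1]]}, {y, Mean[data][[2]]},
  Method -> Gradient, Gradient :> grad[x, y, hull]]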

Bobby

On Thu, 26 Aug 2004 06:51:34 -0400 (EDT), Maxim <ab_def at prontomail.com> wrote:

> Daniel Lichtblau <danl at wolfram.com> wrote in message news:<cg97e9$a5p$1 at smc.vnet.net>...
>> Janos D. Pinter wrote:
>> > Hello mathgroup,
>> >
>> > as stated in previous messages, the minimum-radius circle problem is
>> > convex, but constrained: hence, an unconstrained optimization approach may
>> > or may not work...
>>
>> But it can certainly be made to work with an unconstrained formulation.
>> This takes some nondefault option settings to work around problems that
>> arise because such a formulation is not everywhere smooth.
>>
>>
>> > Using the convex hull as a constraint set will work, of
>> > course. NMinimize and other constrained optimizers should then also work,
>> > even in local search mode if they have that option.
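>> >
>> > For instance (a sketch, with points as defined below), one could keep
>> > only the constraints coming from the hull vertices:
>> >
>> > Needs["DiscreteMath`ComputationalGeometry`"]
>> > hull = points[[ConvexHull[points]]];
>> > hullconstraints =
>> >   (#[[1]] - x0)^2 + (#[[2]] - y0)^2 <= r^2 & /@ hull;
>> > NMinimize[{r, hullconstraints}, {x0, y0, {r, 0, 10}}]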
>> >
>> > Instead of further philosophical discussions on the subject, why don't we
>> > solve a standardized test model (or models), using the same numerical
>> > example? (I assume here that SeedRandom is platform and version
>> > independent... as it should be.)
>> >
>> > Here is a simple model-instance, for your perusal and experiments.
>> >
>> > SeedRandom[1];
>> > points = Partition[Table[Random[], {1000}], 2];
>> > constraints = (#[[1]] - x0)^2 + (#[[2]] - y0)^2 <= r^2 & /@ points;
>> > soln = NMinimize[{r, constraints}, {x0, y0, {r, 0, 10}}]
>> > {0.6628150350002375, {r -> 0.6628150350002375, x0 -> 0.48684175822621106,
>> > y0 -> 0.46146943870460916}}
>> >
>> > timedsoln = NMinimize[{r, constraints}, {x0, y0, {r, 0, 10}}] // AbsoluteTiming
>> > {30.694136`8.938600406559628*Second, {0.6628150350002375,
>> >   {r -> 0.6628150350002375, x0 -> 0.48684175822621106, y0 ->
>> > 0.46146943870460916}}}
>> >
>> > That is, on a P4 1.6 GHz WinXP machine, it takes about 30 seconds to
>> > generate this solution.
>> >
>> > Next, I solve the same model using the MathOptimizer Professional package,
>> > in local search mode:
>> >
>> > Needs["MathOptimizerPro`callLGO`"]
>> > callLGO[r, constraints, {{x0, 0, 1}, {y0, 0, 1}, {r, 0, 1}}, Method -> LS]
>> > // AbsoluteTiming
>> > {6.158856`8.24104504348061*Second,
>> >   {0.662815036, {x0 -> 0.4868417589, y0 -> 0.4614694394, r -> 0.662815036},
>> >    4.770200900949817*^-11}}
>> >
>> > The two solutions are fairly close (the solution methods are different).
>> > The MathOptimizer Professional (LGO) solver time is ~6 seconds. The result
>> > also shows that the maximum constraint error of this solution is
>> > ~4.770200900949817*^-11. (This error could be reduced, if necessary, by
>> > changing the model and some LGO options.)
>> >
>> > Obviously, one could also use more sophisticated models, other point
>> > sets, etc., as long as we all use the same ones, for objective
>> > comparisons. (The proof of the pudding principle.)
>> >
>> > Regards,
>> > Janos
>> > [...]
>>
>> What about the unconstrained formulation?
>>
>> In[47]:=
>> Timing[NMinimize[
>>   Sqrt[Max[Map[(x - #[[1]])^2 + (y - #[[2]])^2 &, points]]], {x, y}]]
>>
>> Out[47]=
>> {3.54 Second, {0.662815, {x -> 0.486831, y -> 0.461458}}}
>>
>> I note that this can be made faster still by preprocessing to extract
>> the convex hull of the data points. Also, one can attain a faster result,
>> with only a slight loss of accuracy, using
>>
>> FindMinimum[...,Gradient->{"FiniteDifference","DifferenceOrder" -> 2}]
>>
>> You can do this with arbitrary reasonable starting points, but clearly
>> the mean or median values would be good choices.
>>
>> In[52]:=
>> Timing[FindMinimum[
>>     Sqrt[Max[Map[(x - #[[1]])^2 + (y - #[[2]])^2 &, points]]],
>>     {x, #[[1]]}, {y, #[[2]]},
>>     Gradient -> {"FiniteDifference", "DifferenceOrder" -> 2}] &[
>>   Mean[points]]]
>>
>> Out[52]=
>> {1.34 Second,{0.663207,{x->0.487394,y->0.461453}}}
>>
>> Experimentation reveals that by taking DifferenceOrder->4 I can recover
>> the NMinimize/MOP result for this particular example, albeit at some
>> loss of speed.
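>>
>> That is, something like:
>>
>> Timing[FindMinimum[
>>     Sqrt[Max[Map[(x - #[[1]])^2 + (y - #[[2]])^2 &, points]]],
>>     {x, #[[1]]}, {y, #[[2]]},
>>     Gradient -> {"FiniteDifference", "DifferenceOrder" -> 4}] &[
>>   Mean[points]]]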
>>
>> Again, this will perform much better if you work with the convex hull.
>> Here is how fast this becomes.
>>
>> Needs["DiscreteMath`ComputationalGeometry`"]
>>
>> In[69]:=
>> Timing[With[{hull = points[[ConvexHull[points]]]},
>>   FindMinimum[
>>       Sqrt[Max[Map[(x - #[[1]])^2 + (y - #[[2]])^2 &, hull]]],
>>       {x, #[[1]]}, {y, #[[2]]},
>>       Gradient -> {"FiniteDifference", "DifferenceOrder" -> 2}] &[
>>     Mean[hull]]]]
>>
>> Out[69]=
>> {0.04 Second,{0.66286,{x->0.486905,y->0.461467}}}
>>
>>
>> My thanks to Bobby Treat, Rob Knapp, and Tom Burton for bringing to my
>> attention some of the finer points regarding speed and avoidance of
>> premature convergence by FindMinimum.
>>
>>
>> Daniel Lichtblau
>> Wolfram Research
>
>
> Another method would be to construct the gradient explicitly; I used a
> simple heuristic formula
>
> D[Max[f1, ..., fn], x] /. x -> x0  ==  D[f1, x] /. x -> x0
>   when f1 > f2, ..., f1 > fn at x0;
>
> D[Max[f1, ..., fn], x] /. x -> x0  ==  D[(f1 + f2)/2, x] /. x -> x0
>   when f1 == f2 and f2 > f3, ..., f2 > fn at x0.
>
> Here's what I get for the same set of points:
>
> In[1]:=
> SeedRandom[1];
> points = Partition[Table[Random[], {1000}], 2];
>
> goal[x_, y_] = Max[Norm[{x, y} - #]& /@ points];
>
> grad = Compile[
>   {x, y, {points, _Real, 2}},
>   Module[
>     {n = Length@points, Ld = Norm[{x, y} - #]& /@ points,
>      Ldd, i1, i2},
>     (* i1, i2: positions of the largest and second-largest distances *)
>     {i2, i1} = Ordering[Ld, -2];
>     (* gradients of the two largest distance functions *)
>     Ldd = ({x, y} - points[[#]])/Ld[[#]]& /@ {i1, i2};
>     (* average the two gradients when the top distances tie *)
>     If[Chop[Ld[[i1]] - Ld[[i2]]] != 0,
>       Ldd[[1]],
>       (Ldd[[1]] + Ldd[[2]])/2
>   ]],
>   {{_Ordering, _Integer, 1}}  (* Ordering returns an integer vector *)
> ];
>
> FindMinimum[goal[x, y],
>   {x, Mean[points][[1]]}, {y, Mean[points][[2]]},
>   Method -> Gradient, Gradient :> grad[x, y, points]]
>
> Out[5]=
> {0.662817, {x -> 0.486845, y -> 0.461469}}
>
> This seems to work quite well for any starting point, but it might fail
> if there are triple points where f1[x0] == f2[x0] == f3[x0].
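>
> One possible guard (an untested sketch; gradTies is a made-up name) would
> be to average the gradients of all the distances that tie for the maximum,
> instead of just the top two:
>
> gradTies[x_?NumberQ, y_?NumberQ, pts_] :=
>   Module[{d = Norm[{x, y} - #] & /@ pts, sel},
>     (* positions of all points whose distance ties for the maximum *)
>     sel = Flatten[Position[Chop[Max[d] - d], 0]];
>     (* average of the gradients of the tied distance functions *)
>     Mean[({x, y} - pts[[#]])/d[[#]] & /@ sel]]
>
> It would be used the same way, via Gradient :> gradTies[x, y, points].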
>
> The effect of DifferenceOrder is probably version specific; I tried
> all combinations with different methods, like
>
> FindMinimum[..., Method -> Gradient,
>   Gradient -> {"FiniteDifference", "DifferenceOrder" -> 4}],
>
> but changing DifferenceOrder didn't seem to have any effect.
>
> Maxim Rytin
> m.r at inbox.ru
>



-- 
DrBob at bigfoot.com
www.eclecticdreams.net

