       Re: Maximum Likelihood Problem

• To: mathgroup at smc.vnet.net
• Subject: [mg46456] Re: Maximum Likelihood Problem
• From: drbob at bigfoot.com (Bobby R. Treat)
• Date: Thu, 19 Feb 2004 03:02:09 -0500 (EST)
• References: <c0t2ii$2t9$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com

```I'm not familiar with some of your terminology, so I did this my own
way. Maybe it will help you, even if I messed up somewhere.

First of all, I wrote your model as you seem to really be using it:

Clear[s, t, m, normal, u, dist]
dist = NormalDistribution[0, 1];
u[x_] := x(1 - b t) + s(t^(1/2))*normal
u[x]

The important structure here (for known parameters) is a constant
times x plus a constant times the normal variate. Introducing ss and
bb to name those constants: if I could estimate them, I could invert
the transformation to recover s and b:

invert = First[Solve[{ss == s*Sqrt[t], bb == 1 - b*t}, {s, b}]]
{s -> ss/Sqrt[t], b -> (1 - bb)/t}
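
(* Added check (not in the original post): substituting the
definitions of ss and bb back into the inverted rules recovers
s and b exactly, so the reparametrization loses nothing. *)
{s, b} /. invert /. {ss -> s*Sqrt[t], bb -> 1 - b*t}
(* {s, b} *)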

So I'll rewrite the model as

Clear[v]
v[x_]:=x bb+ss*normal

That gives the next data point v[x] given the current data point x.
Here's a set of generated data:

data = Block[{t = 1/12, b = 10, s = 1, bb, ss,
normal := Quantile[dist, Random[]]},
bb = 1 - b*t; ss = s*Sqrt[t];
NestList[v, 0.2, 100]];
ListPlot[data];
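
(* Added sanity check: with the values used above, the constants are
bb = 1 - 10/12 = 1/6 and ss = Sqrt[1/12] = 1/(2*Sqrt[3]) ~ 0.289,
so the estimates below should land near these numbers. *)
Block[{t = 1/12, b = 10, s = 1}, {1 - b*t, N[s*Sqrt[t]]}]
(* {1/6, 0.288675} *)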

Being a little rusty, I derive the conditional density:

Pr[(v - bb*x)/ss <= z] == CDF[dist, z]
Pr[v - bb*x <= z] == CDF[dist, z/ss]
Pr[v <= z] == CDF[dist, (z - bb*x)/ss]
F[v] == CDF[dist, (v - bb*x)/ss]
D[#, v] & /@ %

logLikelihood[bb_, ss_][x_, v_] =
PowerExpand[Log[PDF[dist, (v - bb*x)/ss]/ss]]

-Log[ss] - Log[2*Pi]/2 - (v - bb*x)^2/(2*ss^2)

Summing the log-density over consecutive pairs of data points looks
like this:

logLikelihood[bb_, ss_][data_List] :=
Plus @@ logLikelihood[bb, ss] @@@ Partition[data, 2, 1]
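
(* Partition[data, 2, 1] produces the overlapping {current, next}
pairs, so each consecutive pair of observations contributes one
conditional log-density. For example: *)
Partition[{a, b, c, d}, 2, 1]
(* {{a, b}, {b, c}, {c, d}} *)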

Finally, solving for the maximum likelihood goes like this:

logLikelihood[bb, ss][data] // Simplify
D[%, #] == 0 & /@ {bb, ss}
Solve[%]
{s, b} /. invert
% /. %%
% /. t -> 1/12
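
(* Added aside (standard Gaussian MLE algebra, not from the original
post): the stationarity conditions can also be solved by hand. The
estimate of bb is the least-squares slope through the origin, and ss
is the root-mean-square residual. Assuming data and invert as defined
above: *)
pairs = Partition[data, 2, 1];
bbHat = Total[Times @@@ pairs]/Total[First[#]^2 & /@ pairs];
ssHat = Sqrt[(Plus @@ ((Last[#] - bbHat*First[#])^2 & /@ pairs))/
    Length[pairs]];
{s, b} /. invert /. {bb -> bbHat, ss -> ssHat} /. t -> 1/12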

Hope that helps.

Bobby

sabrinacasagrande at hotmail.com (sabbri) wrote in message news:<c0t2ii$2t9$1 at smc.vnet.net>...
> I need to build an algorithm to estimate the parameters of an
> equation. There is probably a mistake in my code that prevents me
> from getting the right results.
> DATA (Sample Path generating): I generated the data using an
> approximation of the differential equation dXt = - B Xt dt + s dWt:
>
> u[x_]:= x + m[x]*t + s[x]*
> ((t)^(1/2))*(Quantile[NormalDistribution[0,1],Random[]])
>
> where I set m[x_] = -10 x, s[x_] = 1, and t = 1/12. Then I generate
> the data with Table[NestList[u, 0.2, 100], {1}]. So I have my data
> with the true parameter values, B = 10 and s = 1.
> ALGORITHM: I use an approximation of the transition probability
> density function, constructed with the saddle-point method
> (Laplace's method).
>
> sigx[x_] := s
>
> mx[x_] := -B x
>
> g[x_] = Integrate[(1/sigx[u]), {u, 0, x}]
>
> sigy[x_] = 1
>
> my[x_] = (mx[x]/sigx[x]) - (D[sigx[x], x]/2)
>
> fi[z_] = (1/Sqrt[2 Pi]) (Exp[-(z^2)/2])
>
> lamy[x_] = (-1/2) (((my[x])^2) + D[my[x], x])
>
> c[y_, x_, j_] := c[y, x, j] =
>   j*(y - x)^(-j)*
>     Integrate[(w - x)^(j - 1)*
>       (lamy[w]*c[w, x, j - 1] + D[c[w, x, j - 1], {w, 2}]/2),
>      {w, x, y}]
>
> c[y_, x_, 0] = 1
>
> py[t_, y_, x_] =
>   (1/Sqrt[t])*fi[(y - x)/Sqrt[t]]*
>     Exp[Integrate[my[w], {w, x, y}]]*
>     Sum[c[y, x, k]*t^k/k!, {k, 0, 1}]
>
> px[t_, y_, x_] = py[t, g[y], g[x]]/sigx[g[x]]
>
> And finally I get a function (the likelihood function)
>
> L[N_] = Sum[Log[px[t, x[n + 1], x[n]]], {n, 1, N}]/N
>
> where x[n] are the values I generated above, i.e.
> x[n_] := Extract[data, {1, n}].
> Now I try to maximize it.
> I tried two methods:
> 1) the function Maximize:
> Maximize[{L, s > 0}, {B, s}]
>
> 2) Solve[] on the derivatives == 0 of my target function:
> Solve[{D[L, B] == 0, D[L, s] == 0}, {B, s}]
>
> In neither case can I get the results. (Note that I generated the
> data by assigning values to the parameters, so the data were
> generated with the right values: I should recover those values when
> maximizing!)
> Can you help me? Where am I going wrong?

```
