Re: Weights in NonlinearRegress / NonlinearFit. Versus data errors
- To: mathgroup at smc.vnet.net
- Subject: [mg85342] Re: Weights in NonlinearRegress / NonlinearFit. Versus data errors
- From: dh <dh at metrohm.ch>
- Date: Wed, 6 Feb 2008 06:37:51 -0500 (EST)
- References: <foc2nh$4o9$1@smc.vnet.net>
Hi Kris,
if the errors are normally distributed and sigma[i] is the standard
deviation of the i-th point, then minimizing the sum
Sum[ (e[i]/sigma[i])^2, {i} ]
gives a maximum-likelihood estimate of your fit parameters. This
corresponds to weights w[i] = 1/sigma[i]^2, so your colleague seems
to be right.
hope this helps, Daniel
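A quick numerical sketch of this point (not from the original post; it uses Python/scipy rather than Mathematica, and the exponential model, data, and noise levels are made up for illustration): `scipy.optimize.curve_fit` with `sigma=dy` minimizes exactly Sum[((y_i - f(x_i))/dy_i)^2], i.e. it applies weights w_i = 1/dy_i^2 internally, matching the maximum-likelihood argument above.

```python
# Illustration (hypothetical model and data): weighting by 1/dy^2
# is what minimizes the dimensionless chi^2 = Sum[(e_i/sigma_i)^2].
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def model(x, a, b):
    return a * np.exp(-b * x)        # hypothetical fit function y(x)

x = np.linspace(0.0, 4.0, 40)
dy = 0.05 + 0.1 * x                  # per-point errors dy_i (heteroscedastic)
y = model(x, 2.0, 0.7) + rng.normal(0.0, dy)

# sigma=dy makes curve_fit minimize Sum[((y_i - f(x_i))/dy_i)^2],
# i.e. weighted least squares with w_i = 1/dy_i^2.
popt, pcov = curve_fit(model, x, y, sigma=dy, absolute_sigma=True, p0=[1.0, 1.0])

resid = y - model(x, *popt)
chi2 = np.sum((resid / dy) ** 2)     # dimensionless chi^2 of the fit
print(popt, chi2)
```

Note that with w_i = 1/dy_i each term w_i*e_i^2 would carry units of y, whereas with w_i = 1/dy_i^2 each term is dimensionless, which is the colleague's argument.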
Kristof Lebecki wrote:
> Dear all,
>
>
>
> Introduction: it happens that we physicists have to fit a given function
> y(x) to a set of experimentally derived points {x_i,y_i}, i=1..n. I use
> NonlinearRegress for that.
>
>
>
> It happens, however, that the experimental points are known with different
> accuracy. (We usually call such an inaccuracy an error.) Assume that we have
> a vector of triples {x_i,y_i,dy_i}, i=1..n, where dy_i describes the
> "y-error" of each point we want to fit.
>
>
>
> My first idea was to use Weights for that. Weights defined as:
>
> {w_i}= {1/dy_i} (the smaller the error the higher the weight).
>
> But my experienced colleague pointed out that what is actually minimized
> here is chi^2 = \sum_i w_i*(e_i)^2. He prefers that sum to be
> dimensionless, so he argues that the weights should be defined as:
>
> {w_i}= {1/(dy_i)^2}
>
>
>
> And what is your opinion?
>
>
>
>
> Regards, Kris
>
>