Re: NonlinearRegress and errors on parameter fit
- To: mathgroup at smc.vnet.net
- Subject: [mg78330] Re: NonlinearRegress and errors on parameter fit
- From: Bill Rowe <readnewsciv at sbcglobal.net>
- Date: Thu, 28 Jun 2007 04:31:56 -0400 (EDT)
On 6/27/07 at 5:41 AM, alan.zablocki at gmail.com wrote:
>Could someone confirm whether EstimatedVariance is an error on the
>value fitted to a parameter using NonlinearRegress?
No, the estimated variance is not an error bound on the
estimated parameters. When you do a regression analysis you
assume the data are of the form model + error. The estimated
variance is the estimate of the variance of that error term,
that is, the part left over after you subtract out the model being fitted.
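To make that concrete, here is a minimal sketch of how that estimate
is formed, using the built-in FindFit and the three data points from
your example below: sum the squared residuals and divide by n - p,
the number of points minus the number of fitted parameters.

(* least-squares fit of the straight line to the quoted data *)
data = {{0, -1}, {2, 0}, {4, 1}};
fit = FindFit[data, a x + b, {a, b}, x]     (* {a -> 0.5, b -> -1.} *)
(* residuals = observed values minus fitted values *)
residuals = data[[All, 2]] - ((a x + b /. fit) /. x -> data[[All, 1]]);
(* estimated variance = residual sum of squares / (n - p) *)
Total[residuals^2]/(Length[data] - 2)       (* essentially 0 *)

With three points that lie exactly on a straight line and two fitted
parameters, the residuals are zero to machine precision, so the
1.35585*10^-31 you quote is just numerical round-off, not a meaningful
error estimate for anything.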
>Example:
>In[20]:= << NonlinearRegression`
>In[26]:= data = {{0, -1}, {2, 0}, {4, 1}}
>Out[26]= {{0, -1}, {2, 0}, {4, 1}}
>In[27]:= NonlinearRegress[data, a x + b, {a, b}, x]
>Out[27]= {BestFitParameters -> {a -> 0.5, b -> -1.},
>EstimatedVariance -> 1.35585*10^-31
The model you are fitting in the example above is linear in both
parameters a and b. When the model is linear in its parameters, it is
better to use linear regression rather than nonlinear regression: the
least-squares solution is found exactly rather than by iterative search,
and the usual parameter standard errors are exact rather than asymptotic.
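For this particular fit, then, something like Regress from the
LinearRegression` standard package (I'm going from memory on the
package name, so check the documentation) would be the natural tool;
its ParameterTable reports a separate standard error for each parameter:

Needs["LinearRegression`"]
data = {{0, -1}, {2, 0}, {4, 1}};
(* fit the basis functions 1 and x, i.e. the line b + a x *)
Regress[data, {1, x}, x,
  RegressionReport -> {ParameterTable, EstimatedVariance}]

Again, because your three points lie exactly on a line, the standard
errors here will be essentially zero just like the estimated variance;
with real, noisy data they won't be.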
>I have shown all the working and results. Lastly why only one error
>on both a and b?
>If this is not the error on a and b, how can I obtain it?
There is only one EstimatedVariance because it describes the scatter of
the residuals about the fitted model, not the uncertainty of any one
parameter; each parameter gets its own standard error and confidence
interval. Confidence bounds for the parameters are usually computed
using Student's t statistics and the estimated variance. I don't recall
the exact formula and I am not at my desk where I could easily look it
up. My suggestion, though, would be to get a good text on regression
analysis, which will cover this and many other issues of importance
when doing this kind of analysis.
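That said, if memory serves NonlinearRegress can report these directly
through its RegressionReport option: ParameterCITable gives, for each
parameter, the estimate, its asymptotic standard error, and a confidence
interval built from a Student's t quantile times that standard error.
A sketch (treat the report-item names as from memory):

Needs["NonlinearRegression`"]
data = {{0, -1}, {2, 0}, {4, 1}};
(* ask for per-parameter errors and the asymptotic covariance matrix *)
NonlinearRegress[data, a x + b, {a, b}, x,
  RegressionReport -> {BestFitParameters, ParameterCITable,
    AsymptoticCovarianceMatrix}]

The square roots of the diagonal entries of AsymptoticCovarianceMatrix
are the standard errors of a and b; for this particular three-point
example they will be essentially zero, because the fit is exact.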
--
To reply via email subtract one hundred and four