Re: Why this function does not return a single value
- To: mathgroup at smc.vnet.net
- Subject: [mg60484] Re: Why this function does not return a single value
- From: Marek Bromberek <marekbr at gmail.com>
- Date: Sat, 17 Sep 2005 02:32:27 -0400 (EDT)
- References: <dgbf0u$fot$1@smc.vnet.net>
- Sender: owner-wri-mathgroup at wolfram.com
Hi Bill,

Thanks for your reply and for being so patient. I have no problems fitting a set of 15 Gaussian or Lorentzian peaks together with the "hump" and the exponential wing on the low 2-theta side. That fit converges rapidly after I give it a good set of initial values, even though it involves 50 adjustable parameters. I was very pleased with how Mathematica handled that problem. I define the sum of 16 Gaussian peaks

GaussSumNew[x_] := Sum[a[i]*Exp[(-Log[2.])*((x - b[i])^2/c[i]^2)], {i, 1, 16}]

and then I do

Peaks = NonlinearFit[MyData, GaussSumNew[x] + a[17]*Exp[-x/b[17]], {x},
  {{a[1], 2429.}, {a[2], 609.}, {a[3], 459.}, {a[4], 1009.}, {a[5], 2695.},
   {a[6], 7271.}, {a[7], 1432.}, {a[8], 1308.}, {a[9], 2998.}, {a[10], 3775.},
   {a[11], 3775.}, {a[12], 1001.}, {a[13], 325.}, {a[14], 563.}, {a[15], 1119.},
   {a[16], 5317.}, {a[17], 12359.}, {b[1], 5.45}, {b[2], 9.81}, {b[3], 11.08},
   {b[4], 13.89}, {b[5], 14.87}, {b[6], 16.92}, {b[7], 18.1}, {b[8], 19.51},
   {b[9], 22.18}, {b[10], 22.8}, {b[11], 23.76}, {b[12], 25.89}, {b[13], 29.77},
   {b[14], 30.93}, {b[15], 34.01}, {b[16], 28.35}, {b[17], 1.67}, {c[1], 0.5},
   {c[2], 0.5}, {c[3], 0.5}, {c[4], 0.5}, {c[5], 0.5}, {c[6], 0.5}, {c[7], 0.5},
   {c[8], 0.5}, {c[9], 0.5}, {c[10], 0.5}, {c[11], 0.5}, {c[12], 0.5},
   {c[13], 0.5}, {c[14], 0.5}, {c[15], 0.5}, {c[16], 15.}}]

In the above fit I add the exponential background, and MyData is a list of {x, y} pairs imported from my data file. Analysis of FitResiduals revealed that some information was not accounted for by this model: the autocorrelation function of the residuals is not just noise. There is structure to it, and I think that neither a Gaussian nor a Lorentzian represents the shape of my peaks well. Therefore I decided to try the Voigt function and ran into some difficulties. The same approach as presented above does not work with the Voigt function, because its definition involves NIntegrate.
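[A sketch of the usual workaround for the NIntegrate difficulty. When NonlinearFit evaluates the model symbolically, NIntegrate is handed symbolic parameters and cannot return a single numeric value. Restricting the definition to numeric arguments with _?NumericQ keeps the integral unevaluated until the fitter supplies actual numbers. The function name and the integrand below are only illustrative; the integrand should be replaced by whatever Gaussian-Lorentzian convolution your own Voigt definition uses.]

```mathematica
(* Only evaluate for numeric arguments, so NIntegrate never
   sees symbolic fit parameters during NonlinearFit's setup. *)
Voigt[x_?NumericQ, a_?NumericQ, b_?NumericQ, dL_?NumericQ, dG_?NumericQ] :=
  a*NIntegrate[
     Exp[-Log[2.]*t^2/dG^2]*(dL/Pi)/((x - b - t)^2 + dL^2),
     {t, -Infinity, Infinity}]

(* The model expression then stays as an unevaluated sum of
   black-box numeric functions until numbers are substituted: *)
model = Sum[Voigt[x, a[i], b[i], dL[i], dG[i]], {i, 1, 16}]
```

With this pattern, Plot[Voigt[x, 1., 10., 0.5, 0.5], {x, 5, 15}] and the NonlinearFit call both work the same way: each call reaches NIntegrate only with explicit numbers.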
Now, in my last post I said that I adjusted the parameters so that the resulting plot gives you some idea of what my data looks like. The representation is far from perfect. I found those initial values manually, by first reading the peak positions from the graph and then adjusting the intensity of each and every peak until it came close to the experimental values when I plotted the data points together with the function. I do not intend to do that for every data file I have to analyze this way; I just want to know how to do this. I am fully aware that NonlinearFit with that many parameters is non-trivial. However, I am willing to try it, and I think that with a good set of initial parameters I will be successful.

Best regards,
Marek

Bill Rowe wrote:

> On 9/14/05 at 3:27 AM, marekbr at gmail.com (Marek Bromberek) wrote:
>
>> If you run the code from my previous post you will see what my data
>> looks like, because I adjusted the parameters in the lists a, b,
>> \[Delta]L and \[Delta]G so that it represents my data quite well.
>> However, I want to use NonlinearFit (or FindFit) to find the values
>> of those parameters which best represent my data.
>
> We are clearly not communicating effectively. I did run your code and
> was able to see how the function plots. But this gives me essentially
> no information as to what to suggest.
>
> When I asked about your data, I was not interested in a plot of your
> data but in the structure of the data files. For example, if I were
> trying to correlate temperature measurements made in Celsius to
> temperature measurements in Fahrenheit, I might have a file that
> looked like
>
> 50, 122.5
> 60, 140.1
> 70, 158
> 80, 176.2
> 90, 194.3
>
> Now, since I know there is a linear relationship between Celsius and
> Fahrenheit, I would use FindFit as follows:
>
> FindFit[data, a*T + b, {a, b}, T]
>
> to get
>
> {a -> 1.797, b -> 32.43}
>
> Here data is an array that contains values of the independent
> variable as well as the response.
> And in this particular example, data is set equal to:
>
> {{50, 122.5}, {60, 140.1}, {70, 158}, {80, 176.2}, {90, 194.3}}
>
> A non-linear problem or a multi-dimensional problem is handled
> identically. That is, I have a data matrix where every row contains
> the values of the independent variables followed by the response, an
> expression that says how the response is obtained from the
> independent variables, a list of the symbols used as independent
> variables and, finally, the symbol used as the response variable.
>
> There is no limit to how complex I choose to make the expression
> relating independent variables to response, save memory, my patience
> and the number of distinct data points in the data set.
>
> For example, if I had a suitable number of data points, I could have
> chosen as my expression relating Celsius to Fahrenheit, say,
>
> a*T + T^b + c Log[T] + d
>
> Obviously, this will not result in a meaningful fit. And in this
> particular case, FindFit may not be able to achieve convergence.
>
> In your case, the expression showing the relationship between a
> single independent variable and the response could be written as
>
> Voigt[x,a1,b1,dL1,dG1] + Voigt[x,a2,b2,dL2,dG2] + ...
>
> and the corresponding parameter list would be
>
> {a1,b1,dL1,dG1,a2,b2,dL2,dG2,...}
>
> Here I am assuming a data matrix with two columns, the first
> representing values of a single independent variable and the second
> column the response. FindFit can be used with more than one
> independent variable without difficulty.
>
> However, you described a desire to fit a sum of 15 non-linear
> functions to the data, and it is beginning to sound like for each of
> these you need to optimize possibly 4 independent parameters. If I
> have this correct, then you are likely to be quite frustrated with
> either FindFit or NonlinearFit.
> Fitting non-linear models to data is an inherently difficult problem,
> and the difficulty increases rapidly with the number of non-linear
> parameters that need to be found. With a large number of parameters
> to be found, it will be very difficult to ensure the algorithm used
> doesn't get trapped in a local minimum, giving a sub-optimal
> solution. Of course, this issue can be largely mitigated by giving
> FindFit a good set of starting points.
>
> One final comment. You say above that you adjusted the parameters to
> give a good fit to your data. If so, why bother with either FindFit
> or NonlinearFit at all? What FindFit will do is provide you with
> another estimate of the values of the parameters you've effectively
> already estimated by some other means. The values provided by FindFit
> or NonlinearFit are likely to be different from the values you've
> already obtained and *might* be considered "better" under some very
> specific conditions which often aren't met by real data.
> --
> To reply via email subtract one hundred and four