Re: runs test once more
- To: mathgroup at smc.vnet.net
- Subject: [mg57302] Re: [mg57284] runs test once more
- From: János <janos.lobb at yale.edu>
- Date: Tue, 24 May 2005 05:12:39 -0400 (EDT)
- References: <d6k8ia$q2$1@smc.vnet.net> <200505210641.CAA16452@smc.vnet.net> <200505230621.CAA04079@smc.vnet.net>
- Sender: owner-wri-mathgroup at wolfram.com
On May 23, 2005, at 2:21 AM, Csukas Attila wrote:

> Dear Everybody,
>
> I have checked the runs test, and it seems I missed one step of
> preparation. I have now arranged all the observed and predicted values
> into the third column (third in the brackets); into the second column I
> inserted 1 for the observed values and 2 for the values predicted by
> the model (second in the brackets); and the first column contains the
> id for each case. For each id I would like to know whether there is a
> systematic bias between observed and predicted values or not.
> Basically, I am expecting a nonsignificant runs test result, because
> that would prove to me that the model fitted well. I hope I could make
> my question clear now and would appreciate any help, hints and
> comments on the matter.
>
> Thank you in advance! One of the beginners...
>
> Attila Csukas
>
> Here is the new data:
>
> {{id, run, obspred},
>  {2, 1, 116.9}, {2, 1, 122.1}, {2, 1, 126.1}, {2, 1, 131.1},
>  {2, 1, 137.1}, {2, 1, 141.1}, {2, 1, 148.3}, {2, 1, 161.2},
>  {2, 1, 165.9}, {2, 1, 167.8}, {2, 1, 168}, {2, 1, 170.1},
>  {2, 2, 116.486}, {2, 2, 122.073}, {2, 2, 127.074}, {2, 2, 131.598},
>  {2, 2, 135.899}, {2, 2, 140.88}, {2, 2, 149.053}, {2, 2, 160.338},
>  {2, 2, 166.697}, {2, 2, 168.316}, {2, 2, 168.617}, {2, 2, 168.67},
>  {4, 1, 120.8}, {4, 1, 128.2}, {4, 1, 134.5}, {4, 1, 138.9},
>  {4, 1, 145.2}, {4, 1, 153.7}, {4, 1, 163.7}, {4, 1, 170.1},
>  {4, 1, 172.1}, {4, 1, 174.4}, {4, 1, 177.3}, {4, 1, 177.3},
>  {4, 2, 121.477}, {4, 2, 127.612}, {4, 2, 133.438}, {4, 2, 139.369},
>  {4, 2, 146.045}, {4, 2, 154.016}, {4, 2, 162.616}, {4, 2, 169.62},
>  {4, 2, 173.697}, {4, 2, 175.536}, {4, 2, 176.255}, {4, 2, 176.518},
>  {7, 1, 111.8}, {7, 1, 115.5}, {7, 1, 122.1}, {7, 1, 126.8},
>  {7, 1, 132.4}, {7, 1, 136.4}, {7, 1, 139.1}, {7, 1, 145.3},
>  {7, 1, 152.1}, {7, 1, 161}, {7, 1, 163.2}, {7, 1, 164.1},
>  {7, 2, 110.578}, {7, 2, 116.887}, {7, 2, 122.456}, {7, 2, 127.377},
>  {7, 2, 131.743}, {7, 2, 135.698}, {7, 2, 139.616}, {7, 2, 144.669},
>  {7, 2, 152.685}, {7, 2, 160.428}, {7, 2, 163.488}, {7, 2, 164.176}}
>
> On 2005/05/21, at 15:41, Ray Koopman wrote:
>
>> Csukas Attila wrote:
>>
>>> Dear Everybody,
>>>
>>> I am facing a new problem and, if possible, would like to ask for
>>> some help from experts such as you.
>>> I have observed values (second in the brackets) for three ids
>>> (first in the brackets) and also have predicted values (third in
>>> the brackets) estimated by a model. I would like to use a runs test
>>> to show that the model fitted well, that is, that there is no
>>> significant difference between observed and predicted values for
>>> each id.
>>>
>>> Does anybody know how it can be done? Any help is appreciated!
>>> Thanks in advance!
>>>
>>> Out[10] = {{id, obs, pred},
>>>  {2, 116.9, 116.486}, {2, 122.1, 122.073}, {2, 126.1, 127.074},
>>>  {2, 131.1, 131.598}, {2, 137.1, 135.899}, {2, 141.1, 140.88},
>>>  {2, 148.3, 149.053}, {2, 161.2, 160.338}, {2, 165.9, 166.697},
>>>  {2, 167.8, 168.316}, {2, 168., 168.617}, {2, 170.1, 168.67},
>>>  {4, 120.8, 121.477}, {4, 128.2, 127.612}, {4, 134.5, 133.438},
>>>  {4, 138.9, 139.369}, {4, 145.2, 146.045}, {4, 153.7, 154.016},
>>>  {4, 163.7, 162.616}, {4, 170.1, 169.62}, {4, 172.1, 173.697},
>>>  {4, 174.4, 175.536}, {4, 177.3, 176.255}, {4, 177.3, 176.518},
>>>  {7, 111.8, 110.578}, {7, 115.5, 116.887}, {7, 122.1, 122.456},
>>>  {7, 126.8, 127.377}, {7, 132.4, 131.743}, {7, 136.4, 135.698},
>>>  {7, 139.1, 139.616}, {7, 145.3, 144.669}, {7, 152.1, 152.685},
>>>  {7, 161., 160.428}, {7, 163.2, 163.488}, {7, 164.1, 164.176}}
>>>
>>> One of the mathgroup contributors had comments on the Wald-Wolfowitz
>>> test, but I have difficulties with its application to the above
>>> data. [...]
>>
>> Me too. Runs of *what*?
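One standard reading is runs of the signs of the residuals: for each id
take Sign[obs - pred], count the number R of runs of equal sign, and
compare it with its expectation under randomness,
E[R] = 2 n1 n2/(n1 + n2) + 1, with variance
Var[R] = 2 n1 n2 (2 n1 n2 - n1 - n2)/((n1 + n2)^2 (n1 + n2 - 1)),
where n1 and n2 count the positive and negative residuals; then
z = (R - E[R])/Sqrt[Var[R]] is approximately standard normal. A minimal
sketch along these lines follows after the plotting approach below.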
Here is a naive approach. I just go by the look. First I separate the
observed from the predicted; lst is your new list:

In[111]:= lstobs = Select[lst, #1[[2]] == 1 & ];
          lstpre = Select[lst, #1[[2]] == 2 & ];

Separate the runs:

In[113]:= lstobsruns = (#1[[All,3]] & ) /@
            Split[lstobs, #1[[1]] - #2[[1]] == 0 & ];
          lstpreruns = (#1[[All,3]] & ) /@
            Split[lstpre, #1[[1]] - #2[[1]] == 0 & ];

Take the Fourier transform of each run:

In[115]:= frlstobs = Fourier /@ lstobsruns;
          frlstpre = Fourier /@ lstpreruns;

Separate the real and imaginary parts:

In[129]:= reimfrlstobs = Thread /@ ({Re[#1], Im[#1]} & ) /@ frlstobs;
          reimfrlstpre = Thread /@ ({Re[#1], Im[#1]} & ) /@ frlstpre;

Combine them so that each run's observed and predicted values are
paired with each other:

In[131]:= listplotit = Flatten[Thread[{reimfrlstobs, reimfrlstpre}], 1];

And ListPlot them:

In[132]:= (ListPlot[#1, PlotJoined -> True] & ) /@ listplotit

Then just by looking at them you can see whether the match is good
enough or not. It looks to me that all the observed runs have more
"wiggle" in them than the predictions :)

János
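For reference, here is a minimal sketch of the sign-based
Wald-Wolfowitz runs test described above, assuming lst holds the new
{id, run, obspred} data including its header row. The names runsTestZ,
obsruns, preruns, and signsPerId are illustrative, not from the thread,
and exact ties (zero residuals) are not handled:

runsTestZ[signs_List] :=
  Module[{n1, n2, r, mu, var},
    n1 = Count[signs, 1];              (* positive residuals *)
    n2 = Count[signs, -1];             (* negative residuals *)
    r = Length[Split[signs]];          (* observed number of runs *)
    mu = 2 n1 n2/(n1 + n2) + 1;        (* expected runs under randomness *)
    var = 2 n1 n2 (2 n1 n2 - n1 - n2)/
          ((n1 + n2)^2 (n1 + n2 - 1)); (* degenerate if n1 or n2 is 0 *)
    (r - mu)/Sqrt[var]]                (* approximate z statistic *)

(* group the rows by id, as in the Split calls above; Rest drops the
   {id, run, obspred} header row *)
obsruns = Split[Select[Rest[lst], #1[[2]] == 1 & ],
                #1[[1]] == #2[[1]] & ];
preruns = Split[Select[Rest[lst], #1[[2]] == 2 & ],
                #1[[1]] == #2[[1]] & ];

(* residual signs per id: observed minus predicted, elementwise *)
signsPerId = MapThread[Sign[#1[[All,3]] - #2[[All,3]]] & ,
                       {obsruns, preruns}];

runsTestZ /@ signsPerId

A |z| much larger than about 2 would indicate too few or too many sign
runs, that is, systematic over- or under-prediction; values near 0 are
consistent with a well-fitting model. With only twelve residuals per id
the normal approximation is rough, so a serious test would use the
exact runs distribution.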
- References:
  - Re: runs test for evaluation of model fit
    - From: "Ray Koopman" <koopman@sfu.ca>
  - runs test once more
    - From: Csukas Attila <attila@biking.taiiku.tsukuba.ac.jp>
  - Re: runs test for evaluation of model fit