MathGroup Archive 2009


Re: Distribution of state occupancies in a multistate Markov model

  • To: mathgroup at
  • Subject: [mg104449] Re: Distribution of state occupancies in a multistate Markov model
  • From: mathlawguy <chandler.seth at>
  • Date: Sun, 1 Nov 2009 03:57:27 -0500 (EST)
  • References: <hc92bk$edu$> <hce3vp$r1u$>

On Oct 30, 2:18 am, fd <fdi... at> wrote:
> Seth,
> Well, I'm not sure if my approach is the one that suits what you need,
> but let me try.
> I would start by looking at whether the Chapman-Kolmogorov equation can
> be solved analytically:
> P(x1,t1|x0,t0) = Integrate P(x1,t1|x2,t2) P(x2,t2|x0,t0) dx2
> integrating over all possible states x2. Depending on your transition
> matrix, whose entries are the P(x1,t1|x2,t2), you may be able to
> perform this integration.
> Please take a look at Gardiner's book, which I mention below; MacKay's
> book has a few comments in chapter 30.
> refs:
> David J. C. MacKay, Information Theory, Inference, and Learning
> Algorithms, Cambridge University Press.
> You can find a copy of this book online if you agree not to print it.
> Another good reference is the book by the New Zealand physicist
> C. W. Gardiner, Handbook of Stochastic Methods.
> good luck

Thanks for the ideas. I will definitely check out the references
you've provided. In the meantime, I think I have made some definite
headway on my problem.

Let P be a sequence of possibly heterogeneous (time-inhomogeneous)
stochastic matrices, one per transition.

We can then define Phi as the sequence of their partial dot products, so
that Phi[[n]] = P[[1]].P[[2]] ... P[[n]].

I believe Phi[[n]][[i,j]] now represents the probability that, having
started in state i, you will be in state j at time n.
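For anyone who wants to check this step outside Mathematica, here is a
minimal NumPy sketch of the partial products (the 2-state matrices and the
function name `cumulative_products` are invented for illustration):

```python
import numpy as np

# Hypothetical 2-state heterogeneous chain: one stochastic matrix per
# time step (each row sums to 1).
P = [np.array([[0.9, 0.1],
               [0.2, 0.8]]),
     np.array([[0.7, 0.3],
               [0.4, 0.6]])]

def cumulative_products(mats):
    """Phi[n] = mats[0] @ mats[1] @ ... @ mats[n]: the n-step
    transition matrix of the inhomogeneous chain."""
    out, acc = [], np.eye(len(mats[0]))
    for m in mats:
        acc = acc @ m
        out.append(acc)
    return out

Phi = cumulative_products(P)
# Phi[1][0, 1] is the probability of being in state 1 after two
# transitions, having started in state 0.
```

A product of stochastic matrices is again stochastic, so each Phi[n] has
rows summing to 1, which is a quick sanity check.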

OK. So we now define a sequence of Bernoulli random variables with
probabilities Phi[[m]][[i,j]] as m runs from 1 to n. If we take the
convolution of these Bernoulli random variables, we get the probability
distribution for the number of times we will have been in state j,
having started at state i and having made a total of n transitions. In
Mathematica, it appears one can do this by taking the
CharacteristicFunction of each Bernoulli random variable, multiplying
them together, and then applying InverseFourierTransform with
FourierParameters -> {1, 1}. Here's the actual code I'm using.

(* \[CapitalPhi][P] is the sequence of partial products
   P[[1]], P[[1]].P[[2]], ..., e.g. FoldList[Dot, First[P], Rest[P]] *)
dist[P : {___?MatrixQ}, {i_Integer, j_Integer}] :=
 InverseFourierTransform[
  Times @@ Map[
     CharacteristicFunction[BernoulliDistribution[#[[i, j]]], t] &,
     \[CapitalPhi][P]] // Expand, t, x,
  FourierParameters -> {1, 1}]
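The same convolution can also be done directly on the probability mass
functions, which may help anyone checking the result numerically; here is
a short NumPy sketch of the equivalent computation (the function name and
example probabilities are mine):

```python
import numpy as np

def occupancy_pmf(ps):
    """PMF of the sum of independent Bernoulli(p) variables (a
    Poisson binomial distribution), computed by repeated convolution --
    the time-domain counterpart of multiplying characteristic
    functions and inverting the transform."""
    pmf = np.array([1.0])
    for p in ps:
        pmf = np.convolve(pmf, [1.0 - p, p])
    return pmf  # pmf[k] = P(occupancy count == k)

# e.g. per-step success probabilities Phi[[m]][[i, j]] for m = 1..n
pmf = occupancy_pmf([0.1, 0.3])
```

With two steps the result has three atoms (counts 0, 1, 2), matching the
DiracDelta coefficients that the Mathematica version produces.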

When I do this, I end up with an expression involving a bunch of
DiracDeltas, which is what one would expect. Moreover, the process
appears to be quite fast. I've developed an analytic expression for the
coefficients in front of the DiracDeltas involving binomials, but for
some reason it takes longer to compute them than it does to just use the
method described above.

Moreover, if one wants to compute the probability distribution of the
number of times the process will have been in some set of states J, one
sets the parameter of each Bernoulli random variable to
Sum[Phi[[n]][[i, j]], {j, J}] (the sum over all j in J) and then
performs a similar computation. Here's the code:

dist[P : {___?MatrixQ}, {i_Integer, j : {__Integer}}] :=
 InverseFourierTransform[
  Times @@ Map[
     CharacteristicFunction[BernoulliDistribution[Total[#[[i, j]]]],
       t] &, \[CapitalPhi][P]] // Expand, t, x,
  FourierParameters -> {1, 1}]
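Again, the set-of-states variant is easy to check numerically; a NumPy
sketch (the 3-state Phi matrices, start state, and target set below are
made up for illustration):

```python
import numpy as np

# Hypothetical partial products Phi for two time steps over states
# {0, 1, 2}; start state i = 0, target set J = {1, 2}.
Phi = [np.array([[0.5, 0.3, 0.2],
                 [0.1, 0.6, 0.3],
                 [0.2, 0.2, 0.6]]),
       np.array([[0.4, 0.4, 0.2],
                 [0.2, 0.5, 0.3],
                 [0.3, 0.3, 0.4]])]
i, J = 0, [1, 2]

# Per-step success probability: Total[Phi[[m]][[i, J]]] in the
# Mathematica code, i.e. P(in some state of J at time m | start at i).
ps = [phi[i, J].sum() for phi in Phi]

# Convolve the Bernoulli pmfs, exactly as in the single-state case.
pmf = np.array([1.0])
for p in ps:
    pmf = np.convolve(pmf, [1.0 - p, p])
```

Here `pmf[k]` is the probability of having been in the set J exactly k
times over the two transitions.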

If anyone thinks I am doing this incorrectly or has a better/faster
idea, please feel strongly encouraged to let me know as I am about to
base some important computations on the idea.
