MathGroup Archive 2010


Re: Dynamic evaluation of layered networks

  • To: mathgroup at
  • Subject: [mg109353] Re: Dynamic evaluation of layered networks
  • From: Albert Retey <awnl at>
  • Date: Fri, 23 Apr 2010 03:47:56 -0400 (EDT)
  • References: <hqmok8$4ch$>


> I want to implement a multilayer feedforward network in such a way that 
> changing value(s) in the input layer automatically causes reevaluation of 
> only those parts of the network that are involved in the display of a 
> result, and it seems to me that Dynamic does what I want.

I'm not sure I really understand what you want, but in general I think
it is a very bad idea to use Dynamic to implement the control flow of
your program. It is not meant for that, and since your concern seems to
be efficiency, keep in mind that a Dynamic lives in the FrontEnd. This
means there is extra overhead for the information exchange between the
frontend and the kernel: I doubt that what you would achieve this way
would be very fast...

> A simple 3-layer example would be:
> f1[x_] = x^2;
> f2[x_] := f1[x] + 1;
> f3a[x_] := f2[x] + 1;
> f3b[x_] := f2[x] - 1;
> Dynamic[{f3a[2], f3b[2]}]
> Any subsequent change to the f1[x_] = definition in the input layer 
> automatically causes the above Dynamic to reevaluate.
> That's fine, except that this causes f2[x] to be evaluated twice, once for 
> f3a[x] and once for f3b[x], which would be inefficient when generalised to 
> very large layered networks in which the f functions are costly to evaluate. 
> Unfortunately, the f2[x]:=f2[x]=... trick for memorising previously 
> evaluated results doesn't help us here because it prevents the Dynamic from 
> being sensitive to changes in the f1[x_] = definition in the input layer.
> There are messy ways of programming around this problem (e.g. using the 
> memorisation trick but modified so you forget memorised results that are 
> "out of date"), but is there any solution that finesses the problem by 
> cleverly using Mathematica's evaluation engine?
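For reference, the "messy" workaround mentioned above can be sketched quite compactly: keep the memoized values on a separate symbol (here called `cache` and `setInput`, both illustrative names, not anything built in), so that redefining the input layer only needs to clear the cache, not the network:

```mathematica
(* Input layer; redefine it via setInput so stale results are dropped. *)
f1[x_] := x^2;

(* f2 memoizes into the separate symbol cache instead of onto itself,
   so the layer definitions stay untouched by the caching. *)
f2[x_] := With[{c = cache[f2, x]},
   If[Head[c] === cache, cache[f2, x] = f1[x] + 1, c]];
f3a[x_] := f2[x] + 1;
f3b[x_] := f2[x] - 1;

(* Changing the input layer: clear only the cache, keep the network. *)
setInput[g_] := (Clear[cache]; f1[x_] := g[x]);
```

After e.g. `setInput[#^3 &]`, the next evaluation of `{f3a[2], f3b[2]}` recomputes through the fresh `f1`, while repeated uses of `f2[x]` within one evaluation still hit the cache.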

I think relying on the standard memoization trick is your best bet, and
it can be implemented in a rather simple way. Note that I have defined
the node functions as DownValues of a single symbol, using the
convention that the first argument is the layer number and the second
is the node number. I have also included Print statements to
demonstrate that each function is indeed called only once per input.
Here is the example:

initializeNetwork[f_Symbol] := (
  f[1, 1, x_] := f[1, 1, x] = (Print[{1, 1}]; x^2);
  f[2, 1, x_] := f[2, 1, x] = (Print[{2, 1}]; f[1, 1, x] + 1);
  f[3, 1, x_] := f[3, 1, x] = (Print[{3, 1}]; f[2, 1, x] + 1);
  f[3, 2, x_] := f[3, 2, x] = (Print[{3, 2}]; f[2, 1, x] - 1);
  )

Now you would first initialize the network, then you could ask it for
results. If you are done you might want to Clear the network symbol
since it will accumulate quite a few DownValues:


initializeNetwork[out]

out[3, 2, 0.5]

out[3, 1, 0.5]

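That cleanup step can be sketched as follows, assuming the network symbol was called `out` as above: a Clear followed by re-running initializeNetwork both removes the accumulated cached DownValues and picks up any changed layer definitions.

```mathematica
(* Drop all definitions and cached values, then rebuild the network. *)
Clear[out];
initializeNetwork[out];

out[3, 1, 0.5]  (* recomputes from scratch; the Prints fire again *)
```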


