MathGroup Archive 2010

Re: Dynamic evaluation of layered networks

  • To: mathgroup at smc.vnet.net
  • Subject: [mg109439] Re: Dynamic evaluation of layered networks
  • From: "OmniaOpera" <OmniaOpera at _deletethistext_googlemail.com>
  • Date: Tue, 27 Apr 2010 04:06:33 -0400 (EDT)
  • References: <hqmok8$4ch$1@smc.vnet.net>

Thank you for your responses. I'll gather my response(s) together here:

I already use things like SparseArray, Dot (and lots more besides) and I do 
a complete reevaluation every time something changes. I can get a big gain 
from using Mathematica's sparse linear algebra, but there is a lot more 
reevaluation going on than is actually necessary. The general computational 
architecture that I envisage is lots (eventually, lots might be LOTS) of 
simple interconnected processors updating themselves. A neural network 
package might seem to be the way to go, but I really need the general 
capabilities of Mathematica. Maybe I will eventually farm out code that is 
hostile to efficient implementation in Mathematica (e.g. via MathLink to C).
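
By way of illustration, a single layer update in the architecture I have 
in mind is just a (sparse) matrix-vector product, something like the 
following (w12 and layer1 are made-up names, purely for illustration):

w12 = SparseArray[{{1, 1} -> 2., {2, 3} -> -1.}, {2, 3}];  (* weights *)
layer1 = {1., 0., 3.};   (* activations of the previous layer *)
layer2 = w12 . layer1    (* gives {2., -3.} *)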

What I want to do would be logically equivalent to a dynamic presentation, 
and small examples could actually be dynamic presentations. I want to 
produce an output which depends on an arbitrary set of user-definable and 
changeable quantities, which could be network inputs, network 
layer-to-layer mappings, etc. However, I wondered whether I could finesse 
the recalculation problem so that Mathematica's evaluation engine worked 
out the minimum amount of reevaluation it had to do to update the output. 
I wondered whether displaying an appropriate output could be used to trick 
Mathematica's Dynamic into doing this reevaluation for me. I realise that 
this is probably not what the designers of Dynamic had in mind, but 
wouldn't it be nice if it were a side effect of their design? If the 
reevaluations that Dynamic asks the Kernel to do are computationally 
intensive, then the overhead for FrontEnd-Kernel communication is 
negligible in comparison.
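
Dynamic does at least let you choose which symbols it watches, via its 
TrackedSymbols option. A minimal sketch, reusing the layer functions from 
the quoted message below (the restriction to f1 is purely illustrative):

f1[x_] = x^2;
f2[x_] := f1[x] + 1;

(* Reevaluates only when f1's definition changes; editing f2 alone
   will not redraw the display. *)
Dynamic[f2[3], TrackedSymbols :> {f1}]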

So, if I insist on not simply reevaluating everything, then I am going to 
have to trace through the dependencies myself in order to determine what 
needs to be reevaluated. I am surprised that there isn't already a 
shrink-wrapped Mathematica solution (a cunning Mathematica idiom, maybe) to 
this type of partial reevaluation problem.
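
For what it's worth, here is a minimal sketch of the "forget out-of-date 
memoised results" idea mentioned in the quoted message (the names setInput 
and $inputVersion are my own inventions, not an established idiom):

$inputVersion = 0;

(* Redefine the input layer only through this helper, so the version
   counter is bumped on every change, e.g. setInput[#^2 &]. *)
setInput[fun_] := (Clear[f1]; f1[x_] = fun[x]; $inputVersion++);

(* Memoise per (argument, version): cached values are reused only while
   the input layer is unchanged; stale entries are never matched again,
   though they do accumulate until f2 is cleared. *)
f2[x_] := f2[x, $inputVersion];
f2[x_, v_] := f2[x, v] = f1[x] + 1;

Because f2[x] reads $inputVersion, a Dynamic displaying f2 is still 
triggered by setInput; the memoisation itself adds DownValues to f2, so 
the Dynamic may fire an extra time before settling, but the costly 
evaluation happens only once per version.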

OO

"OmniaOpera" <OmniaOpera at _deletethistext_googlemail.com> wrote in message 
news:hqmok8$4ch$1 at smc.vnet.net...
> I want to implement a multilayer feedforward network in such a way that
> changing value(s) in the input layer automatically causes reevaluation of
> only those parts of the network that are involved in the display of a
> result, and it seems to me that Dynamic does what I want.
>
> A simple 3-layer example would be:
>
> f1[x_] = x^2;
> f2[x_] := f1[x] + 1;
> f3a[x_] := f2[x] + 1;
> f3b[x_] := f2[x] - 1;
>
> Dynamic[{f3a[2], f3b[2]}]
>
> Any subsequent change to the f1[x_] = definition in the input layer
> automatically causes the above Dynamic to reevaluate.
>
> That's fine, except that this causes f2[x] to be evaluated twice, once
> for f3a[x] and once for f3b[x], which would be inefficient when
> generalised to very large layered networks in which the f functions are
> costly to evaluate.
> Unfortunately, the f2[x]:=f2[x]=... trick for memoising previously
> evaluated results doesn't help us here because it prevents the Dynamic
> from being sensitive to changes in the f1[x_] = definition in the input
> layer.
>
> There are messy ways of programming around this problem (e.g. using the
> memoisation trick but modified so you forget memoised results that are
> "out of date"), but is there any solution that finesses the problem by
> cleverly using Mathematica's evaluation engine?
>
> OO
>