MathGroup Archive 2003


Re: 3D plot of error function of neural network

  • To: mathgroup at smc.vnet.net
  • Subject: [mg42461] Re: 3D plot of error function of neural network
  • From: "AngleWyrm" <no_spam_anglewyrm at hotmail.com>
  • Date: Wed, 9 Jul 2003 08:24:33 -0400 (EDT)
  • References: <bdefm2$dl6$1@smc.vnet.net> <be5tkv$ja3$1@smc.vnet.net>
  • Sender: owner-wri-mathgroup at wolfram.com

>"Bastian" <baspo at web.de> wrote in message news:bdefm2$dl6$1 at smc.vnet.net...
> Therefore I need a function which produces a 3D plot, like an
> uncoordinated function without any jumps, but unfortunately I've tried
> some functions without "good" results.

Let's say we have a data set of three different states for our two input
perceptrons, together with initial values for the weights of both those
perceptrons.  If we sum the inputs times their respective weights, we get an
output value for each set of input values:

in:
inputs = {{1, 2}, {1, 4}, {2, 3}};
weights = {.2, .3};
(* weight each input pair: e.g. {1, 2} {.2, .3} -> {.2, .6} *)
Table[inputs[[i]] weights, {i, Length[inputs]}];
(* sum each weighted pair to get the output for that case *)
Table[Apply[Plus, %[[i]]], {i, Length[%]}]

out:
{0.8, 1.4, 1.3}
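
As an aside, the same three weighted sums can be written in one step with
the dot product, which is the more idiomatic form of this computation:

in:
inputs . weights

out:
{0.8, 1.4, 1.3}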

If we are discussing a supervised learning algorithm, then we have a
desired outcome for each set of inputs, and the error is how far each
output falls from that ideal. Let us say, for example, that we wish the
network to produce an output of 1.0 in the first case, when the inputs are
{1, 2}; an output of 2.0 in the second case, when the inputs are {1, 4};
and an output of 1.5 in the last case, when the inputs are {2, 3}. Thus our
ideal set of outputs is {1.0, 2.0, 1.5}, and the difference is:

in:
{1.0, 2.0, 1.5} - %

out:
{0.2, 0.6, 0.2}

If we bring the differences to 0, then this network has memorized the data.
However, memorization is not the same thing as learning: a network that
generalizes well usually tolerates some residual error on its training
data, so it might not be ideal to reach 0.0 on all test cases. Nonetheless,
let us assume that we are indeed targeting as close to zero as possible, in
order to memorize the desired outcomes.
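
One standard way to drive those differences toward zero is the delta rule:
after each case, nudge each weight in proportion to the error times the
corresponding input. Here is a minimal sketch, assuming a learning rate of
0.05 and 100 passes over the data (both arbitrary choices; the variable
names are my own):

in:
inputs = {{1, 2}, {1, 4}, {2, 3}};
desired = {1.0, 2.0, 1.5};
weights = {.2, .3};
learningRate = 0.05;
(* delta rule: after each case, move the weights a little in the
   direction that shrinks that case's error *)
Do[
  Do[
    err = desired[[i]] - inputs[[i]] . weights;
    weights += learningRate err inputs[[i]],
    {i, Length[inputs]}],
  {100}];
{weights, inputs . weights}

For this particular data set an exact solution exists, since weights of
{0, 0.5} reproduce {1.0, 2.0, 1.5}, so the loop settles close to those
values.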

Given any specific pair of inputs, we can adjust both weights and observe
the resultant difference between the desired outcome and the actual
outcome. Here are both a 3D graph and a contour plot:

inputs = {1, 2};
desiredOutcome = 1.0;
(* error for this single case, as a function of the two weights *)
errorFunction = desiredOutcome - (inputs[[1]] firstWeight +
    inputs[[2]] secondWeight);
Plot3D[errorFunction, {firstWeight, -1, 1}, {secondWeight, -1, 1}]

ContourPlot[errorFunction, {firstWeight, 0, 1}, {secondWeight, 0, 1},
  Contours -> 10, ContourShading -> True];
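
The surface above is a plane, since the error of a single case is linear in
the weights. The bowl-shaped "error function" usually pictured for a
network comes from summing the squared error over all training cases; a
minimal sketch of that surface, reusing the data above (again, the variable
names are my own):

in:
allInputs = {{1, 2}, {1, 4}, {2, 3}};
desired = {1.0, 2.0, 1.5};
(* total squared error over all three cases, as a function of the weights *)
totalError = Apply[Plus,
    (desired - allInputs . {firstWeight, secondWeight})^2];
Plot3D[totalError, {firstWeight, -1, 1}, {secondWeight, -1, 1}]

This surface bottoms out at firstWeight = 0, secondWeight = 0.5, the weight
pair that memorizes the data exactly.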

