MathGroup Archive 2003


Re: 3D plot of error function of neural network

  • To: mathgroup at smc.vnet.net
  • Subject: [mg42483] Re: 3D plot of error function of neural network
  • From: "AngleWyrm" <no_spam_anglewyrm at hotmail.com>
  • Date: Thu, 10 Jul 2003 03:37:09 -0400 (EDT)
  • References: <bdefm2$dl6$1@smc.vnet.net> <be5tkv$ja3$1@smc.vnet.net>
  • Sender: owner-wri-mathgroup at wolfram.com

 "Bastian" <baspo at web.de> wrote in message news:bdefm2$dl6$1 at smc.vnet.net...
> Therefore I need a function which produces a 3D plot like a
> uncoordinated function without any jumps, but unfortunately I've tried
> some functions without a "good" results.

Further exploration of the desired error surface produced some interesting
results. The short answer: try using vector distances, as measured with the
Pythagorean theorem.

The long answer (with cool looking graph):
We have a set of input values, a set of desired outcomes, and wish to view
the error surface formed by tuning the weights. Let's use the previous post
as a running example. We have a two-perceptron neural network undergoing
supervised training. The output is to be trained to respond to three
different two-input states by outputting a specific value for each one. So
let's list the three input states, and make a list of the corresponding
outputs:

inputs = { {1, 2}, {1, 4}, {2, 3} };
desiredOutput = {1.0, 2.0, 1.5 };

The output generated by a perceptron is usually calculated as the sum of the
inputs times their weights. This can also be written as the dot product of an
input pair with the weight vector {firstWeight, secondWeight}. We can extract
one pair of input values from our list with inputs[[i]], and use variables for
the weights like so:

output = Table[ Dot[ inputs[[i]], {w1, w2}], {i, Length[inputs]}];
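As a cross-check outside Mathematica, the same outputs can be sketched in
plain Python (a hypothetical stand-in; the function name outputs is my own):

```python
# Cross-check of the perceptron outputs: each output is the dot
# product of one input pair with the weight vector (w1, w2).
inputs = [(1, 2), (1, 4), (2, 3)]

def outputs(w1, w2):
    """Return the network output for every input pair."""
    return [x1 * w1 + x2 * w2 for (x1, x2) in inputs]

# With weights (0.5, 0.1) the three outputs are 0.7, 0.9 and 1.3.
print(outputs(0.5, 0.1))
```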

Now consider the output and the desiredOutput as coordinates; we can take
the Euclidean distance between them to be the error:

error = Sqrt[ (desiredOutput - output)^2 ]

Add together all three distances, and you have a usable metric for total
error!

errorFunction = Apply[Plus, Sqrt[ (desiredOutput - Table[ Dot[inputs[[i]],
{w1, w2}], {i, Length[inputs]}])^2 ]]

Here's the complete process, including graph (four lines of text):

inputs = { {1, 2}, {1, 4}, {2, 3} };
desiredOutput = {1.0, 2.0, 1.5 };
errorFunction = Apply[Plus, Sqrt[(desiredOutput - Table[Dot[inputs[[i]],
{w1, w2}], {i, Length[inputs]}])^2]]
Plot3D[errorFunction, {w1, -2, 2}, {w2, -2, 2}, PlotPoints -> 100]
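For readers without Mathematica, here is a Python sketch of the same metric,
the sum over training pairs of |desired - output|, together with a coarse grid
scan over the plotted range (the names total_error, grid, and best are my own):

```python
# Python stand-in for the error surface: for each training pair the
# output is the dot product input . (w1, w2), and the total error is
# the sum of absolute differences from the desired outputs.
inputs = [(1, 2), (1, 4), (2, 3)]
desired = [1.0, 2.0, 1.5]

def total_error(w1, w2):
    return sum(abs(d - (x1 * w1 + x2 * w2))
               for (x1, x2), d in zip(inputs, desired))

# A 41 x 41 grid over the same [-2, 2] x [-2, 2] window as the
# Plot3D call, locating the weights with the smallest total error.
grid = [(-2 + i * 0.1, -2 + j * 0.1) for i in range(41) for j in range(41)]
best = min(grid, key=lambda w: total_error(*w))
print(best, total_error(*best))
```

The scan bottoms out at (w1, w2) = (0, 0.5), where all three desired outputs
are reproduced exactly, which is the valley the 3D plot makes visible.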

