<?xml version="1.0"?>
|
|
<doc>
|
|
<assembly>
|
|
<name>AForge.Neuro</name>
|
|
</assembly>
|
|
<members>
|
|
<member name="T:AForge.Neuro.BipolarSigmoidFunction">
|
|
<summary>
|
|
Bipolar sigmoid activation function.
|
|
</summary>
|
|
|
|
<remarks><para>The class represents bipolar sigmoid activation function with
|
|
the following expression:
|
|
<code lang="none">
|
|
                2
f(x) = ------------------ - 1
       1 + exp(-alpha * x)

            2 * alpha * exp(-alpha * x )
f'(x) = -------------------------------- = alpha * (1 - f(x)^2) / 2
             (1 + exp(-alpha * x))^2
|
|
</code>
|
|
</para>
|
|
|
|
<para>Output range of the function: <b>[-1, 1]</b>.</para>
|
|
|
|
<para>Functions graph:</para>
|
|
<img src="img/neuro/sigmoid_bipolar.bmp" width="242" height="172" />
|
|
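<para>Sample usage (a minimal sketch of evaluating the function and its derivative):</para>
<code>
// create function with alpha = 2
BipolarSigmoidFunction func = new BipolarSigmoidFunction( 2 );
// calculate function value and derivative at the same point
double y  = func.Function( 0.5 );
double dy = func.Derivative( 0.5 );
// the same derivative value, but calculated from the already known function value
double dy2 = func.Derivative2( y );
</code>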
</remarks>
|
|
|
|
</member>
|
|
<member name="P:AForge.Neuro.BipolarSigmoidFunction.Alpha">
|
|
<summary>
|
|
Sigmoid's alpha value.
|
|
</summary>
|
|
|
|
<remarks><para>The value determines steepness of the function. Increasing value of
|
|
this property changes sigmoid to look more like a threshold function. Decreasing
|
|
value of this property makes sigmoid to be very smooth (slowly growing from its
|
|
minimum value to its maximum value).</para>
|
|
|
|
<para>Default value is set to <b>2</b>.</para>
|
|
</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.BipolarSigmoidFunction.#ctor">
|
|
<summary>
|
|
Initializes a new instance of the <see cref="T:AForge.Neuro.SigmoidFunction"/> class.
|
|
</summary>
|
|
</member>
|
|
<member name="M:AForge.Neuro.BipolarSigmoidFunction.#ctor(System.Double)">
|
|
<summary>
|
|
Initializes a new instance of the <see cref="T:AForge.Neuro.BipolarSigmoidFunction"/> class.
|
|
</summary>
|
|
|
|
<param name="alpha">Sigmoid's alpha value.</param>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.BipolarSigmoidFunction.Function(System.Double)">
|
|
<summary>
|
|
Calculates function value.
|
|
</summary>
|
|
|
|
<param name="x">Function input value.</param>
|
|
|
|
<returns>Function output value, <i>f(x)</i>.</returns>
|
|
|
|
<remarks>The method calculates function value at point <paramref name="x"/>.</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.BipolarSigmoidFunction.Derivative(System.Double)">
|
|
<summary>
|
|
Calculates function derivative.
|
|
</summary>
|
|
|
|
<param name="x">Function input value.</param>
|
|
|
|
<returns>Function derivative, <i>f'(x)</i>.</returns>
|
|
|
|
<remarks>The method calculates function derivative at point <paramref name="x"/>.</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.BipolarSigmoidFunction.Derivative2(System.Double)">
|
|
<summary>
|
|
Calculates function derivative.
|
|
</summary>
|
|
|
|
<param name="y">Function output value - the value, which was obtained
|
|
with the help of <see cref="M:AForge.Neuro.BipolarSigmoidFunction.Function(System.Double)"/> method.</param>
|
|
|
|
<returns>Function derivative, <i>f'(x)</i>.</returns>
|
|
|
|
<remarks><para>The method calculates the same derivative value as the
|
|
<see cref="M:AForge.Neuro.BipolarSigmoidFunction.Derivative(System.Double)"/> method, but it takes not the input <b>x</b> value
|
|
itself, but the function value, which was calculated previously with
|
|
the help of <see cref="M:AForge.Neuro.BipolarSigmoidFunction.Function(System.Double)"/> method.</para>
|
|
|
|
<para><note>Some applications require both the function value and the derivative value,
|
|
so they can save on calculations by using this method to get the derivative from an already computed function value.</note></para>
|
|
</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.BipolarSigmoidFunction.Clone">
|
|
<summary>
|
|
Creates a new object that is a copy of the current instance.
|
|
</summary>
|
|
|
|
<returns>
|
|
A new object that is a copy of this instance.
|
|
</returns>
|
|
|
|
</member>
|
|
<member name="T:AForge.Neuro.IActivationFunction">
|
|
<summary>
|
|
Activation function interface.
|
|
</summary>
|
|
|
|
<remarks>All activation functions, which are supposed to be used with
|
|
neurons, which calculate their output as a function of weighted sum of
|
|
their inputs, should implement this interface.
|
|
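<para>A minimal sketch of a custom activation function (assuming the interface declares only the
three methods documented here; the <b>LinearFunction</b> class below is purely illustrative):</para>
<code>
// custom activation function: f(x) = alpha * x
public class LinearFunction : IActivationFunction
{
    private double alpha = 1.0;

    // calculate function value
    public double Function( double x )
    {
        return alpha * x;
    }

    // calculate function derivative
    public double Derivative( double x )
    {
        return alpha;
    }

    // calculate derivative from a previously computed function value
    public double Derivative2( double y )
    {
        return alpha;
    }
}
</code>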
</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.IActivationFunction.Function(System.Double)">
|
|
<summary>
|
|
Calculates function value.
|
|
</summary>
|
|
|
|
<param name="x">Function input value.</param>
|
|
|
|
<returns>Function output value, <i>f(x)</i>.</returns>
|
|
|
|
<remarks>The method calculates function value at point <paramref name="x"/>.</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.IActivationFunction.Derivative(System.Double)">
|
|
<summary>
|
|
Calculates function derivative.
|
|
</summary>
|
|
|
|
<param name="x">Function input value.</param>
|
|
|
|
<returns>Function derivative, <i>f'(x)</i>.</returns>
|
|
|
|
<remarks>The method calculates function derivative at point <paramref name="x"/>.</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.IActivationFunction.Derivative2(System.Double)">
|
|
<summary>
|
|
Calculates function derivative.
|
|
</summary>
|
|
|
|
<param name="y">Function output value - the value, which was obtained
|
|
with the help of <see cref="M:AForge.Neuro.IActivationFunction.Function(System.Double)"/> method.</param>
|
|
|
|
<returns>Function derivative, <i>f'(x)</i>.</returns>
|
|
|
|
<remarks><para>The method calculates the same derivative value as the
|
|
<see cref="M:AForge.Neuro.IActivationFunction.Derivative(System.Double)"/> method, but it takes not the input <b>x</b> value
|
|
itself, but the function value, which was calculated previously with
|
|
the help of <see cref="M:AForge.Neuro.IActivationFunction.Function(System.Double)"/> method.</para>
|
|
|
|
<para><note>Some applications require both the function value and the derivative value,
|
|
so they can save on calculations by using this method to get the derivative from an already computed function value.</note></para>
|
|
</remarks>
|
|
|
|
</member>
|
|
<member name="T:AForge.Neuro.SigmoidFunction">
|
|
<summary>
|
|
Sigmoid activation function.
|
|
</summary>
|
|
|
|
<remarks><para>The class represents sigmoid activation function with
|
|
the following expression:
|
|
<code lang="none">
|
|
                1
f(x) = ------------------
       1 + exp(-alpha * x)

            alpha * exp(-alpha * x )
f'(x) = ---------------------------- = alpha * f(x) * (1 - f(x))
            (1 + exp(-alpha * x))^2
|
|
</code>
|
|
</para>
|
|
|
|
<para>Output range of the function: <b>[0, 1]</b>.</para>
|
|
|
|
<para>Functions graph:</para>
|
|
<img src="img/neuro/sigmoid.bmp" width="242" height="172" />
|
|
</remarks>
|
|
|
|
</member>
|
|
<member name="P:AForge.Neuro.SigmoidFunction.Alpha">
|
|
<summary>
|
|
Sigmoid's alpha value.
|
|
</summary>
|
|
|
|
<remarks><para>The value determines steepness of the function. Increasing value of
|
|
this property changes sigmoid to look more like a threshold function. Decreasing
|
|
value of this property makes sigmoid to be very smooth (slowly growing from its
|
|
minimum value to its maximum value).</para>
|
|
|
|
<para>Default value is set to <b>2</b>.</para>
|
|
</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.SigmoidFunction.#ctor">
|
|
<summary>
|
|
Initializes a new instance of the <see cref="T:AForge.Neuro.SigmoidFunction"/> class.
|
|
</summary>
|
|
</member>
|
|
<member name="M:AForge.Neuro.SigmoidFunction.#ctor(System.Double)">
|
|
<summary>
|
|
Initializes a new instance of the <see cref="T:AForge.Neuro.SigmoidFunction"/> class.
|
|
</summary>
|
|
|
|
<param name="alpha">Sigmoid's alpha value.</param>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.SigmoidFunction.Function(System.Double)">
|
|
<summary>
|
|
Calculates function value.
|
|
</summary>
|
|
|
|
<param name="x">Function input value.</param>
|
|
|
|
<returns>Function output value, <i>f(x)</i>.</returns>
|
|
|
|
<remarks>The method calculates function value at point <paramref name="x"/>.</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.SigmoidFunction.Derivative(System.Double)">
|
|
<summary>
|
|
Calculates function derivative.
|
|
</summary>
|
|
|
|
<param name="x">Function input value.</param>
|
|
|
|
<returns>Function derivative, <i>f'(x)</i>.</returns>
|
|
|
|
<remarks>The method calculates function derivative at point <paramref name="x"/>.</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.SigmoidFunction.Derivative2(System.Double)">
|
|
<summary>
|
|
Calculates function derivative.
|
|
</summary>
|
|
|
|
<param name="y">Function output value - the value, which was obtained
|
|
with the help of <see cref="M:AForge.Neuro.SigmoidFunction.Function(System.Double)"/> method.</param>
|
|
|
|
<returns>Function derivative, <i>f'(x)</i>.</returns>
|
|
|
|
<remarks><para>The method calculates the same derivative value as the
|
|
<see cref="M:AForge.Neuro.SigmoidFunction.Derivative(System.Double)"/> method, but it takes not the input <b>x</b> value
|
|
itself, but the function value, which was calculated previously with
|
|
the help of <see cref="M:AForge.Neuro.SigmoidFunction.Function(System.Double)"/> method.</para>
|
|
|
|
<para><note>Some applications require both the function value and the derivative value,
|
|
so they can save on calculations by using this method to get the derivative from an already computed function value.</note></para>
|
|
</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.SigmoidFunction.Clone">
|
|
<summary>
|
|
Creates a new object that is a copy of the current instance.
|
|
</summary>
|
|
|
|
<returns>
|
|
A new object that is a copy of this instance.
|
|
</returns>
|
|
|
|
</member>
|
|
<member name="T:AForge.Neuro.ThresholdFunction">
|
|
<summary>
|
|
Threshold activation function.
|
|
</summary>
|
|
|
|
<remarks><para>The class represents threshold activation function with
|
|
the following expression:
|
|
<code lang="none">
|
|
f(x) = 1, if x >= 0, otherwise 0
|
|
</code>
|
|
</para>
|
|
|
|
<para>Output range of the function: <b>[0, 1]</b>.</para>
|
|
|
|
<para>Functions graph:</para>
|
|
<img src="img/neuro/threshold.bmp" width="242" height="172" />
|
|
</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.ThresholdFunction.#ctor">
|
|
<summary>
|
|
Initializes a new instance of the <see cref="T:AForge.Neuro.ThresholdFunction"/> class.
|
|
</summary>
|
|
</member>
|
|
<member name="M:AForge.Neuro.ThresholdFunction.Function(System.Double)">
|
|
<summary>
|
|
Calculates function value.
|
|
</summary>
|
|
|
|
<param name="x">Function input value.</param>
|
|
|
|
<returns>Function output value, <i>f(x)</i>.</returns>
|
|
|
|
<remarks>The method calculates function value at point <paramref name="x"/>.</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.ThresholdFunction.Derivative(System.Double)">
|
|
<summary>
|
|
Calculates function derivative (not supported).
|
|
</summary>
|
|
|
|
<param name="x">Input value.</param>
|
|
|
|
<returns>Always returns 0.</returns>
|
|
|
|
<remarks><para><note>The method is not supported, because it is not possible to
|
|
calculate derivative of the function.</note></para></remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.ThresholdFunction.Derivative2(System.Double)">
|
|
<summary>
|
|
Calculates function derivative (not supported).
|
|
</summary>
|
|
|
|
<param name="y">Input value.</param>
|
|
|
|
<returns>Always returns 0.</returns>
|
|
|
|
<remarks><para><note>The method is not supported, because it is not possible to
|
|
calculate derivative of the function.</note></para></remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.ThresholdFunction.Clone">
|
|
<summary>
|
|
Creates a new object that is a copy of the current instance.
|
|
</summary>
|
|
|
|
<returns>
|
|
A new object that is a copy of this instance.
|
|
</returns>
|
|
|
|
</member>
|
|
<member name="T:AForge.Neuro.ActivationLayer">
|
|
<summary>
|
|
Activation layer.
|
|
</summary>
|
|
|
|
<remarks>Activation layer is a layer of <see cref="T:AForge.Neuro.ActivationNeuron">activation neurons</see>.
|
|
The layer is usually used in multi-layer neural networks.</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.ActivationLayer.#ctor(System.Int32,System.Int32,AForge.Neuro.IActivationFunction)">
|
|
<summary>
|
|
Initializes a new instance of the <see cref="T:AForge.Neuro.ActivationLayer"/> class.
|
|
</summary>
|
|
|
|
<param name="neuronsCount">Layer's neurons count.</param>
|
|
<param name="inputsCount">Layer's inputs count.</param>
|
|
<param name="function">Activation function of neurons of the layer.</param>
|
|
|
|
<remarks>The new layer is randomized (see <see cref="M:AForge.Neuro.ActivationNeuron.Randomize"/>
|
|
method) after it is created.</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.ActivationLayer.SetActivationFunction(AForge.Neuro.IActivationFunction)">
|
|
<summary>
|
|
Set new activation function for all neurons of the layer.
|
|
</summary>
|
|
|
|
<param name="function">Activation function to set.</param>
|
|
|
|
<remarks><para>The method sets the new activation function for each neuron by setting
|
|
their <see cref="P:AForge.Neuro.ActivationNeuron.ActivationFunction"/> property.</para></remarks>
|
|
|
|
</member>
|
|
<member name="T:AForge.Neuro.DistanceLayer">
|
|
<summary>
|
|
Distance layer.
|
|
</summary>
|
|
|
|
<remarks>Distance layer is a layer of <see cref="T:AForge.Neuro.DistanceNeuron">distance neurons</see>.
|
|
The layer is usually a single layer of such networks as Kohonen Self
|
|
Organizing Map, Elastic Net, Hamming Memory Net.</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.DistanceLayer.#ctor(System.Int32,System.Int32)">
|
|
<summary>
|
|
Initializes a new instance of the <see cref="T:AForge.Neuro.DistanceLayer"/> class.
|
|
</summary>
|
|
|
|
<param name="neuronsCount">Layer's neurons count.</param>
|
|
<param name="inputsCount">Layer's inputs count.</param>
|
|
|
|
<remarks>The new layer is randomized (see <see cref="M:AForge.Neuro.Neuron.Randomize"/>
|
|
method) after it is created.</remarks>
|
|
|
|
</member>
|
|
<member name="T:AForge.Neuro.Layer">
|
|
<summary>
|
|
Base neural layer class.
|
|
</summary>
|
|
|
|
<remarks>This is a base neural layer class, which represents
|
|
collection of neurons.</remarks>
|
|
|
|
</member>
|
|
<member name="F:AForge.Neuro.Layer.inputsCount">
|
|
<summary>
|
|
Layer's inputs count.
|
|
</summary>
|
|
</member>
|
|
<member name="F:AForge.Neuro.Layer.neuronsCount">
|
|
<summary>
|
|
Layer's neurons count.
|
|
</summary>
|
|
</member>
|
|
<member name="F:AForge.Neuro.Layer.neurons">
|
|
<summary>
|
|
Layer's neurons.
|
|
</summary>
|
|
</member>
|
|
<member name="F:AForge.Neuro.Layer.output">
|
|
<summary>
|
|
Layer's output vector.
|
|
</summary>
|
|
</member>
|
|
<member name="P:AForge.Neuro.Layer.InputsCount">
|
|
<summary>
|
|
Layer's inputs count.
|
|
</summary>
|
|
</member>
|
|
<member name="P:AForge.Neuro.Layer.Neurons">
|
|
<summary>
|
|
Layer's neurons.
|
|
</summary>
|
|
|
|
</member>
|
|
<member name="P:AForge.Neuro.Layer.Output">
|
|
<summary>
|
|
Layer's output vector.
|
|
</summary>
|
|
|
|
<remarks><para>The calculation way of layer's output vector is determined by neurons,
|
|
which comprise the layer.</para>
|
|
|
|
<para><note>The property is not initialized (equals to <see langword="null"/>) until
|
|
<see cref="M:AForge.Neuro.Layer.Compute(System.Double[])"/> method is called.</note></para>
|
|
</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Layer.#ctor(System.Int32,System.Int32)">
|
|
<summary>
|
|
Initializes a new instance of the <see cref="T:AForge.Neuro.Layer"/> class.
|
|
</summary>
|
|
|
|
<param name="neuronsCount">Layer's neurons count.</param>
|
|
<param name="inputsCount">Layer's inputs count.</param>
|
|
|
|
<remarks>Protected constructor, which initializes <see cref="F:AForge.Neuro.Layer.inputsCount"/>,
|
|
<see cref="F:AForge.Neuro.Layer.neuronsCount"/> and <see cref="F:AForge.Neuro.Layer.neurons"/> members.</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Layer.Compute(System.Double[])">
|
|
<summary>
|
|
Compute output vector of the layer.
|
|
</summary>
|
|
|
|
<param name="input">Input vector.</param>
|
|
|
|
<returns>Returns layer's output vector.</returns>
|
|
|
|
<remarks><para>The actual layer's output vector is determined by neurons,
|
|
which comprise the layer - it consists of the output values of the layer's neurons.
|
|
The output vector is also stored in <see cref="P:AForge.Neuro.Layer.Output"/> property.</para>
|
|
|
|
<para><note>The method may be called safely from multiple threads to compute layer's
|
|
output value for the specified input values. However, the value of
|
|
<see cref="P:AForge.Neuro.Layer.Output"/> property in multi-threaded environment is not predictable,
|
|
since it may hold layer's output computed from any of the caller threads. Multi-threaded
|
|
access to the method is useful in those cases when it is required to improve performance
|
|
by utilizing several threads and the computation is based on the immediate return value
|
|
of the method, but not on layer's output property.</note></para>
|
|
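<para>Sample usage (a minimal sketch, computing output of an <see cref="T:AForge.Neuro.ActivationLayer"/>):</para>
<code>
// create a layer of 5 sigmoid neurons, each having 3 inputs
ActivationLayer layer = new ActivationLayer( 5, 3, new SigmoidFunction( ) );
// compute the layer's output for the given input vector
double[] output = layer.Compute( new double[] { 0.1, 0.5, 0.9 } );
</code>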
</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Layer.Randomize">
|
|
<summary>
|
|
Randomize neurons of the layer.
|
|
</summary>
|
|
|
|
<remarks>Randomizes layer's neurons by calling <see cref="M:AForge.Neuro.Neuron.Randomize"/> method
|
|
of each neuron.</remarks>
|
|
|
|
</member>
|
|
<member name="T:AForge.Neuro.Learning.BackPropagationLearning">
|
|
<summary>
|
|
Back propagation learning algorithm.
|
|
</summary>
|
|
|
|
<remarks><para>The class implements back propagation learning algorithm,
|
|
which is widely used for training multi-layer neural networks with
|
|
continuous activation functions.</para>
|
|
|
|
<para>Sample usage (training network to calculate XOR function):</para>
|
|
<code>
|
|
// initialize input and output values
|
|
double[][] input = new double[4][] {
|
|
new double[] {0, 0}, new double[] {0, 1},
|
|
new double[] {1, 0}, new double[] {1, 1}
|
|
};
|
|
double[][] output = new double[4][] {
|
|
new double[] {0}, new double[] {1},
|
|
new double[] {1}, new double[] {0}
|
|
};
|
|
// create neural network
|
|
ActivationNetwork network = new ActivationNetwork(
|
|
new SigmoidFunction( 2 ),
|
|
2, // two inputs in the network
|
|
2, // two neurons in the first layer
|
|
1 ); // one neuron in the second layer
|
|
// create teacher
|
|
BackPropagationLearning teacher = new BackPropagationLearning( network );
|
|
// loop
|
|
while ( !needToStop )
|
|
{
|
|
// run epoch of learning procedure
|
|
double error = teacher.RunEpoch( input, output );
|
|
// check error value to see if we need to stop
|
|
// ...
|
|
}
|
|
</code>
|
|
</remarks>
|
|
|
|
<seealso cref="T:AForge.Neuro.Learning.EvolutionaryLearning"/>
|
|
|
|
</member>
|
|
<member name="P:AForge.Neuro.Learning.BackPropagationLearning.LearningRate">
|
|
<summary>
|
|
Learning rate, [0, 1].
|
|
</summary>
|
|
|
|
<remarks><para>The value determines speed of learning.</para>
|
|
|
|
<para>Default value equals to <b>0.1</b>.</para>
|
|
</remarks>
|
|
|
|
</member>
|
|
<member name="P:AForge.Neuro.Learning.BackPropagationLearning.Momentum">
|
|
<summary>
|
|
Momentum, [0, 1].
|
|
</summary>
|
|
|
|
<remarks><para>The value determines the portion of previous weight's update
|
|
to use on current iteration. Weight's update values are calculated on
|
|
each iteration depending on neuron's error. The momentum specifies the amount
|
|
of update to use from previous iteration and the amount of update
|
|
to use from current iteration. If the value is equal to 0.1, for example,
|
|
then 0.1 portion of previous update and 0.9 portion of current update are used
|
|
to update weight's value.</para>
|
|
|
|
<para>Default value equals to <b>0.0</b>.</para>
|
|
</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.BackPropagationLearning.#ctor(AForge.Neuro.ActivationNetwork)">
|
|
<summary>
|
|
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.BackPropagationLearning"/> class.
|
|
</summary>
|
|
|
|
<param name="network">Network to teach.</param>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.BackPropagationLearning.Run(System.Double[],System.Double[])">
|
|
<summary>
|
|
Runs learning iteration.
|
|
</summary>
|
|
|
|
<param name="input">Input vector.</param>
|
|
<param name="output">Desired output vector.</param>
|
|
|
|
<returns>Returns squared error (difference between current network's output and
|
|
desired output) divided by 2.</returns>
|
|
|
|
<remarks><para>Runs one learning iteration and updates neuron's
|
|
weights.</para></remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.BackPropagationLearning.RunEpoch(System.Double[][],System.Double[][])">
|
|
<summary>
|
|
Runs learning epoch.
|
|
</summary>
|
|
|
|
<param name="input">Array of input vectors.</param>
|
|
<param name="output">Array of output vectors.</param>
|
|
|
|
<returns>Returns summary learning error for the epoch. See <see cref="M:AForge.Neuro.Learning.BackPropagationLearning.Run(System.Double[],System.Double[])"/>
|
|
method for details about learning error calculation.</returns>
|
|
|
|
<remarks><para>The method runs one learning epoch, by calling <see cref="M:AForge.Neuro.Learning.BackPropagationLearning.Run(System.Double[],System.Double[])"/> method
|
|
for each vector provided in the <paramref name="input"/> array.</para></remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.BackPropagationLearning.CalculateError(System.Double[])">
|
|
<summary>
|
|
Calculates error values for all neurons of the network.
|
|
</summary>
|
|
|
|
<param name="desiredOutput">Desired output vector.</param>
|
|
|
|
<returns>Returns summary squared error of the last layer divided by 2.</returns>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.BackPropagationLearning.CalculateUpdates(System.Double[])">
|
|
<summary>
|
|
Calculate weights updates.
|
|
</summary>
|
|
|
|
<param name="input">Network's input vector.</param>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.BackPropagationLearning.UpdateNetwork">
|
|
<summary>
|
|
Update network's weights.
|
|
</summary>
|
|
|
|
</member>
|
|
<member name="T:AForge.Neuro.Learning.DeltaRuleLearning">
|
|
<summary>
|
|
Delta rule learning algorithm.
|
|
</summary>
|
|
|
|
<remarks><para>This learning algorithm is used to train a one-layer neural
|
|
network of <see cref="T:AForge.Neuro.ActivationNeuron">Activation Neurons</see>
|
|
with continuous activation function, see <see cref="T:AForge.Neuro.SigmoidFunction"/>
|
|
for example.</para>
|
|
|
|
<para>See information about <a href="http://en.wikipedia.org/wiki/Delta_rule">delta rule</a>
|
|
learning algorithm.</para>
|
|
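<para>Sample usage (a minimal sketch, training a one-layer network to calculate OR function):</para>
<code>
// initialize input and output values
double[][] input = new double[4][] {
    new double[] {0, 0}, new double[] {0, 1},
    new double[] {1, 0}, new double[] {1, 1}
};
double[][] output = new double[4][] {
    new double[] {0}, new double[] {1},
    new double[] {1}, new double[] {1}
};
// create one-layer neural network with a single sigmoid neuron
ActivationNetwork network = new ActivationNetwork(
    new SigmoidFunction( ), 2, 1 );
// create teacher
DeltaRuleLearning teacher = new DeltaRuleLearning( network );
// loop
while ( !needToStop )
{
    // run epoch of learning procedure
    double error = teacher.RunEpoch( input, output );
    // check error value to see if we need to stop
    // ...
}
</code>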
</remarks>
|
|
|
|
</member>
|
|
<member name="P:AForge.Neuro.Learning.DeltaRuleLearning.LearningRate">
|
|
<summary>
|
|
Learning rate, [0, 1].
|
|
</summary>
|
|
|
|
<remarks><para>The value determines speed of learning.</para>
|
|
|
|
<para>Default value equals to <b>0.1</b>.</para>
|
|
</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.DeltaRuleLearning.#ctor(AForge.Neuro.ActivationNetwork)">
|
|
<summary>
|
|
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.DeltaRuleLearning"/> class.
|
|
</summary>
|
|
|
|
<param name="network">Network to teach.</param>
|
|
|
|
<exception cref="T:System.ArgumentException">Invalid nuaral network. It should have one layer only.</exception>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.DeltaRuleLearning.Run(System.Double[],System.Double[])">
|
|
<summary>
|
|
Runs learning iteration.
|
|
</summary>
|
|
|
|
<param name="input">Input vector.</param>
|
|
<param name="output">Desired output vector.</param>
|
|
|
|
<returns>Returns squared error (difference between current network's output and
|
|
desired output) divided by 2.</returns>
|
|
|
|
<remarks><para>Runs one learning iteration and updates neuron's
|
|
weights.</para></remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.DeltaRuleLearning.RunEpoch(System.Double[][],System.Double[][])">
|
|
<summary>
|
|
Runs learning epoch.
|
|
</summary>
|
|
|
|
<param name="input">Array of input vectors.</param>
|
|
<param name="output">Array of output vectors.</param>
|
|
|
|
<returns>Returns summary learning error for the epoch. See <see cref="M:AForge.Neuro.Learning.DeltaRuleLearning.Run(System.Double[],System.Double[])"/>
|
|
method for details about learning error calculation.</returns>
|
|
|
|
<remarks><para>The method runs one learning epoch, by calling <see cref="M:AForge.Neuro.Learning.DeltaRuleLearning.Run(System.Double[],System.Double[])"/> method
|
|
for each vector provided in the <paramref name="input"/> array.</para></remarks>
|
|
|
|
</member>
|
|
<member name="T:AForge.Neuro.Learning.ElasticNetworkLearning">
|
|
<summary>
|
|
Elastic network learning algorithm.
|
|
</summary>
|
|
|
|
<remarks><para>This class implements elastic network's learning algorithm and
|
|
allows to train <see cref="T:AForge.Neuro.DistanceNetwork">Distance Networks</see>.</para>
|
|
</remarks>
|
|
|
|
</member>
|
|
<member name="P:AForge.Neuro.Learning.ElasticNetworkLearning.LearningRate">
|
|
<summary>
|
|
Learning rate, [0, 1].
|
|
</summary>
|
|
|
|
<remarks><para>Determines speed of learning.</para>
|
|
|
|
<para>Default value equals to <b>0.1</b>.</para>
|
|
</remarks>
|
|
|
|
</member>
|
|
<member name="P:AForge.Neuro.Learning.ElasticNetworkLearning.LearningRadius">
|
|
<summary>
|
|
Learning radius, [0, 1].
|
|
</summary>
|
|
|
|
<remarks><para>Determines the amount of neurons to be updated around
|
|
winner neuron. Neurons, which are in the circle of specified radius,
|
|
are updated during the learning procedure. Neurons, which are closer
|
|
to the winner neuron, get more update.</para>
|
|
|
|
<para>Default value equals to <b>0.5</b>.</para>
|
|
</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.ElasticNetworkLearning.#ctor(AForge.Neuro.DistanceNetwork)">
|
|
<summary>
|
|
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.ElasticNetworkLearning"/> class.
|
|
</summary>
|
|
|
|
<param name="network">Neural network to train.</param>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.ElasticNetworkLearning.Run(System.Double[])">
|
|
<summary>
|
|
Runs learning iteration.
|
|
</summary>
|
|
|
|
<param name="input">Input vector.</param>
|
|
|
|
<returns>Returns learning error - summary absolute difference between neurons'
|
|
weights and appropriate inputs. The difference is measured according to the neurons
|
|
distance to the winner neuron.</returns>
|
|
|
|
<remarks><para>The method runs one learning iteration - it finds the winner neuron (the neuron
|
|
which has weights with values closest to the specified input vector) and updates its weight
|
|
(as well as weights of neighbor neurons) in the way to decrease difference with the specified
|
|
input vector.</para></remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.ElasticNetworkLearning.RunEpoch(System.Double[][])">
|
|
<summary>
|
|
Runs learning epoch.
|
|
</summary>
|
|
|
|
<param name="input">Array of input vectors.</param>
|
|
|
|
<returns>Returns summary learning error for the epoch. See <see cref="M:AForge.Neuro.Learning.ElasticNetworkLearning.Run(System.Double[])"/>
|
|
method for details about learning error calculation.</returns>
|
|
|
|
<remarks><para>The method runs one learning epoch, by calling <see cref="M:AForge.Neuro.Learning.ElasticNetworkLearning.Run(System.Double[])"/> method
|
|
for each vector provided in the <paramref name="input"/> array.</para></remarks>
|
|
|
|
</member>
|
|
<member name="T:AForge.Neuro.Learning.EvolutionaryFitness">
|
|
<summary>
|
|
Fitness function used for chromosomes representing collection of neural network's weights.
|
|
</summary>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.EvolutionaryFitness.#ctor(AForge.Neuro.ActivationNetwork,System.Double[][],System.Double[][])">
|
|
<summary>
|
|
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.EvolutionaryFitness"/> class.
|
|
</summary>
|
|
|
|
<param name="network">Neural network for which fitness will be calculated.</param>
|
|
<param name="input">Input data samples for neural network.</param>
|
|
<param name="output">Output data sampels for neural network (desired output).</param>
|
|
|
|
<exception cref="T:System.ArgumentException">Length of inputs and outputs arrays must be equal and greater than 0.</exception>
|
|
<exception cref="T:System.ArgumentException">Length of each input vector must be equal to neural network's inputs count.</exception>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.EvolutionaryFitness.Evaluate(AForge.Genetic.IChromosome)">
|
|
<summary>
|
|
Evaluates chromosome.
|
|
</summary>
|
|
|
|
<param name="chromosome">Chromosome to evaluate.</param>
|
|
|
|
<returns>Returns chromosome's fitness value.</returns>
|
|
|
|
<remarks>The method calculates fitness value of the specified
|
|
chromosome.</remarks>
|
|
|
|
</member>
|
|
<member name="T:AForge.Neuro.Learning.EvolutionaryLearning">
|
|
<summary>
|
|
Neural networks' evolutionary learning algorithm, which is based on Genetic Algorithms.
|
|
</summary>
|
|
|
|
<remarks><para>The class implements supervised neural network's learning algorithm,
|
|
which is based on Genetic Algorithms. For the given neural network, it creates a population
|
|
of <see cref="T:AForge.Genetic.DoubleArrayChromosome"/> chromosomes, which represent neural network's
|
|
weights. Then, during the learning process, the genetic population evolves and weights, which
|
|
are represented by the best chromosome, are set to the source neural network.</para>
|
|
|
|
<para>See <see cref="T:AForge.Genetic.Population"/> class for additional information about genetic population
|
|
and evolutionary based search.</para>
|
|
|
|
<para>Sample usage (training network to calculate XOR function):</para>
|
|
<code>
|
|
// initialize input and output values
|
|
double[][] input = new double[4][] {
|
|
new double[] {-1, -1}, new double[] {-1, 1},
|
|
new double[] { 1, -1}, new double[] { 1, 1}
|
|
};
|
|
double[][] output = new double[4][] {
|
|
new double[] {-1}, new double[] { 1},
|
|
new double[] { 1}, new double[] {-1}
|
|
};
|
|
// create neural network
|
|
ActivationNetwork network = new ActivationNetwork(
|
|
new BipolarSigmoidFunction( 2 ),
|
|
2, // two inputs in the network
|
|
2, // two neurons in the first layer
|
|
1 ); // one neuron in the second layer
|
|
// create teacher
|
|
EvolutionaryLearning teacher = new EvolutionaryLearning( network,
|
|
100 ); // number of chromosomes in genetic population
|
|
// loop
|
|
while ( !needToStop )
|
|
{
|
|
// run epoch of learning procedure
|
|
double error = teacher.RunEpoch( input, output );
|
|
// check error value to see if we need to stop
|
|
// ...
|
|
}
|
|
|
|
</code>
|
|
</remarks>
|
|
|
|
<seealso cref="T:AForge.Neuro.Learning.BackPropagationLearning"/>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.EvolutionaryLearning.#ctor(AForge.Neuro.ActivationNetwork,System.Int32,AForge.Math.Random.IRandomNumberGenerator,AForge.Math.Random.IRandomNumberGenerator,AForge.Math.Random.IRandomNumberGenerator,AForge.Genetic.ISelectionMethod,System.Double,System.Double,System.Double)">
|
|
<summary>
|
|
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.EvolutionaryLearning"/> class.
|
|
</summary>
|
|
|
|
<param name="activationNetwork">Activation network to be trained.</param>
|
|
<param name="populationSize">Size of genetic population.</param>
|
|
<param name="chromosomeGenerator">Random numbers generator used for initialization of genetic
|
|
population representing neural network's weights and thresholds (see <see cref="F:AForge.Genetic.DoubleArrayChromosome.chromosomeGenerator"/>).</param>
|
|
<param name="mutationMultiplierGenerator">Random numbers generator used to generate random
|
|
factors for multiplication of network's weights and thresholds during genetic mutation
|
|
(see <see cref="F:AForge.Genetic.DoubleArrayChromosome.mutationMultiplierGenerator"/>).</param>
|
|
<param name="mutationAdditionGenerator">Random numbers generator used to generate random
|
|
values added to neural network's weights and thresholds during genetic mutation
|
|
(see <see cref="F:AForge.Genetic.DoubleArrayChromosome.mutationAdditionGenerator"/>).</param>
|
|
<param name="selectionMethod">Method of selection best chromosomes in genetic population.</param>
|
|
<param name="crossOverRate">Crossover rate in genetic population (see
|
|
<see cref="P:AForge.Genetic.Population.CrossoverRate"/>).</param>
|
|
<param name="mutationRate">Mutation rate in genetic population (see
|
|
<see cref="P:AForge.Genetic.Population.MutationRate"/>).</param>
|
|
<param name="randomSelectionRate">Rate of injection of random chromosomes during selection
|
|
in genetic population (see <see cref="P:AForge.Genetic.Population.RandomSelectionPortion"/>).</param>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.EvolutionaryLearning.#ctor(AForge.Neuro.ActivationNetwork,System.Int32)">
|
|
<summary>
|
|
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.EvolutionaryLearning"/> class.
|
|
</summary>
|
|
|
|
<param name="activationNetwork">Activation network to be trained.</param>
|
|
<param name="populationSize">Size of genetic population.</param>
|
|
|
|
<remarks><para>This version of constructor is used to create genetic population
|
|
for searching optimal neural network's weights using the default set of parameters, which are:
|
|
<list type="bullet">
|
|
<item>Selection method - elite;</item>
|
|
<item>Crossover rate - 0.75;</item>
|
|
<item>Mutation rate - 0.25;</item>
|
|
<item>Rate of injection of random chromosomes during selection - 0.20;</item>
|
|
<item>Random numbers generator for initializing new chromosome -
|
|
<c>UniformGenerator( new Range( -1, 1 ) )</c>;</item>
|
|
<item>Random numbers generator used during mutation for genes' multiplication -
|
|
<c>ExponentialGenerator( 1 )</c>;</item>
|
|
<item>Random numbers generator used during mutation for adding random value to genes -
|
|
<c>UniformGenerator( new Range( -0.5f, 0.5f ) )</c>.</item>
|
|
</list></para>
|
|
|
|
<para>In order to have full control over the above default parameters, it is possible to
|
|
use the extended version of the constructor, which allows specifying all of the parameters, as shown in the sketch below.</para>
|
|
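<para>The sketch below spells out these defaults through the extended version of the constructor
(the elite selection method is assumed to correspond to the <b>EliteSelection</b> class;
<b>network</b> is an existing <see cref="T:AForge.Neuro.ActivationNetwork"/>):</para>
<code>
// equivalent to new EvolutionaryLearning( network, 100 ) with the defaults written out
EvolutionaryLearning teacher = new EvolutionaryLearning( network, 100,
    new UniformGenerator( new Range( -1, 1 ) ),        // chromosome generator
    new ExponentialGenerator( 1 ),                     // mutation multiplier generator
    new UniformGenerator( new Range( -0.5f, 0.5f ) ),  // mutation addition generator
    new EliteSelection( ),                             // selection method
    0.75,                                              // crossover rate
    0.25,                                              // mutation rate
    0.20 );                                            // random selection portion
</code>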
</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.EvolutionaryLearning.Run(System.Double[],System.Double[])">
|
|
<summary>
|
|
Runs learning iteration.
|
|
</summary>
|
|
|
|
<param name="input">Input vector.</param>
|
|
<param name="output">Desired output vector.</param>
|
|
|
|
<returns>Returns learning error.</returns>
|
|
|
|
<remarks><note>The method is not implemented, since evolutionary learning algorithm is global
|
|
and requires all inputs/outputs in order to run its one epoch. Use <see cref="M:AForge.Neuro.Learning.EvolutionaryLearning.RunEpoch(System.Double[][],System.Double[][])"/>
|
|
method instead.</note></remarks>
|
|
|
|
<exception cref="T:System.NotImplementedException">The method is not implemented by design.</exception>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.EvolutionaryLearning.RunEpoch(System.Double[][],System.Double[][])">
|
|
<summary>
|
|
Runs learning epoch.
|
|
</summary>
|
|
|
|
<param name="input">Array of input vectors.</param>
|
|
<param name="output">Array of output vectors.</param>
|
|
|
|
<returns>Returns summary squared learning error for the entire epoch.</returns>
|
|
|
|
<remarks><para><note>While running the neural network's learning process, it is required to
|
|
pass the same <paramref name="input"/> and <paramref name="output"/> values for each
|
|
epoch. On the very first run of the method it will initialize evolutionary fitness
|
|
function with the given input/output. So, changing the input/output in the middle of the learning
|
|
process will break it.</note></para></remarks>
|
|
|
|
</member>
|
|
<member name="T:AForge.Neuro.Learning.ISupervisedLearning">
|
|
<summary>
|
|
Supervised learning interface.
|
|
</summary>
|
|
|
|
<remarks><para>The interface describes methods, which should be implemented
|
|
by all supervised learning algorithms. Supervised learning is such
|
|
type of learning algorithms, where system's desired output is known on
|
|
the learning stage. So, given sample input values and desired outputs,
|
|
the system should adapt its internals to produce correct (or close to correct)
|
|
result after the learning step is complete.</para></remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.ISupervisedLearning.Run(System.Double[],System.Double[])">
|
|
<summary>
|
|
Runs learning iteration.
|
|
</summary>
|
|
|
|
<param name="input">Input vector.</param>
|
|
<param name="output">Desired output vector.</param>
|
|
|
|
<returns>Returns learning error.</returns>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.ISupervisedLearning.RunEpoch(System.Double[][],System.Double[][])">
|
|
<summary>
|
|
Runs learning epoch.
|
|
</summary>
|
|
|
|
<param name="input">Array of input vectors.</param>
|
|
<param name="output">Array of output vectors.</param>
|
|
|
|
<returns>Returns sum of learning errors.</returns>
|
|
|
|
</member>
|
|
<member name="T:AForge.Neuro.Learning.IUnsupervisedLearning">
|
|
<summary>
|
|
Unsupervised learning interface.
|
|
</summary>
|
|
|
|
<remarks><para>The interface describes methods, which should be implemented
|
|
by all unsupervised learning algorithms. Unsupervised learning is such
|
|
type of learning algorithms, where system's desired output is not known on
|
|
the learning stage. Given sample input values, it is expected, that
|
|
the system will organize itself in such a way as to find similarities between the provided
|
|
samples.</para></remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.IUnsupervisedLearning.Run(System.Double[])">
|
|
<summary>
|
|
Runs learning iteration.
|
|
</summary>
|
|
|
|
<param name="input">Input vector.</param>
|
|
|
|
<returns>Returns learning error.</returns>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.IUnsupervisedLearning.RunEpoch(System.Double[][])">
|
|
<summary>
|
|
Runs learning epoch.
|
|
</summary>
|
|
|
|
<param name="input">Array of input vectors.</param>
|
|
|
|
<returns>Returns sum of learning errors.</returns>
|
|
|
|
</member>
|
|
<member name="T:AForge.Neuro.Learning.PerceptronLearning">
|
|
<summary>
|
|
Perceptron learning algorithm.
|
|
</summary>
|
|
|
|
<remarks><para>This learning algorithm is used to train a one-layer neural
|
|
network of <see cref="T:AForge.Neuro.ActivationNeuron">Activation Neurons</see>
|
|
with the <see cref="T:AForge.Neuro.ThresholdFunction">Threshold</see>
|
|
activation function.</para>
|
|
|
|
<para>See information about <a href="http://en.wikipedia.org/wiki/Perceptron">Perceptron</a>
|
|
and its learning algorithm.</para>
|
|
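<para>Sample usage (a minimal sketch, training perceptron to calculate AND function):</para>
<code>
// initialize input and output values
double[][] input = new double[4][] {
    new double[] {0, 0}, new double[] {0, 1},
    new double[] {1, 0}, new double[] {1, 1}
};
double[][] output = new double[4][] {
    new double[] {0}, new double[] {0},
    new double[] {0}, new double[] {1}
};
// create one-layer network with a single threshold neuron
ActivationNetwork network = new ActivationNetwork(
    new ThresholdFunction( ), 2, 1 );
// create teacher
PerceptronLearning teacher = new PerceptronLearning( network );
// run epochs until the summary error becomes zero
while ( teacher.RunEpoch( input, output ) > 0 )
{
}
</code>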
</remarks>
|
|
|
|
</member>
|
|
<member name="P:AForge.Neuro.Learning.PerceptronLearning.LearningRate">
|
|
<summary>
|
|
Learning rate, [0, 1].
|
|
</summary>
|
|
|
|
<remarks><para>The value determines speed of learning.</para>
|
|
|
|
<para>Default value equals to <b>0.1</b>.</para>
|
|
</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.PerceptronLearning.#ctor(AForge.Neuro.ActivationNetwork)">
|
|
<summary>
|
|
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.PerceptronLearning"/> class.
|
|
</summary>
|
|
|
|
<param name="network">Network to teach.</param>
|
|
|
|
<exception cref="T:System.ArgumentException">Invalid nuaral network. It should have one layer only.</exception>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.PerceptronLearning.Run(System.Double[],System.Double[])">
|
|
<summary>
|
|
Runs learning iteration.
|
|
</summary>
|
|
|
|
<param name="input">Input vector.</param>
|
|
<param name="output">Desired output vector.</param>
|
|
|
|
<returns>Returns absolute error - difference between current network's output and
|
|
desired output.</returns>
|
|
|
|
<remarks><para>Runs one learning iteration and updates neuron's
|
|
weights in the case if neuron's output is not equal to the
|
|
desired output.</para></remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.PerceptronLearning.RunEpoch(System.Double[][],System.Double[][])">
|
|
<summary>
|
|
Runs learning epoch.
|
|
</summary>
|
|
|
|
<param name="input">Array of input vectors.</param>
|
|
<param name="output">Array of output vectors.</param>
|
|
|
|
<returns>Returns summary learning error for the epoch. See <see cref="M:AForge.Neuro.Learning.PerceptronLearning.Run(System.Double[],System.Double[])"/>
|
|
method for details about learning error calculation.</returns>
|
|
|
|
<remarks><para>The method runs one learning epoch, by calling <see cref="M:AForge.Neuro.Learning.PerceptronLearning.Run(System.Double[],System.Double[])"/> method
|
|
for each vector provided in the <paramref name="input"/> array.</para></remarks>
|
|
|
|
</member>
|
|
<member name="T:AForge.Neuro.Learning.ResilientBackpropagationLearning">
|
|
<summary>
|
|
Resilient Backpropagation learning algorithm.
|
|
</summary>
|
|
|
|
<remarks><para>This class implements the resilient backpropagation (RProp)
|
|
learning algorithm. The RProp learning algorithm is one of the fastest learning
|
|
algorithms for feed-forward neural networks which use only first-order
|
|
information.</para>
|
|
|
|
<para>Sample usage (training network to calculate XOR function):</para>
|
|
<code>
|
|
// initialize input and output values
|
|
double[][] input = new double[4][] {
|
|
new double[] {0, 0}, new double[] {0, 1},
|
|
new double[] {1, 0}, new double[] {1, 1}
|
|
};
|
|
double[][] output = new double[4][] {
|
|
new double[] {0}, new double[] {1},
|
|
new double[] {1}, new double[] {0}
|
|
};
|
|
// create neural network
|
|
ActivationNetwork network = new ActivationNetwork(
|
|
new SigmoidFunction( 2 ),
|
|
2, // two inputs in the network
|
|
2, // two neurons in the first layer
|
|
1 ); // one neuron in the second layer
|
|
// create teacher
|
|
ResilientBackpropagationLearning teacher = new ResilientBackpropagationLearning( network );
|
|
// loop
|
|
while ( !needToStop )
|
|
{
|
|
// run epoch of learning procedure
|
|
double error = teacher.RunEpoch( input, output );
|
|
// check error value to see if we need to stop
|
|
// ...
|
|
}
|
|
</code>
|
|
</remarks>
|
|
|
|
</member>
|
|
<member name="P:AForge.Neuro.Learning.ResilientBackpropagationLearning.LearningRate">
|
|
<summary>
|
|
Learning rate.
|
|
</summary>
|
|
|
|
<remarks><para>The value determines speed of learning.</para>
|
|
|
|
<para>Default value equals to <b>0.0125</b>.</para>
|
|
</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.#ctor(AForge.Neuro.ActivationNetwork)">
|
|
<summary>
|
|
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.ResilientBackpropagationLearning"/> class.
|
|
</summary>
|
|
|
|
<param name="network">Network to teach.</param>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.Run(System.Double[],System.Double[])">
|
|
<summary>
|
|
Runs learning iteration.
|
|
</summary>
|
|
|
|
<param name="input">Input vector.</param>
|
|
<param name="output">Desired output vector.</param>
|
|
|
|
<returns>Returns squared error (difference between current network's output and
|
|
desired output) divided by 2.</returns>
|
|
|
|
<remarks><para>Runs one learning iteration and updates neuron's
|
|
weights.</para></remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.RunEpoch(System.Double[][],System.Double[][])">
|
|
<summary>
|
|
Runs learning epoch.
|
|
</summary>
|
|
|
|
<param name="input">Array of input vectors.</param>
|
|
<param name="output">Array of output vectors.</param>
|
|
|
|
<returns>Returns summary learning error for the epoch. See <see cref="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.Run(System.Double[],System.Double[])"/>
|
|
method for details about learning error calculation.</returns>
|
|
|
|
<remarks><para>The method runs one learning epoch, by calling <see cref="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.Run(System.Double[],System.Double[])"/> method
|
|
for each vector provided in the <paramref name="input"/> array.</para></remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.ResetGradient">
|
|
<summary>
|
|
Resets current weight and threshold derivatives.
|
|
</summary>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.ResetUpdates(System.Double)">
|
|
<summary>
|
|
Resets the current update steps using the given learning rate.
|
|
</summary>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.UpdateNetwork">
|
|
<summary>
|
|
Update network's weights.
|
|
</summary>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.CalculateError(System.Double[])">
|
|
<summary>
|
|
Calculates error values for all neurons of the network.
|
|
</summary>
|
|
|
|
<param name="desiredOutput">Desired output vector.</param>
|
|
|
|
<returns>Returns summary squared error of the last layer divided by 2.</returns>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.CalculateGradient(System.Double[])">
|
|
<summary>
|
|
Calculate weights updates.
|
|
</summary>
|
|
|
|
<param name="input">Network's input vector.</param>
|
|
|
|
</member>
|
|
<member name="T:AForge.Neuro.Learning.SOMLearning">
|
|
<summary>
|
|
Kohonen Self Organizing Map (SOM) learning algorithm.
|
|
</summary>
|
|
|
|
<remarks><para>This class implements Kohonen's SOM learning algorithm and
|
|
is widely used in clustering tasks. The class allows to train
|
|
<see cref="T:AForge.Neuro.DistanceNetwork">Distance Networks</see>.</para>
|
|
|
|
<para>Sample usage (clustering RGB colors):</para>
|
|
<code>
|
|
// set range for randomization neurons' weights
|
|
Neuron.RandRange = new Range( 0, 255 );
|
|
// create network
|
|
DistanceNetwork network = new DistanceNetwork(
|
|
3, // three inputs in the network
|
|
100 * 100 ); // 10000 neurons
|
|
// create learning algorithm
|
|
SOMLearning trainer = new SOMLearning( network );
|
|
// network's input
|
|
double[] input = new double[3];
// random number generator to produce input values
Random rand = new Random( );
|
|
// loop
|
|
while ( !needToStop )
|
|
{
|
|
input[0] = rand.Next( 256 );
|
|
input[1] = rand.Next( 256 );
|
|
input[2] = rand.Next( 256 );
|
|
|
|
trainer.Run( input );
|
|
|
|
// ...
|
|
// update learning rate and radius continuously,
|
|
// so the network may come to a steady state
|
|
}
|
|
</code>
|
|
</remarks>
|
|
|
|
</member>
|
|
<member name="P:AForge.Neuro.Learning.SOMLearning.LearningRate">
|
|
<summary>
|
|
Learning rate, [0, 1].
|
|
</summary>
|
|
|
|
<remarks><para>Determines speed of learning.</para>
|
|
|
|
<para>Default value equals to <b>0.1</b>.</para>
|
|
</remarks>
|
|
|
|
</member>
|
|
<member name="P:AForge.Neuro.Learning.SOMLearning.LearningRadius">
|
|
<summary>
|
|
Learning radius.
|
|
</summary>
|
|
|
|
<remarks><para>Determines the amount of neurons to be updated around
|
|
winner neuron. Neurons, which are in the circle of specified radius,
|
|
are updated during the learning procedure. Neurons, which are closer
|
|
to the winner neuron, get more update.</para>
|
|
|
|
<para><note>In the case when the learning radius is set to 0, only the winner
|
|
neuron's weights are updated.</note></para>
|
|
|
|
<para>Default value equals to <b>7</b>.</para>
|
|
</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.SOMLearning.#ctor(AForge.Neuro.DistanceNetwork)">
|
|
<summary>
|
|
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.SOMLearning"/> class.
|
|
</summary>
|
|
|
|
<param name="network">Neural network to train.</param>
|
|
|
|
<remarks><para>This constructor supposes that a square network will be passed for training -
|
|
it should be possible to get square root of network's neurons amount.</para></remarks>
|
|
|
|
<exception cref="T:System.ArgumentException">Invalid network size - square network is expected.</exception>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.SOMLearning.#ctor(AForge.Neuro.DistanceNetwork,System.Int32,System.Int32)">
|
|
<summary>
|
|
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.SOMLearning"/> class.
|
|
</summary>
|
|
|
|
<param name="network">Neural network to train.</param>
|
|
<param name="width">Neural network's width.</param>
|
|
<param name="height">Neural network's height.</param>
|
|
|
|
<remarks>The constructor allows to pass network of arbitrary rectangular shape.
|
|
The amount of neurons in the network should be equal to <b>width</b> * <b>height</b>.
|
|
</remarks>
|
|
|
|
<exception cref="T:System.ArgumentException">Invalid network size - network size does not correspond
|
|
to specified width and height.</exception>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.SOMLearning.Run(System.Double[])">
|
|
<summary>
|
|
Runs learning iteration.
|
|
</summary>
|
|
|
|
<param name="input">Input vector.</param>
|
|
|
|
<returns>Returns learning error - summary absolute difference between neurons' weights
|
|
and appropriate inputs. The difference is measured according to the neurons
|
|
distance to the winner neuron.</returns>
|
|
|
|
<remarks><para>The method runs one learning iteration - it finds the winner neuron (the neuron
|
|
which has weights with values closest to the specified input vector) and updates its weight
|
|
(as well as weights of neighbor neurons) in the way to decrease difference with the specified
|
|
input vector.</para></remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Learning.SOMLearning.RunEpoch(System.Double[][])">
|
|
<summary>
|
|
Runs learning epoch.
|
|
</summary>
|
|
|
|
<param name="input">Array of input vectors.</param>
|
|
|
|
<returns>Returns summary learning error for the epoch. See <see cref="M:AForge.Neuro.Learning.SOMLearning.Run(System.Double[])"/>
|
|
method for details about learning error calculation.</returns>
|
|
|
|
<remarks><para>The method runs one learning epoch, by calling <see cref="M:AForge.Neuro.Learning.SOMLearning.Run(System.Double[])"/> method
|
|
for each vector provided in the <paramref name="input"/> array.</para></remarks>
|
|
|
|
</member>
|
|
<member name="T:AForge.Neuro.ActivationNetwork">
|
|
<summary>
|
|
Activation network.
|
|
</summary>
|
|
|
|
<remarks><para>Activation network is a base for multi-layer neural network
|
|
with activation functions. It consists of <see cref="T:AForge.Neuro.ActivationLayer">activation
|
|
layers</see>.</para>
|
|
|
|
<para>Sample usage:</para>
|
|
<code>
|
|
// create activation network
|
|
ActivationNetwork network = new ActivationNetwork(
|
|
new SigmoidFunction( ), // sigmoid activation function
|
|
3, // 3 inputs
|
|
4, 1 ); // 2 layers:
|
|
// 4 neurons in the first layer
|
|
// 1 neuron in the second layer
|
|
</code>
|
|
</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.ActivationNetwork.#ctor(AForge.Neuro.IActivationFunction,System.Int32,System.Int32[])">
|
|
<summary>
|
|
Initializes a new instance of the <see cref="T:AForge.Neuro.ActivationNetwork"/> class.
|
|
</summary>
|
|
|
|
<param name="function">Activation function of neurons of the network.</param>
|
|
<param name="inputsCount">Network's inputs count.</param>
|
|
<param name="neuronsCount">Array, which specifies the amount of neurons in
|
|
each layer of the neural network.</param>
|
|
|
|
<remarks>The new network is randomized (see <see cref="M:AForge.Neuro.ActivationNeuron.Randomize"/>
|
|
method) after it is created.</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.ActivationNetwork.SetActivationFunction(AForge.Neuro.IActivationFunction)">
|
|
<summary>
|
|
Set new activation function for all neurons of the network.
|
|
</summary>
|
|
|
|
<param name="function">Activation function to set.</param>
|
|
|
|
<remarks><para>The method sets new activation function for all neurons by calling
|
|
<see cref="M:AForge.Neuro.ActivationLayer.SetActivationFunction(AForge.Neuro.IActivationFunction)"/> method for each layer of the network.</para></remarks>
|
|
|
|
</member>
|
|
<member name="T:AForge.Neuro.DistanceNetwork">
|
|
<summary>
|
|
Distance network.
|
|
</summary>
|
|
|
|
<remarks>Distance network is a neural network of only one <see cref="T:AForge.Neuro.DistanceLayer">distance
|
|
layer</see>. The network is a base for such neural networks as SOM, Elastic net, etc.
|
|
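<para>Sample usage (a minimal sketch of computing the network and getting the winner neuron):</para>
<code>
// create network with 3 inputs and 4 neurons in its single distance layer
DistanceNetwork network = new DistanceNetwork( 3, 4 );
// compute the network for an input vector ...
network.Compute( new double[] { 0.2, 0.4, 0.6 } );
// ... and get the neuron whose weights are closest to the input
int winner = network.GetWinner( );
</code>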
</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.DistanceNetwork.#ctor(System.Int32,System.Int32)">
|
|
<summary>
|
|
Initializes a new instance of the <see cref="T:AForge.Neuro.DistanceNetwork"/> class.
|
|
</summary>
|
|
|
|
<param name="inputsCount">Network's inputs count.</param>
|
|
<param name="neuronsCount">Network's neurons count.</param>
|
|
|
|
<remarks>The new network is randomized (see <see cref="M:AForge.Neuro.Neuron.Randomize"/>
|
|
method) after it is created.</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.DistanceNetwork.GetWinner">
|
|
<summary>
|
|
Get winner neuron.
|
|
</summary>
|
|
|
|
<returns>Index of the winner neuron.</returns>
|
|
|
|
<remarks>The method returns the index of the neuron whose weights have
|
|
the minimum distance from network's input.</remarks>
|
|
|
|
</member>
|
|
<member name="T:AForge.Neuro.Network">
|
|
<summary>
|
|
Base neural network class.
|
|
</summary>
|
|
|
|
<remarks>This is a base neural network class, which represents
|
|
collection of neuron's layers.</remarks>
|
|
|
|
</member>
|
|
<member name="F:AForge.Neuro.Network.inputsCount">
|
|
<summary>
|
|
Network's inputs count.
|
|
</summary>
|
|
</member>
|
|
<member name="F:AForge.Neuro.Network.layersCount">
|
|
<summary>
|
|
Network's layers count.
|
|
</summary>
|
|
</member>
|
|
<member name="F:AForge.Neuro.Network.layers">
|
|
<summary>
|
|
Network's layers.
|
|
</summary>
|
|
</member>
|
|
<member name="F:AForge.Neuro.Network.output">
|
|
<summary>
|
|
Network's output vector.
|
|
</summary>
|
|
</member>
|
|
<member name="P:AForge.Neuro.Network.InputsCount">
|
|
<summary>
|
|
Network's inputs count.
|
|
</summary>
|
|
</member>
|
|
<member name="P:AForge.Neuro.Network.Layers">
|
|
<summary>
|
|
Network's layers.
|
|
</summary>
|
|
</member>
|
|
<member name="P:AForge.Neuro.Network.Output">
|
|
<summary>
|
|
Network's output vector.
|
|
</summary>
|
|
|
|
<remarks><para>The calculation way of network's output vector is determined by
|
|
layers, which comprise the network.</para>
|
|
|
|
<para><note>The property is not initialized (equals to <see langword="null"/>) until
|
|
<see cref="M:AForge.Neuro.Network.Compute(System.Double[])"/> method is called.</note></para>
|
|
</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Network.#ctor(System.Int32,System.Int32)">
|
|
<summary>
|
|
Initializes a new instance of the <see cref="T:AForge.Neuro.Network"/> class.
|
|
</summary>
|
|
|
|
<param name="inputsCount">Network's inputs count.</param>
|
|
<param name="layersCount">Network's layers count.</param>
|
|
|
|
<remarks>Protected constructor, which initializes <see cref="F:AForge.Neuro.Network.inputsCount"/>,
|
|
<see cref="F:AForge.Neuro.Network.layersCount"/> and <see cref="F:AForge.Neuro.Network.layers"/> members.</remarks>
|
|
|
|
</member>
|
|
<member name="M:AForge.Neuro.Network.Compute(System.Double[])">
|
|
<summary>
|
|
Compute output vector of the network.
|
|
</summary>
|
|
|
|
<param name="input">Input vector.</param>
|
|
|
|
<returns>Returns network's output vector.</returns>
|
|
|
|
<remarks><para>The actual network's output vector is determined by layers,
|
|
which comprise the network - it represents the output vector of the last layer
|
|
of the network. The output vector is also stored in <see cref="P:AForge.Neuro.Network.Output"/> property.</para>
|
|
|
|
<para><note>The method may be called safely from multiple threads to compute network's
|
|
output value for the specified input values. However, the value of
|
|
<see cref="P:AForge.Neuro.Network.Output"/> property in multi-threaded environment is not predictable,
|
|
since it may hold network's output computed from any of the caller threads. Multi-threaded
|
|
access to the method is useful in those cases when it is required to improve performance
|
|
by utilizing several threads and the computation is based on the immediate return value
|
|
of the method, but not on network's output property.</note></para>
|
|
</remarks>
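            <example><para>A short sketch of computing a network's output. <see cref="T:AForge.Neuro.ActivationNetwork"/>
            is used here as one of the available <see cref="T:AForge.Neuro.Network"/> descendants; the layer sizes and
            input values are arbitrary assumptions made for illustration:</para>
            <code>
            // two-layer network with 2 inputs, 2 hidden neurons and 1 output neuron
            Network network = new ActivationNetwork( new SigmoidFunction( ), 2, 2, 1 );
            // compute output vector for the given input vector
            double[] output = network.Compute( new double[] { 0.2, 0.7 } );
            </code>
            </example>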

        </member>
        <member name="M:AForge.Neuro.Network.Randomize">
            <summary>
            Randomize layers of the network.
            </summary>

            <remarks>Randomizes network's layers by calling <see cref="M:AForge.Neuro.Layer.Randomize"/> method
            of each layer.</remarks>

        </member>
        <member name="M:AForge.Neuro.Network.Save(System.String)">
            <summary>
            Save network to specified file.
            </summary>

            <param name="fileName">File name to save network into.</param>

            <remarks><para>The neural network is saved using .NET serialization (binary formatter is used).</para></remarks>
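            <example><para>A minimal save/load round trip (a sketch; <c>network</c> is assumed to be an already created
            and trained <see cref="T:AForge.Neuro.Network"/> instance, and the file name is an arbitrary example):</para>
            <code>
            // save network to disk
            network.Save( "network.bin" );
            // ... later, restore it
            Network loaded = Network.Load( "network.bin" );
            </code>
            </example>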

        </member>
        <member name="M:AForge.Neuro.Network.Save(System.IO.Stream)">
            <summary>
            Save network to specified stream.
            </summary>

            <param name="stream">Stream to save network into.</param>

            <remarks><para>The neural network is saved using .NET serialization (binary formatter is used).</para></remarks>

        </member>
        <member name="M:AForge.Neuro.Network.Load(System.String)">
            <summary>
            Load network from specified file.
            </summary>

            <param name="fileName">File name to load network from.</param>

            <returns>Returns instance of <see cref="T:AForge.Neuro.Network"/> class with all properties initialized from file.</returns>

            <remarks><para>Neural network is loaded from file using .NET serialization (binary formatter is used).</para></remarks>

        </member>
        <member name="M:AForge.Neuro.Network.Load(System.IO.Stream)">
            <summary>
            Load network from specified stream.
            </summary>

            <param name="stream">Stream to load network from.</param>

            <returns>Returns instance of <see cref="T:AForge.Neuro.Network"/> class with all properties initialized from stream.</returns>

            <remarks><para>Neural network is loaded from stream using .NET serialization (binary formatter is used).</para></remarks>

        </member>
        <member name="T:AForge.Neuro.ActivationNeuron">
            <summary>
            Activation neuron.
            </summary>

            <remarks><para>Activation neuron computes weighted sum of its inputs, adds
            threshold value and then applies <see cref="P:AForge.Neuro.ActivationNeuron.ActivationFunction">activation function</see>.
            The neuron is usually used in multi-layer neural networks.</para></remarks>

            <seealso cref="T:AForge.Neuro.IActivationFunction"/>
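            <example><para>A conceptual sketch of what the neuron computes (the inputs count, input values and choice of
            activation function below are arbitrary, not prescribed by the class):</para>
            <code>
            // neuron with 3 inputs and bipolar sigmoid activation function
            ActivationNeuron neuron = new ActivationNeuron( 3, new BipolarSigmoidFunction( ) );
            // output = f( w0*x0 + w1*x1 + w2*x2 + threshold )
            double output = neuron.Compute( new double[] { 0.1, 0.2, 0.3 } );
            </code>
            </example>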

        </member>
        <member name="F:AForge.Neuro.ActivationNeuron.threshold">
            <summary>
            Threshold value.
            </summary>

            <remarks>The value is added to the weighted sum of inputs before it is passed to the
            activation function.</remarks>

        </member>
        <member name="F:AForge.Neuro.ActivationNeuron.function">
            <summary>
            Activation function.
            </summary>

            <remarks>The function is applied to the weighted sum of inputs plus
            the threshold value.</remarks>

        </member>
        <member name="P:AForge.Neuro.ActivationNeuron.Threshold">
            <summary>
            Threshold value.
            </summary>

            <remarks>The value is added to the weighted sum of inputs before it is passed to the
            activation function.</remarks>

        </member>
        <member name="P:AForge.Neuro.ActivationNeuron.ActivationFunction">
            <summary>
            Neuron's activation function.
            </summary>

        </member>
        <member name="M:AForge.Neuro.ActivationNeuron.#ctor(System.Int32,AForge.Neuro.IActivationFunction)">
            <summary>
            Initializes a new instance of the <see cref="T:AForge.Neuro.ActivationNeuron"/> class.
            </summary>

            <param name="inputs">Neuron's inputs count.</param>
            <param name="function">Neuron's activation function.</param>

        </member>
        <member name="M:AForge.Neuro.ActivationNeuron.Randomize">
            <summary>
            Randomize neuron.
            </summary>

            <remarks>Calls base class <see cref="M:AForge.Neuro.Neuron.Randomize">Randomize</see> method
            to randomize neuron's weights and then randomizes threshold's value.</remarks>

        </member>
        <member name="M:AForge.Neuro.ActivationNeuron.Compute(System.Double[])">
            <summary>
            Computes output value of neuron.
            </summary>

            <param name="input">Input vector.</param>

            <returns>Returns neuron's output value.</returns>

            <remarks><para>The output value of activation neuron is equal to the value
            of the neuron's activation function, whose parameter is the weighted sum
            of its inputs plus threshold value. The output value is also stored
            in <see cref="P:AForge.Neuro.Neuron.Output">Output</see> property.</para>

            <para><note>The method may be called safely from multiple threads to compute neuron's
            output value for the specified input values. However, the value of
            <see cref="P:AForge.Neuro.Neuron.Output"/> property in multi-threaded environment is not predictable,
            since it may hold neuron's output computed from any of the caller threads. Multi-threaded
            access to the method is useful in those cases when it is required to improve performance
            by utilizing several threads and the computation is based on the immediate return value
            of the method, but not on neuron's output property.</note></para>
            </remarks>

            <exception cref="T:System.ArgumentException">Wrong length of the input vector, which is not
            equal to the <see cref="P:AForge.Neuro.Neuron.InputsCount">expected value</see>.</exception>
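            <example><para>A sketch of the thread-safe usage pattern described in the note above, relying only on the
            return value of the method. It assumes the Task Parallel Library is available, <c>neuron</c> is an already
            created neuron, and <c>PrepareInputs</c> is a hypothetical helper producing input vectors:</para>
            <code>
            double[][] inputs = PrepareInputs( );            // hypothetical helper, for illustration only
            double[] results = new double[inputs.Length];
            // compute outputs from several threads, using return values instead of the Output property
            System.Threading.Tasks.Parallel.For( 0, inputs.Length, i =>
            {
                results[i] = neuron.Compute( inputs[i] );
            } );
            </code>
            </example>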

        </member>
        <member name="T:AForge.Neuro.DistanceNeuron">
            <summary>
            Distance neuron.
            </summary>

            <remarks><para>Distance neuron computes its output as distance between
            its weights and inputs - sum of absolute differences between weights'
            values and corresponding inputs' values. The neuron is usually used in Kohonen
            Self Organizing Map.</para></remarks>
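            <example><para>A small sketch showing the value the neuron reports (the inputs count and input values are
            arbitrary; the output equals the sum of absolute differences between weights and inputs):</para>
            <code>
            DistanceNeuron neuron = new DistanceNeuron( 3 );
            double[] input = new double[] { 0.5, 0.5, 0.5 };
            // output = |w0-x0| + |w1-x1| + |w2-x2|
            double distance = neuron.Compute( input );
            </code>
            </example>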

        </member>
        <member name="M:AForge.Neuro.DistanceNeuron.#ctor(System.Int32)">
            <summary>
            Initializes a new instance of the <see cref="T:AForge.Neuro.DistanceNeuron"/> class.
            </summary>

            <param name="inputs">Neuron's inputs count.</param>

        </member>
        <member name="M:AForge.Neuro.DistanceNeuron.Compute(System.Double[])">
            <summary>
            Computes output value of neuron.
            </summary>

            <param name="input">Input vector.</param>

            <returns>Returns neuron's output value.</returns>

            <remarks><para>The output value of distance neuron is equal to the distance
            between its weights and inputs - sum of absolute differences.
            The output value is also stored in <see cref="P:AForge.Neuro.Neuron.Output">Output</see>
            property.</para>

            <para><note>The method may be called safely from multiple threads to compute neuron's
            output value for the specified input values. However, the value of
            <see cref="P:AForge.Neuro.Neuron.Output"/> property in multi-threaded environment is not predictable,
            since it may hold neuron's output computed from any of the caller threads. Multi-threaded
            access to the method is useful in those cases when it is required to improve performance
            by utilizing several threads and the computation is based on the immediate return value
            of the method, but not on neuron's output property.</note></para>
            </remarks>

            <exception cref="T:System.ArgumentException">Wrong length of the input vector, which is not
            equal to the <see cref="P:AForge.Neuro.Neuron.InputsCount">expected value</see>.</exception>

        </member>
        <member name="T:AForge.Neuro.Neuron">
            <summary>
            Base neuron class.
            </summary>

            <remarks>This is a base neuron class, which encapsulates common
            properties such as neuron's input, output and weights.</remarks>

        </member>
        <member name="F:AForge.Neuro.Neuron.inputsCount">
            <summary>
            Neuron's inputs count.
            </summary>
        </member>
        <member name="F:AForge.Neuro.Neuron.weights">
            <summary>
            Neuron's weights.
            </summary>
        </member>
        <member name="F:AForge.Neuro.Neuron.output">
            <summary>
            Neuron's output value.
            </summary>
        </member>
        <member name="F:AForge.Neuro.Neuron.rand">
            <summary>
            Random number generator.
            </summary>

            <remarks>The generator is used for neuron's weights randomization.</remarks>

        </member>
        <member name="F:AForge.Neuro.Neuron.randRange">
            <summary>
            Random generator range.
            </summary>

            <remarks>Sets the range of the random generator. Affects initial values of neuron's weights.
            Default value is [0, 1].</remarks>

        </member>
        <member name="P:AForge.Neuro.Neuron.RandGenerator">
            <summary>
            Random number generator.
            </summary>

            <remarks>The property allows initializing the random generator with a custom seed. The generator is
            used for neuron's weights randomization.</remarks>

        </member>
        <member name="P:AForge.Neuro.Neuron.RandRange">
            <summary>
            Random generator range.
            </summary>

            <remarks>Sets the range of the random generator. Affects initial values of neuron's weights.
            Default value is [0, 1].</remarks>

        </member>
        <member name="P:AForge.Neuro.Neuron.InputsCount">
            <summary>
            Neuron's inputs count.
            </summary>
        </member>
        <member name="P:AForge.Neuro.Neuron.Output">
            <summary>
            Neuron's output value.
            </summary>

            <remarks>The way the neuron's output value is calculated is determined by the inheriting class.</remarks>

        </member>
        <member name="P:AForge.Neuro.Neuron.Weights">
            <summary>
            Neuron's weights.
            </summary>
        </member>
        <member name="M:AForge.Neuro.Neuron.#ctor(System.Int32)">
            <summary>
            Initializes a new instance of the <see cref="T:AForge.Neuro.Neuron"/> class.
            </summary>

            <param name="inputs">Neuron's inputs count.</param>

            <remarks>The new neuron will be randomized (see <see cref="M:AForge.Neuro.Neuron.Randomize"/> method)
            after it is created.</remarks>

        </member>
        <member name="M:AForge.Neuro.Neuron.Randomize">
            <summary>
            Randomize neuron.
            </summary>

            <remarks>Initializes neuron's weights with random values within the range specified
            by <see cref="P:AForge.Neuro.Neuron.RandRange"/>.</remarks>
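            <example><para>A sketch of controlling the randomization range. It assumes <see cref="P:AForge.Neuro.Neuron.RandRange"/>
            is assignable as a shared property and is set before the neuron is created, so the new range takes effect;
            <c>DoubleRange</c> comes from the AForge namespace:</para>
            <code>
            // make initial weights fall into [-1, 1] instead of the default [0, 1]
            Neuron.RandRange = new DoubleRange( -1.0, 1.0 );
            ActivationNeuron neuron = new ActivationNeuron( 2, new SigmoidFunction( ) );
            neuron.Randomize( );
            </code>
            </example>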

        </member>
        <member name="M:AForge.Neuro.Neuron.Compute(System.Double[])">
            <summary>
            Computes output value of neuron.
            </summary>

            <param name="input">Input vector.</param>

            <returns>Returns neuron's output value.</returns>

            <remarks>The actual neuron's output value is determined by the inheriting class.
            The output value is also stored in <see cref="P:AForge.Neuro.Neuron.Output"/> property.</remarks>

        </member>
    </members>
</doc>