AForge.Neuro Unsupervised learning interface. The interface describes methods which should be implemented by all unsupervised learning algorithms. Unsupervised learning is the type of learning where the system's desired output is not known at the learning stage. Given sample input values, the system is expected to organize itself so that it finds similarities between the provided samples. Runs learning iteration. Input vector. Returns learning error. Runs learning epoch. Array of input vectors. Returns sum of learning errors.

Sigmoid activation function. The class represents the sigmoid activation function with the following expression:

    f(x)  = 1 / ( 1 + exp( -alpha * x ) )

    f'(x) = alpha * exp( -alpha * x ) / ( 1 + exp( -alpha * x ) )^2
          = alpha * f(x) * ( 1 - f(x) )

Output range of the function: [0, 1].

Activation function interface. All activation functions which are supposed to be used with neurons that calculate their output as a function of the weighted sum of their inputs should implement this interface. Calculates function value. Function input value. Function output value, f(x). The method calculates the function value at point x. Calculates function derivative. Function input value. Function derivative, f'(x). The method calculates the function derivative at point x. Calculates function derivative. Function output value - the value which was obtained with the help of the Function method. Function derivative, f'(x). The method calculates the same derivative value as the Derivative method, but it takes not the input value x itself, but the function value which was calculated previously with the help of the Function method. Some applications require both the function value and the derivative value, so they can reduce the amount of calculations by using this method to compute the derivative.

Initializes a new instance of the class. Initializes a new instance of the class. Sigmoid's alpha value. Calculates function value. Function input value. Function output value, f(x). The method calculates the function value at point x. Calculates function derivative. Function input value. Function derivative, f'(x). The method calculates the function derivative at point x. Calculates function derivative. Function output value - the value which was obtained with the help of the Function method. Function derivative, f'(x). The method calculates the same derivative value as the Derivative method, but it takes not the input value x itself, but the function value which was calculated previously with the help of the Function method. Some applications require both the function value and the derivative value, so they can reduce the amount of calculations by using this method to compute the derivative. Creates a new object that is a copy of the current instance. A new object that is a copy of this instance. Sigmoid's alpha value. The value determines the steepness of the function. Increasing the value of this property makes the sigmoid look more like a threshold function. Decreasing the value of this property makes the sigmoid very smooth (slowly growing from its minimum value to its maximum value). Default value is set to 2.
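As a brief illustration of the Derivative2 optimization described above, the following sketch evaluates the sigmoid and computes its derivative both from the input and from the already known function value (variable names are illustrative; the member names Function, Derivative and Derivative2 follow the common AForge.Neuro naming and should be treated as assumptions if your version differs):

    // sigmoid with steepness alpha = 2
    SigmoidFunction f = new SigmoidFunction( 2 );

    double x  = 0.5;
    double y  = f.Function( x );    // f(x)
    double d1 = f.Derivative( x );  // derivative computed from the input x
    double d2 = f.Derivative2( y ); // the same derivative computed from the already
                                    // known f(x), saving one exponent evaluation

    // both d1 and d2 equal alpha * f(x) * ( 1 - f(x) )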
Base neural layer class. This is a base neural layer class, which represents a collection of neurons. Layer's inputs count. Layer's neurons count. Layer's neurons. Layer's output vector. Initializes a new instance of the class. Layer's neurons count. Layer's inputs count. Protected constructor, which initializes the inputs count, neurons count and neurons members. Compute output vector of the layer. Input vector. Returns layer's output vector. The actual layer's output vector is determined by the neurons which comprise the layer - it consists of the output values of the layer's neurons. The output vector is also stored in the Output property. The method may be called safely from multiple threads to compute the layer's output value for the specified input values. However, the value of the Output property in a multi-threaded environment is not predictable, since it may hold the layer's output computed from any of the caller threads. Multi-threaded access to the method is useful in those cases when it is required to improve performance by utilizing several threads and the computation is based on the immediate return value of the method, not on the layer's Output property. Randomize neurons of the layer. Randomizes the layer's neurons by calling the Randomize method of each neuron. Layer's inputs count. Layer's neurons. Layer's output vector. The way the layer's output vector is calculated is determined by the neurons which comprise the layer. The property is not initialized (equals to null) until the Compute method is called.

Bipolar sigmoid activation function. The class represents the bipolar sigmoid activation function with the following expression:

    f(x)  = 2 / ( 1 + exp( -alpha * x ) ) - 1

    f'(x) = 2 * alpha * exp( -alpha * x ) / ( 1 + exp( -alpha * x ) )^2
          = alpha * ( 1 - f(x)^2 ) / 2

Output range of the function: [-1, 1].

Initializes a new instance of the class. Initializes a new instance of the class. Sigmoid's alpha value. Calculates function value. Function input value. Function output value, f(x). The method calculates the function value at point x. Calculates function derivative. Function input value. Function derivative, f'(x). The method calculates the function derivative at point x. Calculates function derivative. Function output value - the value which was obtained with the help of the Function method. Function derivative, f'(x). The method calculates the same derivative value as the Derivative method, but it takes not the input value x itself, but the function value which was calculated previously with the help of the Function method. Some applications require both the function value and the derivative value, so they can reduce the amount of calculations by using this method to compute the derivative. Creates a new object that is a copy of the current instance. A new object that is a copy of this instance. Sigmoid's alpha value. The value determines the steepness of the function. Increasing the value of this property makes the sigmoid look more like a threshold function. Decreasing the value of this property makes the sigmoid very smooth (slowly growing from its minimum value to its maximum value). Default value is set to 2.

Kohonen Self Organizing Map (SOM) learning algorithm. This class implements Kohonen's SOM learning algorithm, which is widely used in clustering tasks. The class can be used to train Distance Networks. Sample usage (clustering RGB colors):

    // set range for randomization of neurons' weights
    Neuron.RandRange = new Range( 0, 255 );
    // create network
    DistanceNetwork network = new DistanceNetwork(
            3,           // three inputs in the network
            100 * 100 ); // 10000 neurons
    // create learning algorithm
    SOMLearning trainer = new SOMLearning( network );
    // network's input
    double[] input = new double[3];
    // loop
    while ( !needToStop )
    {
        input[0] = rand.Next( 256 );
        input[1] = rand.Next( 256 );
        input[2] = rand.Next( 256 );

        trainer.Run( input );

        // ...
        // update learning rate and radius continuously,
        // so the network may reach a steady state
    }
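One possible way to implement the "update learning rate and radius continuously" step from the sample above is a simple linear decay; the schedule below is an illustrative assumption, not something prescribed by the class:

    int    iterations    = 5000;
    double initialRate   = 0.1;
    double initialRadius = 15;

    for ( int i = 0; i < iterations; i++ )
    {
        // shrink both parameters linearly towards zero,
        // so updates become smaller and more local over time
        trainer.LearningRate   = initialRate   * ( iterations - i ) / iterations;
        trainer.LearningRadius = initialRadius * ( iterations - i ) / iterations;

        input[0] = rand.Next( 256 );
        input[1] = rand.Next( 256 );
        input[2] = rand.Next( 256 );

        trainer.Run( input );
    }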
Initializes a new instance of the class. Neural network to train. This constructor assumes that a square network will be passed for training - it should be possible to take the square root of the network's neurons amount. Invalid network size - square network is expected. Initializes a new instance of the class. Neural network to train. Neural network's width. Neural network's height. The constructor allows passing a network of arbitrary rectangular shape. The amount of neurons in the network should be equal to width * height. Invalid network size - network size does not correspond to the specified width and height. Runs learning iteration. Input vector. Returns learning error - the total absolute difference between the neurons' weights and the corresponding inputs. The difference is measured according to the neurons' distance to the winner neuron. The method runs one learning iteration - it finds the winner neuron (the neuron whose weights have values closest to the specified input vector) and updates its weights (as well as the weights of neighbor neurons) in the way which decreases the difference with the specified input vector. Runs learning epoch. Array of input vectors. Returns the total learning error for the epoch. See the Run method for details about learning error calculation. The method runs one learning epoch, by calling the Run method for each vector provided in the array. Learning rate, [0, 1]. Determines speed of learning. Default value is 0.1. Learning radius. Determines the amount of neurons to be updated around the winner neuron. Neurons which are in the circle of the specified radius are updated during the learning procedure. Neurons which are closer to the winner neuron get more update. In the case the learning radius is set to 0, only the winner neuron's weights are updated. Default value is 7.

Back propagation learning algorithm. The class implements the back propagation learning algorithm, which is widely used for training multi-layer neural networks with continuous activation functions. Sample usage (training network to calculate XOR function):

    // initialize input and output values
    double[][] input = new double[4][] {
        new double[] {0, 0}, new double[] {0, 1},
        new double[] {1, 0}, new double[] {1, 1}
    };
    double[][] output = new double[4][] {
        new double[] {0}, new double[] {1},
        new double[] {1}, new double[] {0}
    };
    // create neural network
    ActivationNetwork network = new ActivationNetwork(
        new SigmoidFunction( 2 ),
        2,   // two inputs in the network
        2,   // two neurons in the first layer
        1 ); // one neuron in the second layer
    // create teacher
    BackPropagationLearning teacher = new BackPropagationLearning( network );
    // loop
    while ( !needToStop )
    {
        // run epoch of learning procedure
        double error = teacher.RunEpoch( input, output );
        // check error value to see if we need to stop
        // ...
    }

Supervised learning interface. The interface describes methods which should be implemented by all supervised learning algorithms. Supervised learning is the type of learning where the system's desired output is known at the learning stage. So, given sample input values and desired outputs, the system should adapt its internals to produce correct (or close to correct) results after the learning step is complete. Runs learning iteration. Input vector. Desired output vector. Returns learning error. Runs learning epoch. Array of input vectors. Array of output vectors. Returns sum of learning errors.
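The samples in this section stop on an external needToStop flag; another common pattern (an illustrative sketch, not part of the library) is to stop as soon as the epoch error returned by RunEpoch falls below a threshold, or after a maximum number of epochs:

    // teacher may be any ISupervisedLearning implementation,
    // for example the BackPropagationLearning instance from the sample above
    double errorLimit = 0.1;
    int    maxEpochs  = 10000;

    for ( int epoch = 0; epoch < maxEpochs; epoch++ )
    {
        // total learning error for the epoch
        double error = teacher.RunEpoch( input, output );

        if ( error <= errorLimit )
            break;
    }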
Initializes a new instance of the class. Network to teach. Runs learning iteration. Input vector. Desired output vector. Returns squared error (difference between current network's output and desired output) divided by 2. Runs one learning iteration and updates the neurons' weights. Runs learning epoch. Array of input vectors. Array of output vectors. Returns the total learning error for the epoch. See the Run method for details about learning error calculation. The method runs one learning epoch, by calling the Run method for each vector provided in the array. Calculates error values for all neurons of the network. Desired output vector. Returns the total squared error of the last layer divided by 2. Calculate weights updates. Network's input vector. Update network's weights. Learning rate, [0, 1]. The value determines speed of learning. Default value is 0.1. Momentum, [0, 1]. The value determines the portion of the previous weight update to use on the current iteration. Weight update values are calculated on each iteration depending on the neuron's error. The momentum specifies how much of the update comes from the previous iteration and how much from the current iteration. If the value is equal to 0.1, for example, then 0.1 of the previous update and 0.9 of the current update are used to update the weight's value. Default value is 0.0.

Perceptron learning algorithm. This learning algorithm is used to train a one-layer neural network of Activation Neurons with the Threshold activation function. See information about Perceptron and its learning algorithm. Initializes a new instance of the class. Network to teach. Invalid neural network. It should have one layer only. Runs learning iteration. Input vector. Desired output vector. Returns absolute error - the difference between the current network's output and the desired output. Runs one learning iteration and updates the neuron's weights in the case the neuron's output is not equal to the desired output. Runs learning epoch. Array of input vectors. Array of output vectors. Returns the total learning error for the epoch. See the Run method for details about learning error calculation. The method runs one learning epoch, by calling the Run method for each vector provided in the array. Learning rate, [0, 1]. The value determines speed of learning. Default value is 0.1.

Threshold activation function. The class represents the threshold activation function with the following expression:

    f(x) = 1, if x >= 0, otherwise 0

Output range of the function: [0, 1]. Initializes a new instance of the class. Calculates function value. Function input value. Function output value, f(x). The method calculates the function value at point x. Calculates function derivative (not supported). Input value. Always returns 0. The method is not supported, because it is not possible to calculate the derivative of the function. Calculates function derivative (not supported). Input value. Always returns 0. The method is not supported, because it is not possible to calculate the derivative of the function. Creates a new object that is a copy of the current instance. A new object that is a copy of this instance.
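As a small illustration of the perceptron learning algorithm together with the threshold function described above, the sketch below trains a single threshold neuron to compute logical AND (the data and the stopping condition are illustrative assumptions):

    // training data for the AND function
    double[][] input = new double[4][] {
        new double[] {0, 0}, new double[] {0, 1},
        new double[] {1, 0}, new double[] {1, 1}
    };
    double[][] output = new double[4][] {
        new double[] {0}, new double[] {0},
        new double[] {0}, new double[] {1}
    };
    // one-layer network with a single threshold neuron
    ActivationNetwork network = new ActivationNetwork(
        new ThresholdFunction( ), 2, 1 );
    // perceptron teacher (requires a one-layer network)
    PerceptronLearning teacher = new PerceptronLearning( network );
    teacher.LearningRate = 0.1;
    // AND is linearly separable, so the absolute error eventually reaches 0
    double error = 1;
    while ( error > 0 )
    {
        error = teacher.RunEpoch( input, output );
    }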
Base neural network class. This is a base neural network class, which represents a collection of neuron layers. Network's inputs count. Network's layers count. Network's layers. Network's output vector. Initializes a new instance of the class. Network's inputs count. Network's layers count. Protected constructor, which initializes the inputs count, layers count and layers members. Compute output vector of the network. Input vector. Returns network's output vector. The actual network's output vector is determined by the layers which comprise the network - it represents the output vector of the last layer of the network. The output vector is also stored in the Output property. The method may be called safely from multiple threads to compute the network's output value for the specified input values. However, the value of the Output property in a multi-threaded environment is not predictable, since it may hold the network's output computed from any of the caller threads. Multi-threaded access to the method is useful in those cases when it is required to improve performance by utilizing several threads and the computation is based on the immediate return value of the method, not on the network's Output property. Randomize layers of the network. Randomizes the network's layers by calling the Randomize method of each layer. Save network to specified file. File name to save network into. The neural network is saved using .NET serialization (binary formatter is used). Save network to specified file. Stream to save network into. The neural network is saved using .NET serialization (binary formatter is used). Load network from specified file. File name to load network from. Returns instance of the class with all properties initialized from file. The neural network is loaded from file using .NET serialization (binary formatter is used). Load network from specified file. Stream to load network from. Returns instance of the class with all properties initialized from file. The neural network is loaded from file using .NET serialization (binary formatter is used). Network's inputs count. Network's layers. Network's output vector. The way the network's output vector is calculated is determined by the layers which comprise the network. The property is not initialized (equals to null) until the Compute method is called.

Distance network. Distance network is a neural network of only one distance layer. The network is a base for such neural networks as SOM, Elastic Net, etc. Initializes a new instance of the class. Network's inputs count. Network's neurons count. The new network is randomized (see the Randomize method) after it is created. Get winner neuron. Index of the winner neuron. The method returns the index of the neuron whose weights have the minimum distance from the network's input.
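A brief sketch of using the GetWinner method after training a self organizing map (network, input, width and height refer to the variables from the SOM sample earlier in this section; the row-major index conversion is an assumption about the neuron layout):

    // compute the network's output for a sample and find the best matching neuron
    network.Compute( input );
    int winner = network.GetWinner( );

    // for a map created as width * height neurons, the flat winner index
    // can be mapped back to 2D coordinates (assuming row-major ordering)
    int mapX = winner % width;
    int mapY = winner / width;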
Fitness function used for chromosomes representing a collection of neural network's weights. Initializes a new instance of the class. Neural network for which fitness will be calculated. Input data samples for the neural network. Output data samples for the neural network (desired output). Length of inputs and outputs arrays must be equal and greater than 0. Length of each input vector must be equal to the neural network's inputs count. Evaluates chromosome. Chromosome to evaluate. Returns chromosome's fitness value. The method calculates the fitness value of the specified chromosome.

Delta rule learning algorithm. This learning algorithm is used to train a one-layer neural network of Activation Neurons with a continuous activation function, see SigmoidFunction for example. See information about the delta rule learning algorithm. Initializes a new instance of the class. Network to teach. Invalid neural network. It should have one layer only. Runs learning iteration. Input vector. Desired output vector. Returns squared error (difference between current network's output and desired output) divided by 2. Runs one learning iteration and updates the neuron's weights. Runs learning epoch. Array of input vectors. Array of output vectors. Returns the total learning error for the epoch. See the Run method for details about learning error calculation. The method runs one learning epoch, by calling the Run method for each vector provided in the array. Learning rate, [0, 1]. The value determines speed of learning. Default value is 0.1.

Elastic network learning algorithm. This class implements the elastic network learning algorithm and can be used to train Distance Networks. Initializes a new instance of the class. Neural network to train. Runs learning iteration. Input vector. Returns learning error - the total absolute difference between the neurons' weights and the corresponding inputs. The difference is measured according to the neurons' distance to the winner neuron. The method runs one learning iteration - it finds the winner neuron (the neuron whose weights have values closest to the specified input vector) and updates its weights (as well as the weights of neighbor neurons) in the way which decreases the difference with the specified input vector. Runs learning epoch. Array of input vectors. Returns the total learning error for the epoch. See the Run method for details about learning error calculation. The method runs one learning epoch, by calling the Run method for each vector provided in the array. Learning rate, [0, 1]. Determines speed of learning. Default value is 0.1. Learning radius, [0, 1]. Determines the amount of neurons to be updated around the winner neuron. Neurons which are in the circle of the specified radius are updated during the learning procedure. Neurons which are closer to the winner neuron get more update. Default value is 0.5.

Base neuron class. This is a base neuron class, which encapsulates such common properties as neuron's input, output and weights. Neuron's inputs count. Neuron's weights. Neuron's output value. Random number generator. The generator is used for randomization of the neuron's weights. Random generator range. Sets the range of the random generator. Affects initial values of the neuron's weights. Default value is [0, 1]. Initializes a new instance of the class. Neuron's inputs count. The new neuron will be randomized (see the Randomize method) after it is created. Randomize neuron. Initializes the neuron's weights with random values within the range specified by RandRange. Computes output value of neuron. Input vector. Returns neuron's output value. The actual neuron's output value is determined by the inherited class. The output value is also stored in the Output property. Random number generator. The property allows initializing the random generator with a custom seed. The generator is used for randomization of the neuron's weights. Random generator range. Sets the range of the random generator. Affects initial values of the neuron's weights. Default value is [0, 1]. Neuron's inputs count. Neuron's output value. The way the neuron's output value is calculated is determined by the inherited class. Neuron's weights.
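A short sketch of adjusting the weight randomization range before networks are created (RandRange is the static member documented above; Range is the range structure from the core AForge namespace, as used in the SOM sample):

    // weights of all subsequently created neurons will be initialized
    // in [-1, 1] instead of the default [0, 1] range
    Neuron.RandRange = new Range( -1, 1 );

    // networks created after this point pick up the new range
    ActivationNetwork network = new ActivationNetwork(
        new BipolarSigmoidFunction( 2 ), 2, 2, 1 );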
Activation layer. Activation layer is a layer of activation neurons. The layer is usually used in multi-layer neural networks. Initializes a new instance of the class. Layer's neurons count. Layer's inputs count. Activation function of neurons of the layer. The new layer is randomized (see the Randomize method) after it is created. Set new activation function for all neurons of the layer. Activation function to set. The method sets a new activation function for each neuron by setting its ActivationFunction property.

Activation neuron. Activation neuron computes the weighted sum of its inputs, adds the threshold value and then applies the activation function. The neuron is usually used in multi-layer neural networks. Threshold value. The value is added to the inputs' weighted sum before it is passed to the activation function. Activation function. The function is applied to the inputs' weighted sum plus the threshold value. Initializes a new instance of the class. Neuron's inputs count. Neuron's activation function. Randomize neuron. Calls the base class Randomize method to randomize the neuron's weights and then randomizes the threshold value. Computes output value of neuron. Input vector. Returns neuron's output value. The output value of an activation neuron is equal to the value of the neuron's activation function, whose argument is the weighted sum of its inputs plus the threshold value. The output value is also stored in the Output property. The method may be called safely from multiple threads to compute the neuron's output value for the specified input values. However, the value of the Output property in a multi-threaded environment is not predictable, since it may hold the neuron's output computed from any of the caller threads. Multi-threaded access to the method is useful in those cases when it is required to improve performance by utilizing several threads and the computation is based on the immediate return value of the method, not on the neuron's Output property. Wrong length of the input vector, which is not equal to the expected value. Threshold value. The value is added to the inputs' weighted sum before it is passed to the activation function. Neuron's activation function.

Activation network. Activation network is a base for multi-layer neural networks with activation functions. It consists of activation layers. Sample usage:

    // create activation network
    ActivationNetwork network = new ActivationNetwork(
        new SigmoidFunction( ), // sigmoid activation function
        3,                      // 3 inputs
        4, 1 );                 // 2 layers:
                                // 4 neurons in the first layer
                                // 1 neuron in the second layer

Initializes a new instance of the class. Activation function of neurons of the network. Network's inputs count. Array which specifies the amount of neurons in each layer of the neural network. The new network is randomized (see the Randomize method) after it is created. Set new activation function for all neurons of the network. Activation function to set. The method sets a new activation function for all neurons by calling the SetActivationFunction method for each layer of the network.
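A short sketch of the SetActivationFunction method described above, replacing the activation function of an existing network without recreating it (values are illustrative):

    // create a network with a unipolar sigmoid ...
    ActivationNetwork network = new ActivationNetwork(
        new SigmoidFunction( 2 ), 3, 4, 1 );

    // ... then switch every neuron to a bipolar sigmoid
    network.SetActivationFunction( new BipolarSigmoidFunction( 2 ) );

    // compute output of the reconfigured network
    double[] output = network.Compute( new double[] { 0.1, 0.5, 0.9 } );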
Neural networks' evolutionary learning algorithm, which is based on Genetic Algorithms. The class implements a supervised neural network learning algorithm based on Genetic Algorithms. For the given neural network, it creates a population of chromosomes which represent the neural network's weights. Then, during the learning process, the genetic population evolves and the weights represented by the best chromosome are set to the source neural network. See the Population class for additional information about genetic populations and evolution based search. Sample usage (training network to calculate XOR function):

    // initialize input and output values
    double[][] input = new double[4][] {
        new double[] {-1, -1}, new double[] {-1,  1},
        new double[] { 1, -1}, new double[] { 1,  1}
    };
    double[][] output = new double[4][] {
        new double[] {-1}, new double[] { 1},
        new double[] { 1}, new double[] {-1}
    };
    // create neural network
    ActivationNetwork network = new ActivationNetwork(
        new BipolarSigmoidFunction( 2 ),
        2,   // two inputs in the network
        2,   // two neurons in the first layer
        1 ); // one neuron in the second layer
    // create teacher
    EvolutionaryLearning teacher = new EvolutionaryLearning( network,
        100 ); // number of chromosomes in genetic population
    // loop
    while ( !needToStop )
    {
        // run epoch of learning procedure
        double error = teacher.RunEpoch( input, output );
        // check error value to see if we need to stop
        // ...
    }

Initializes a new instance of the class. Activation network to be trained. Size of genetic population. Random numbers generator used for initialization of the genetic population representing the neural network's weights and thresholds. Random numbers generator used to generate random factors for multiplication of the network's weights and thresholds during genetic mutation. Random numbers generator used to generate random values added to the neural network's weights and thresholds during genetic mutation. Method of selecting the best chromosomes in the genetic population. Crossover rate in the genetic population. Mutation rate in the genetic population. Rate of injection of random chromosomes during selection in the genetic population.

Initializes a new instance of the class. Activation network to be trained. Size of genetic population. This version of the constructor is used to create a genetic population for searching optimal neural network weights using a default set of parameters, which are:

    Selection method - elite;
    Crossover rate - 0.75;
    Mutation rate - 0.25;
    Rate of injection of random chromosomes during selection - 0.20;
    Random numbers generator for initializing a new chromosome - UniformGenerator( new Range( -1, 1 ) );
    Random numbers generator used during mutation for genes' multiplication - ExponentialGenerator( 1 );
    Random numbers generator used during mutation for adding a random value to genes - UniformGenerator( new Range( -0.5f, 0.5f ) ).

In order to have full control over the above default parameters, it is possible to use the extended version of the constructor, which allows specifying all of these parameters. Runs learning iteration. Input vector. Desired output vector. Returns learning error. The method is not implemented, since the evolutionary learning algorithm is global and requires all inputs/outputs in order to run one epoch. Use the RunEpoch method instead. The method is not implemented by design. Runs learning epoch. Array of input vectors. Array of output vectors. Returns the total squared learning error for the entire epoch. While running the neural network's learning process, it is required to pass the same input and output values for each epoch. On the very first run of the method it will initialize the evolutionary fitness function with the given input/output. So, changing input/output in the middle of the learning process will break it.
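For completeness, a sketch of the extended constructor with all parameters spelled out, using the same values as the documented defaults (the parameter order follows the list above; EliteSelection and the random number generator classes are assumed to come from AForge.Genetic and AForge.Math.Random - check the exact signature in your version of the library):

    EvolutionaryLearning teacher = new EvolutionaryLearning(
        network,                                           // activation network to train
        100,                                               // size of genetic population
        new UniformGenerator( new Range( -1, 1 ) ),        // chromosome generator
        new ExponentialGenerator( 1 ),                     // mutation multiplier generator
        new UniformGenerator( new Range( -0.5f, 0.5f ) ),  // mutation addition generator
        new EliteSelection( ),                             // selection method
        0.75,                                              // crossover rate
        0.25,                                              // mutation rate
        0.20 );                                            // random selection rate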
Distance layer. Distance layer is a layer of distance neurons. The layer is usually a single layer of such networks as the Kohonen Self Organizing Map, Elastic Net and Hamming Memory Net. Initializes a new instance of the class. Layer's neurons count. Layer's inputs count. The new layer is randomized (see the Randomize method) after it is created.

Resilient Backpropagation learning algorithm. This class implements the resilient backpropagation (RProp) learning algorithm. The RProp learning algorithm is one of the fastest learning algorithms for feed-forward neural networks which use only first-order information. Sample usage (training network to calculate XOR function):

    // initialize input and output values
    double[][] input = new double[4][] {
        new double[] {0, 0}, new double[] {0, 1},
        new double[] {1, 0}, new double[] {1, 1}
    };
    double[][] output = new double[4][] {
        new double[] {0}, new double[] {1},
        new double[] {1}, new double[] {0}
    };
    // create neural network
    ActivationNetwork network = new ActivationNetwork(
        new SigmoidFunction( 2 ),
        2,   // two inputs in the network
        2,   // two neurons in the first layer
        1 ); // one neuron in the second layer
    // create teacher
    ResilientBackpropagationLearning teacher =
        new ResilientBackpropagationLearning( network );
    // loop
    while ( !needToStop )
    {
        // run epoch of learning procedure
        double error = teacher.RunEpoch( input, output );
        // check error value to see if we need to stop
        // ...
    }

Initializes a new instance of the class. Network to teach. Runs learning iteration. Input vector. Desired output vector. Returns squared error (difference between current network's output and desired output) divided by 2. Runs one learning iteration and updates the neurons' weights. Runs learning epoch. Array of input vectors. Array of output vectors. Returns the total learning error for the epoch. See the Run method for details about learning error calculation. The method runs one learning epoch, by calling the Run method for each vector provided in the array. Resets current weight and threshold derivatives. Resets the current update steps using the given learning rate. Update network's weights. Calculates error values for all neurons of the network. Desired output vector. Returns the total squared error of the last layer divided by 2. Calculate weights updates. Network's input vector. Learning rate. The value determines speed of learning. Default value is 0.0125.

Distance neuron. Distance neuron computes its output as the distance between its weights and inputs - the sum of absolute differences between the weights' values and the corresponding inputs' values. The neuron is usually used in Kohonen Self Organizing Maps. Initializes a new instance of the class. Neuron's inputs count. Computes output value of neuron. Input vector. Returns neuron's output value. The output value of a distance neuron is equal to the distance between its weights and inputs - the sum of absolute differences. The output value is also stored in the Output property. The method may be called safely from multiple threads to compute the neuron's output value for the specified input values. However, the value of the Output property in a multi-threaded environment is not predictable, since it may hold the neuron's output computed from any of the caller threads. Multi-threaded access to the method is useful in those cases when it is required to improve performance by utilizing several threads and the computation is based on the immediate return value of the method, not on the neuron's Output property. Wrong length of the input vector, which is not equal to the expected value.
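To illustrate the distance computation described above, a small sketch (the indexing through the Layers and Neurons properties follows the members documented in this section; treat the exact member access as an assumption):

    // network of distance neurons: 3 inputs, 4 neurons
    DistanceNetwork network = new DistanceNetwork( 3, 4 );

    double[] input  = new double[] { 0.2, 0.5, 0.9 };
    double[] output = network.Compute( input );

    // output[i] equals the sum of absolute differences between the
    // i-th neuron's weights and the input vector:
    //   output[i] = |w[i][0] - 0.2| + |w[i][1] - 0.5| + |w[i][2] - 0.9|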