<?xml version="1.0"?>
<doc>
<assembly>
<name>AForge.Neuro</name>
</assembly>
<members>
<member name="T:AForge.Neuro.Learning.IUnsupervisedLearning">
<summary>
Unsupervised learning interface.
</summary>
<remarks><para>The interface describes methods, which should be implemented
by all unsupervised learning algorithms. Unsupervised learning is a
type of learning, where the system's desired output is not known at
the learning stage. Given sample input values, it is expected that the
system will organize itself in a way that finds similarities between the provided
samples.</para></remarks>
</member>
<member name="M:AForge.Neuro.Learning.IUnsupervisedLearning.Run(System.Double[])">
<summary>
Runs learning iteration.
</summary>
<param name="input">Input vector.</param>
<returns>Returns learning error.</returns>
</member>
<member name="M:AForge.Neuro.Learning.IUnsupervisedLearning.RunEpoch(System.Double[][])">
<summary>
Runs learning epoch.
</summary>
<param name="input">Array of input vectors.</param>
<returns>Returns sum of learning errors.</returns>
</member>
<member name="T:AForge.Neuro.SigmoidFunction">
<summary>
Sigmoid activation function.
</summary>
<remarks><para>The class represents a sigmoid activation function with
the following expression:
<code lang="none">
f(x)  = 1 / (1 + exp(-alpha * x))

f'(x) = alpha * exp(-alpha * x) / (1 + exp(-alpha * x))^2
      = alpha * f(x) * (1 - f(x))
</code>
</para>
<para>Output range of the function: <b>[0, 1]</b>.</para>
<para>Function's graph:</para>
<img src="img/neuro/sigmoid.bmp" width="242" height="172" />
</remarks>
</member>
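The derivative identity given above for the sigmoid, f'(x) = alpha * f(x) * (1 - f(x)), can be sanity-checked numerically. A minimal Python sketch, illustrative only - the function names are hypothetical and not part of the AForge API:

```python
import math

def sigmoid(x, alpha=2.0):
    # f(x) = 1 / (1 + exp(-alpha * x))
    return 1.0 / (1.0 + math.exp(-alpha * x))

def sigmoid_derivative(x, alpha=2.0):
    # closed form: alpha * f(x) * (1 - f(x))
    fx = sigmoid(x, alpha)
    return alpha * fx * (1.0 - fx)

# compare the closed form against a central finite difference
h = 1e-6
x = 0.3
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2.0 * h)
assert abs(numeric - sigmoid_derivative(x)) < 1e-6
```

The same check works for any alpha; with alpha = 2 (the class default), f(0) = 0.5 and f'(0) = 0.5.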
<member name="T:AForge.Neuro.IActivationFunction">
<summary>
Activation function interface.
</summary>
<remarks>All activation functions, which are supposed to be used with
neurons that calculate their output as a function of the weighted sum of
their inputs, should implement this interface.
</remarks>
</member>
<member name="M:AForge.Neuro.IActivationFunction.Function(System.Double)">
<summary>
Calculates function value.
</summary>
<param name="x">Function input value.</param>
<returns>Function output value, <i>f(x)</i>.</returns>
<remarks>The method calculates function value at point <paramref name="x"/>.</remarks>
</member>
<member name="M:AForge.Neuro.IActivationFunction.Derivative(System.Double)">
<summary>
Calculates function derivative.
</summary>
<param name="x">Function input value.</param>
<returns>Function derivative, <i>f'(x)</i>.</returns>
<remarks>The method calculates function derivative at point <paramref name="x"/>.</remarks>
</member>
<member name="M:AForge.Neuro.IActivationFunction.Derivative2(System.Double)">
<summary>
Calculates function derivative.
</summary>
<param name="y">Function output value - the value, which was obtained
with the help of the <see cref="M:AForge.Neuro.IActivationFunction.Function(System.Double)"/> method.</param>
<returns>Function derivative, <i>f'(x)</i>.</returns>
<remarks><para>The method calculates the same derivative value as the
<see cref="M:AForge.Neuro.IActivationFunction.Derivative(System.Double)"/> method, but it takes not the input <b>x</b> value
itself, but the function value, which was calculated previously with
the help of the <see cref="M:AForge.Neuro.IActivationFunction.Function(System.Double)"/> method.</para>
<para><note>Some applications require both the function value and the derivative value,
so they can reduce the amount of calculations by using this method to calculate the derivative.</note></para>
</remarks>
</member>
<member name="M:AForge.Neuro.SigmoidFunction.#ctor">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.SigmoidFunction"/> class.
</summary>
</member>
<member name="M:AForge.Neuro.SigmoidFunction.#ctor(System.Double)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.SigmoidFunction"/> class.
</summary>
<param name="alpha">Sigmoid's alpha value.</param>
</member>
<member name="M:AForge.Neuro.SigmoidFunction.Function(System.Double)">
<summary>
Calculates function value.
</summary>
<param name="x">Function input value.</param>
<returns>Function output value, <i>f(x)</i>.</returns>
<remarks>The method calculates function value at point <paramref name="x"/>.</remarks>
</member>
<member name="M:AForge.Neuro.SigmoidFunction.Derivative(System.Double)">
<summary>
Calculates function derivative.
</summary>
<param name="x">Function input value.</param>
<returns>Function derivative, <i>f'(x)</i>.</returns>
<remarks>The method calculates function derivative at point <paramref name="x"/>.</remarks>
</member>
<member name="M:AForge.Neuro.SigmoidFunction.Derivative2(System.Double)">
<summary>
Calculates function derivative.
</summary>
<param name="y">Function output value - the value, which was obtained
with the help of the <see cref="M:AForge.Neuro.SigmoidFunction.Function(System.Double)"/> method.</param>
<returns>Function derivative, <i>f'(x)</i>.</returns>
<remarks><para>The method calculates the same derivative value as the
<see cref="M:AForge.Neuro.SigmoidFunction.Derivative(System.Double)"/> method, but it takes not the input <b>x</b> value
itself, but the function value, which was calculated previously with
the help of the <see cref="M:AForge.Neuro.SigmoidFunction.Function(System.Double)"/> method.</para>
<para><note>Some applications require both the function value and the derivative value,
so they can reduce the amount of calculations by using this method to calculate the derivative.</note></para>
</remarks>
</member>
<member name="M:AForge.Neuro.SigmoidFunction.Clone">
<summary>
Creates a new object that is a copy of the current instance.
</summary>
<returns>
A new object that is a copy of this instance.
</returns>
</member>
<member name="P:AForge.Neuro.SigmoidFunction.Alpha">
<summary>
Sigmoid's alpha value.
</summary>
<remarks><para>The value determines the steepness of the function. Increasing the value of
this property makes the sigmoid look more like a threshold function. Decreasing the
value of this property makes the sigmoid very smooth (slowly growing from its
minimum value to its maximum value).</para>
<para>Default value is set to <b>2</b>.</para>
</remarks>
</member>
<member name="T:AForge.Neuro.Layer">
<summary>
Base neural layer class.
</summary>
<remarks>This is a base neural layer class, which represents
a collection of neurons.</remarks>
</member>
<member name="F:AForge.Neuro.Layer.inputsCount">
<summary>
Layer's inputs count.
</summary>
</member>
<member name="F:AForge.Neuro.Layer.neuronsCount">
<summary>
Layer's neurons count.
</summary>
</member>
<member name="F:AForge.Neuro.Layer.neurons">
<summary>
Layer's neurons.
</summary>
</member>
<member name="F:AForge.Neuro.Layer.output">
<summary>
Layer's output vector.
</summary>
</member>
<member name="M:AForge.Neuro.Layer.#ctor(System.Int32,System.Int32)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.Layer"/> class.
</summary>
<param name="neuronsCount">Layer's neurons count.</param>
<param name="inputsCount">Layer's inputs count.</param>
<remarks>Protected constructor, which initializes <see cref="F:AForge.Neuro.Layer.inputsCount"/>,
<see cref="F:AForge.Neuro.Layer.neuronsCount"/> and <see cref="F:AForge.Neuro.Layer.neurons"/> members.</remarks>
</member>
<member name="M:AForge.Neuro.Layer.Compute(System.Double[])">
<summary>
Compute output vector of the layer.
</summary>
<param name="input">Input vector.</param>
<returns>Returns layer's output vector.</returns>
<remarks><para>The actual layer's output vector is determined by the neurons
which comprise the layer - it consists of the output values of the layer's neurons.
The output vector is also stored in the <see cref="P:AForge.Neuro.Layer.Output"/> property.</para>
<para><note>The method may be called safely from multiple threads to compute the layer's
output value for the specified input values. However, the value of the
<see cref="P:AForge.Neuro.Layer.Output"/> property in a multi-threaded environment is not predictable,
since it may hold the layer's output computed from any of the caller threads. Multi-threaded
access to the method is useful when it is required to improve performance
by utilizing several threads and the computation is based on the immediate return value
of the method, not on the layer's output property.</note></para>
</remarks>
</member>
<member name="M:AForge.Neuro.Layer.Randomize">
<summary>
Randomize neurons of the layer.
</summary>
<remarks>Randomizes layer's neurons by calling <see cref="M:AForge.Neuro.Neuron.Randomize"/> method
of each neuron.</remarks>
</member>
<member name="P:AForge.Neuro.Layer.InputsCount">
<summary>
Layer's inputs count.
</summary>
</member>
<member name="P:AForge.Neuro.Layer.Neurons">
<summary>
Layer's neurons.
</summary>
</member>
<member name="P:AForge.Neuro.Layer.Output">
<summary>
Layer's output vector.
</summary>
<remarks><para>The way the layer's output vector is calculated is determined by the neurons
which comprise the layer.</para>
<para><note>The property is not initialized (equals <see langword="null"/>) until the
<see cref="M:AForge.Neuro.Layer.Compute(System.Double[])"/> method is called.</note></para>
</remarks>
</member>
<member name="T:AForge.Neuro.BipolarSigmoidFunction">
<summary>
Bipolar sigmoid activation function.
</summary>
<remarks><para>The class represents a bipolar sigmoid activation function with
the following expression:
<code lang="none">
f(x)  = 2 / (1 + exp(-alpha * x)) - 1

f'(x) = 2 * alpha * exp(-alpha * x) / (1 + exp(-alpha * x))^2
      = alpha * (1 - f(x)^2) / 2
</code>
</para>
<para>Output range of the function: <b>[-1, 1]</b>.</para>
<para>Function's graph:</para>
<img src="img/neuro/sigmoid_bipolar.bmp" width="242" height="172" />
</remarks>
</member>
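The bipolar identity above, f'(x) = alpha * (1 - f(x)^2) / 2, can likewise be verified numerically. A minimal Python sketch, illustrative only - the names are hypothetical and not part of the AForge API:

```python
import math

def bipolar_sigmoid(x, alpha=2.0):
    # f(x) = 2 / (1 + exp(-alpha * x)) - 1
    return 2.0 / (1.0 + math.exp(-alpha * x)) - 1.0

def bipolar_sigmoid_derivative(x, alpha=2.0):
    # closed form: alpha * (1 - f(x)^2) / 2
    fx = bipolar_sigmoid(x, alpha)
    return alpha * (1.0 - fx * fx) / 2.0

# compare the closed form against a central finite difference
h = 1e-6
numeric = (bipolar_sigmoid(0.3 + h) - bipolar_sigmoid(0.3 - h)) / (2.0 * h)
assert abs(numeric - bipolar_sigmoid_derivative(0.3)) < 1e-6
```

With alpha = 2 (the class default), f(0) = 0 and f'(0) = alpha / 2 = 1.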
<member name="M:AForge.Neuro.BipolarSigmoidFunction.#ctor">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.BipolarSigmoidFunction"/> class.
</summary>
</member>
<member name="M:AForge.Neuro.BipolarSigmoidFunction.#ctor(System.Double)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.BipolarSigmoidFunction"/> class.
</summary>
<param name="alpha">Sigmoid's alpha value.</param>
</member>
<member name="M:AForge.Neuro.BipolarSigmoidFunction.Function(System.Double)">
<summary>
Calculates function value.
</summary>
<param name="x">Function input value.</param>
<returns>Function output value, <i>f(x)</i>.</returns>
<remarks>The method calculates function value at point <paramref name="x"/>.</remarks>
</member>
<member name="M:AForge.Neuro.BipolarSigmoidFunction.Derivative(System.Double)">
<summary>
Calculates function derivative.
</summary>
<param name="x">Function input value.</param>
<returns>Function derivative, <i>f'(x)</i>.</returns>
<remarks>The method calculates function derivative at point <paramref name="x"/>.</remarks>
</member>
<member name="M:AForge.Neuro.BipolarSigmoidFunction.Derivative2(System.Double)">
<summary>
Calculates function derivative.
</summary>
<param name="y">Function output value - the value, which was obtained
with the help of the <see cref="M:AForge.Neuro.BipolarSigmoidFunction.Function(System.Double)"/> method.</param>
<returns>Function derivative, <i>f'(x)</i>.</returns>
<remarks><para>The method calculates the same derivative value as the
<see cref="M:AForge.Neuro.BipolarSigmoidFunction.Derivative(System.Double)"/> method, but it takes not the input <b>x</b> value
itself, but the function value, which was calculated previously with
the help of the <see cref="M:AForge.Neuro.BipolarSigmoidFunction.Function(System.Double)"/> method.</para>
<para><note>Some applications require both the function value and the derivative value,
so they can reduce the amount of calculations by using this method to calculate the derivative.</note></para>
</remarks>
</member>
<member name="M:AForge.Neuro.BipolarSigmoidFunction.Clone">
<summary>
Creates a new object that is a copy of the current instance.
</summary>
<returns>
A new object that is a copy of this instance.
</returns>
</member>
<member name="P:AForge.Neuro.BipolarSigmoidFunction.Alpha">
<summary>
Sigmoid's alpha value.
</summary>
<remarks><para>The value determines the steepness of the function. Increasing the value of
this property makes the sigmoid look more like a threshold function. Decreasing the
value of this property makes the sigmoid very smooth (slowly growing from its
minimum value to its maximum value).</para>
<para>Default value is set to <b>2</b>.</para>
</remarks>
</member>
<member name="T:AForge.Neuro.Learning.SOMLearning">
<summary>
Kohonen Self Organizing Map (SOM) learning algorithm.
</summary>
<remarks><para>This class implements Kohonen's SOM learning algorithm, which
is widely used in clustering tasks. The class allows training of
<see cref="T:AForge.Neuro.DistanceNetwork">Distance Networks</see>.</para>
<para>Sample usage (clustering RGB colors):</para>
<code>
// set range for randomization of neurons' weights
Neuron.RandRange = new Range( 0, 255 );
// create network
DistanceNetwork network = new DistanceNetwork(
        3,           // three inputs in the network
        100 * 100 ); // 10000 neurons
// create learning algorithm
SOMLearning trainer = new SOMLearning( network );
// network's input
double[] input = new double[3];
// loop
while ( !needToStop )
{
    input[0] = rand.Next( 256 );
    input[1] = rand.Next( 256 );
    input[2] = rand.Next( 256 );

    trainer.Run( input );

    // ...
    // update learning rate and radius continuously,
    // so the network may reach a steady state
}
</code>
</remarks>
</member>
<member name="M:AForge.Neuro.Learning.SOMLearning.#ctor(AForge.Neuro.DistanceNetwork)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.SOMLearning"/> class.
</summary>
<param name="network">Neural network to train.</param>
<remarks><para>This constructor supposes that a square network will be passed for training -
it should be possible to get an integer square root of the network's neuron count.</para></remarks>
<exception cref="T:System.ArgumentException">Invalid network size - square network is expected.</exception>
</member>
<member name="M:AForge.Neuro.Learning.SOMLearning.#ctor(AForge.Neuro.DistanceNetwork,System.Int32,System.Int32)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.SOMLearning"/> class.
</summary>
<param name="network">Neural network to train.</param>
<param name="width">Neural network's width.</param>
<param name="height">Neural network's height.</param>
<remarks>The constructor allows passing a network of arbitrary rectangular shape.
The amount of neurons in the network should be equal to <b>width</b> * <b>height</b>.
</remarks>
<exception cref="T:System.ArgumentException">Invalid network size - network size does not correspond
to specified width and height.</exception>
</member>
<member name="M:AForge.Neuro.Learning.SOMLearning.Run(System.Double[])">
<summary>
Runs learning iteration.
</summary>
<param name="input">Input vector.</param>
<returns>Returns learning error - the summary absolute difference between the neurons' weights
and the appropriate inputs. The difference is measured according to the neurons'
distance to the winner neuron.</returns>
<remarks><para>The method runs one learning iteration - it finds the winner neuron (the neuron
which has weights with values closest to the specified input vector) and updates its weights
(as well as the weights of neighbor neurons) in a way that decreases the difference from the
specified input vector.</para></remarks>
</member>
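The find-winner-then-update-neighbors iteration described above can be sketched schematically. This Python sketch is illustrative only, under common SOM assumptions (Manhattan distance for the winner, a Gaussian neighborhood falloff) - it is not the actual AForge implementation:

```python
import math

def som_iteration(weights, grid_w, x, rate=0.1, radius=7.0):
    """One schematic SOM step: find the winner, pull neighbors toward x.
    weights: list of weight vectors laid out row-by-row on a grid_w-wide grid."""
    # winner = neuron whose weights are closest to the input vector
    winner = min(range(len(weights)),
                 key=lambda i: sum(abs(w - v) for w, v in zip(weights[i], x)))
    wy, wx = divmod(winner, grid_w)
    error = 0.0
    for i, w in enumerate(weights):
        ny, nx = divmod(i, grid_w)
        dist2 = (ny - wy) ** 2 + (nx - wx) ** 2
        if dist2 <= radius * radius:
            # Gaussian neighborhood: neurons closer to the winner get larger updates
            factor = math.exp(-dist2 / (2.0 * radius * radius))
            for j in range(len(w)):
                delta = (x[j] - w[j]) * factor * rate
                error += abs(delta)
                w[j] += delta
    return error
```

Shrinking `rate` and `radius` over time, as the sample usage suggests, lets the map settle into a steady state.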
<member name="M:AForge.Neuro.Learning.SOMLearning.RunEpoch(System.Double[][])">
<summary>
Runs learning epoch.
</summary>
<param name="input">Array of input vectors.</param>
<returns>Returns summary learning error for the epoch. See <see cref="M:AForge.Neuro.Learning.SOMLearning.Run(System.Double[])"/>
method for details about learning error calculation.</returns>
<remarks><para>The method runs one learning epoch, by calling <see cref="M:AForge.Neuro.Learning.SOMLearning.Run(System.Double[])"/> method
for each vector provided in the <paramref name="input"/> array.</para></remarks>
</member>
<member name="P:AForge.Neuro.Learning.SOMLearning.LearningRate">
<summary>
Learning rate, [0, 1].
</summary>
<remarks><para>Determines speed of learning.</para>
<para>Default value equals to <b>0.1</b>.</para>
</remarks>
</member>
<member name="P:AForge.Neuro.Learning.SOMLearning.LearningRadius">
<summary>
Learning radius.
</summary>
<remarks><para>Determines the amount of neurons to be updated around the
winner neuron. Neurons, which are in the circle of the specified radius,
are updated during the learning procedure. Neurons, which are closer
to the winner neuron, receive larger updates.</para>
<para><note>In the case the learning radius is set to 0, only the winner
neuron's weights are updated.</note></para>
<para>Default value equals to <b>7</b>.</para>
</remarks>
</member>
<member name="T:AForge.Neuro.Learning.BackPropagationLearning">
<summary>
Back propagation learning algorithm.
</summary>
<remarks><para>The class implements the back propagation learning algorithm,
which is widely used for training multi-layer neural networks with
continuous activation functions.</para>
<para>Sample usage (training network to calculate XOR function):</para>
<code>
// initialize input and output values
double[][] input = new double[4][] {
    new double[] {0, 0}, new double[] {0, 1},
    new double[] {1, 0}, new double[] {1, 1}
};
double[][] output = new double[4][] {
    new double[] {0}, new double[] {1},
    new double[] {1}, new double[] {0}
};
// create neural network
ActivationNetwork network = new ActivationNetwork(
    new SigmoidFunction( 2 ),
    2,   // two inputs in the network
    2,   // two neurons in the first layer
    1 ); // one neuron in the second layer
// create teacher
BackPropagationLearning teacher = new BackPropagationLearning( network );
// loop
while ( !needToStop )
{
    // run epoch of learning procedure
    double error = teacher.RunEpoch( input, output );
    // check error value to see if we need to stop
    // ...
}
</code>
</remarks>
<seealso cref="T:AForge.Neuro.Learning.EvolutionaryLearning"/>
</member>
<member name="T:AForge.Neuro.Learning.ISupervisedLearning">
<summary>
Supervised learning interface.
</summary>
<remarks><para>The interface describes methods, which should be implemented
by all supervised learning algorithms. Supervised learning is a
type of learning, where the system's desired output is known at
the learning stage. So, given sample input values and desired outputs, the
system should adapt its internals to produce correct (or close to correct)
results after the learning step is complete.</para></remarks>
</member>
<member name="M:AForge.Neuro.Learning.ISupervisedLearning.Run(System.Double[],System.Double[])">
<summary>
Runs learning iteration.
</summary>
<param name="input">Input vector.</param>
<param name="output">Desired output vector.</param>
<returns>Returns learning error.</returns>
</member>
<member name="M:AForge.Neuro.Learning.ISupervisedLearning.RunEpoch(System.Double[][],System.Double[][])">
<summary>
Runs learning epoch.
</summary>
<param name="input">Array of input vectors.</param>
<param name="output">Array of output vectors.</param>
<returns>Returns sum of learning errors.</returns>
</member>
<member name="M:AForge.Neuro.Learning.BackPropagationLearning.#ctor(AForge.Neuro.ActivationNetwork)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.BackPropagationLearning"/> class.
</summary>
<param name="network">Network to teach.</param>
</member>
<member name="M:AForge.Neuro.Learning.BackPropagationLearning.Run(System.Double[],System.Double[])">
<summary>
Runs learning iteration.
</summary>
<param name="input">Input vector.</param>
<param name="output">Desired output vector.</param>
<returns>Returns squared error (difference between current network's output and
desired output) divided by 2.</returns>
<remarks><para>Runs one learning iteration and updates neuron's
weights.</para></remarks>
</member>
<member name="M:AForge.Neuro.Learning.BackPropagationLearning.RunEpoch(System.Double[][],System.Double[][])">
<summary>
Runs learning epoch.
</summary>
<param name="input">Array of input vectors.</param>
<param name="output">Array of output vectors.</param>
<returns>Returns summary learning error for the epoch. See <see cref="M:AForge.Neuro.Learning.BackPropagationLearning.Run(System.Double[],System.Double[])"/>
method for details about learning error calculation.</returns>
<remarks><para>The method runs one learning epoch, by calling <see cref="M:AForge.Neuro.Learning.BackPropagationLearning.Run(System.Double[],System.Double[])"/> method
for each vector provided in the <paramref name="input"/> array.</para></remarks>
</member>
<member name="M:AForge.Neuro.Learning.BackPropagationLearning.CalculateError(System.Double[])">
<summary>
Calculates error values for all neurons of the network.
</summary>
<param name="desiredOutput">Desired output vector.</param>
<returns>Returns summary squared error of the last layer divided by 2.</returns>
</member>
<member name="M:AForge.Neuro.Learning.BackPropagationLearning.CalculateUpdates(System.Double[])">
<summary>
Calculate weights updates.
</summary>
<param name="input">Network's input vector.</param>
</member>
<member name="M:AForge.Neuro.Learning.BackPropagationLearning.UpdateNetwork">
<summary>
Update network's weights.
</summary>
</member>
<member name="P:AForge.Neuro.Learning.BackPropagationLearning.LearningRate">
<summary>
Learning rate, [0, 1].
</summary>
<remarks><para>The value determines speed of learning.</para>
<para>Default value equals to <b>0.1</b>.</para>
</remarks>
</member>
<member name="P:AForge.Neuro.Learning.BackPropagationLearning.Momentum">
<summary>
Momentum, [0, 1].
</summary>
<remarks><para>The value determines the portion of the previous weight update
to use on the current iteration. Weight update values are calculated on
each iteration depending on the neurons' errors. The momentum specifies the amount
of update to take from the previous iteration and the amount of update
to take from the current iteration. If the value is equal to 0.1, for example,
then a 0.1 portion of the previous update and a 0.9 portion of the current update are used
to update the weight's value.</para>
<para>Default value equals to <b>0.0</b>.</para>
</remarks>
</member>
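The momentum blending described above can be written out directly. A minimal Python sketch of that arithmetic, illustrative only and not the AForge implementation itself:

```python
def blended_update(prev_update, current_update, momentum=0.1):
    # `momentum` portion of the previous iteration's update plus
    # (1 - momentum) portion of the freshly computed update
    return momentum * prev_update + (1.0 - momentum) * current_update

# with momentum = 0.1: 0.1 of the old update, 0.9 of the new one
u = blended_update(1.0, 0.0, momentum=0.1)
assert abs(u - 0.1) < 1e-12
```

A momentum of 0 (the default) reduces the blend to the current update alone, i.e. plain gradient steps.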
<member name="T:AForge.Neuro.Learning.PerceptronLearning">
<summary>
Perceptron learning algorithm.
</summary>
<remarks><para>This learning algorithm is used to train a one-layer neural
network of <see cref="T:AForge.Neuro.ActivationNeuron">Activation Neurons</see>
with the <see cref="T:AForge.Neuro.ThresholdFunction">Threshold</see>
activation function.</para>
<para>See information about the <a href="http://en.wikipedia.org/wiki/Perceptron">Perceptron</a>
and its learning algorithm.</para>
</remarks>
</member>
<member name="M:AForge.Neuro.Learning.PerceptronLearning.#ctor(AForge.Neuro.ActivationNetwork)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.PerceptronLearning"/> class.
</summary>
<param name="network">Network to teach.</param>
<exception cref="T:System.ArgumentException">Invalid neural network. It should have one layer only.</exception>
</member>
<member name="M:AForge.Neuro.Learning.PerceptronLearning.Run(System.Double[],System.Double[])">
<summary>
Runs learning iteration.
</summary>
<param name="input">Input vector.</param>
<param name="output">Desired output vector.</param>
<returns>Returns absolute error - the difference between the current network's output and
the desired output.</returns>
<remarks><para>Runs one learning iteration and updates the neuron's
weights in the case the neuron's output is not equal to the
desired output.</para></remarks>
</member>
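The update-only-on-error rule described above is the classic perceptron rule. A schematic Python sketch for a single threshold neuron, illustrative only - the names and the returned bias are conveniences of this sketch, not the AForge API:

```python
def perceptron_run(weights, bias, x, desired, rate=0.1):
    """One learning iteration for a single threshold neuron (sketch)."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    y = 1.0 if s >= 0 else 0.0  # threshold activation
    e = desired - y             # error: desired minus actual output
    if e != 0:
        # weights are adjusted only when the output was wrong
        for i in range(len(weights)):
            weights[i] += rate * e * x[i]
        bias += rate * e
    return abs(e), bias
```

Repeating this over all samples until the error stays at zero is exactly what an epoch of perceptron learning does for linearly separable data.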
<member name="M:AForge.Neuro.Learning.PerceptronLearning.RunEpoch(System.Double[][],System.Double[][])">
<summary>
Runs learning epoch.
</summary>
<param name="input">Array of input vectors.</param>
<param name="output">Array of output vectors.</param>
<returns>Returns summary learning error for the epoch. See <see cref="M:AForge.Neuro.Learning.PerceptronLearning.Run(System.Double[],System.Double[])"/>
method for details about learning error calculation.</returns>
<remarks><para>The method runs one learning epoch, by calling <see cref="M:AForge.Neuro.Learning.PerceptronLearning.Run(System.Double[],System.Double[])"/> method
for each vector provided in the <paramref name="input"/> array.</para></remarks>
</member>
<member name="P:AForge.Neuro.Learning.PerceptronLearning.LearningRate">
<summary>
Learning rate, [0, 1].
</summary>
<remarks><para>The value determines speed of learning.</para>
<para>Default value equals to <b>0.1</b>.</para>
</remarks>
</member>
<member name="T:AForge.Neuro.ThresholdFunction">
<summary>
Threshold activation function.
</summary>
<remarks><para>The class represents a threshold activation function with
the following expression:
<code lang="none">
f(x) = 1, if x >= 0, otherwise 0
</code>
</para>
<para>Output range of the function: <b>[0, 1]</b>.</para>
<para>Function's graph:</para>
<img src="img/neuro/threshold.bmp" width="242" height="172" />
</remarks>
</member>
<member name="M:AForge.Neuro.ThresholdFunction.#ctor">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.ThresholdFunction"/> class.
</summary>
</member>
<member name="M:AForge.Neuro.ThresholdFunction.Function(System.Double)">
<summary>
Calculates function value.
</summary>
<param name="x">Function input value.</param>
<returns>Function output value, <i>f(x)</i>.</returns>
<remarks>The method calculates function value at point <paramref name="x"/>.</remarks>
</member>
<member name="M:AForge.Neuro.ThresholdFunction.Derivative(System.Double)">
<summary>
Calculates function derivative (not supported).
</summary>
<param name="x">Input value.</param>
<returns>Always returns 0.</returns>
<remarks><para><note>The method is not supported, because it is not possible to
calculate derivative of the function.</note></para></remarks>
</member>
<member name="M:AForge.Neuro.ThresholdFunction.Derivative2(System.Double)">
<summary>
Calculates function derivative (not supported).
</summary>
<param name="y">Input value.</param>
<returns>Always returns 0.</returns>
<remarks><para><note>The method is not supported, because it is not possible to
calculate derivative of the function.</note></para></remarks>
</member>
<member name="M:AForge.Neuro.ThresholdFunction.Clone">
<summary>
Creates a new object that is a copy of the current instance.
</summary>
<returns>
A new object that is a copy of this instance.
</returns>
</member>
<member name="T:AForge.Neuro.Network">
<summary>
Base neural network class.
</summary>
<remarks>This is a base neural network class, which represents
a collection of neuron layers.</remarks>
</member>
<member name="F:AForge.Neuro.Network.inputsCount">
<summary>
Network's inputs count.
</summary>
</member>
<member name="F:AForge.Neuro.Network.layersCount">
<summary>
Network's layers count.
</summary>
</member>
<member name="F:AForge.Neuro.Network.layers">
<summary>
Network's layers.
</summary>
</member>
<member name="F:AForge.Neuro.Network.output">
<summary>
Network's output vector.
</summary>
</member>
<member name="M:AForge.Neuro.Network.#ctor(System.Int32,System.Int32)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.Network"/> class.
</summary>
<param name="inputsCount">Network's inputs count.</param>
<param name="layersCount">Network's layers count.</param>
<remarks>Protected constructor, which initializes <see cref="F:AForge.Neuro.Network.inputsCount"/>,
<see cref="F:AForge.Neuro.Network.layersCount"/> and <see cref="F:AForge.Neuro.Network.layers"/> members.</remarks>
</member>
<member name="M:AForge.Neuro.Network.Compute(System.Double[])">
<summary>
Compute output vector of the network.
</summary>
<param name="input">Input vector.</param>
<returns>Returns network's output vector.</returns>
<remarks><para>The actual network's output vector is determined by the layers,
which comprise the network: it is the output vector of the last layer
of the network. The output vector is also stored in <see cref="P:AForge.Neuro.Network.Output"/> property.</para>
<para><note>The method may be called safely from multiple threads to compute network's
output value for the specified input values. However, the value of
<see cref="P:AForge.Neuro.Network.Output"/> property in multi-threaded environment is not predictable,
since it may hold network's output computed from any of the caller threads. Multi-threaded
access to the method is useful in those cases when it is required to improve performance
by utilizing several threads and the computation is based on the immediate return value
of the method, but not on network's output property.</note></para>
</remarks>
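<example><para>A minimal usage sketch (the network instance and input values below are hypothetical;
any already created network works the same way):</para>
<code>
// compute network's output for the given input vector
double[] input = new double[] { 0.1, 0.5, 0.9 };
double[] result = network.Compute( input );
</code>
</example>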
</member>
<member name="M:AForge.Neuro.Network.Randomize">
<summary>
Randomize layers of the network.
</summary>
<remarks>Randomizes network's layers by calling <see cref="M:AForge.Neuro.Layer.Randomize"/> method
of each layer.</remarks>
</member>
<member name="M:AForge.Neuro.Network.Save(System.String)">
<summary>
Save network to specified file.
</summary>
<param name="fileName">File name to save network into.</param>
<remarks><para>The neural network is saved using .NET serialization (binary formatter is used).</para></remarks>
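<example><para>A minimal save/restore sketch (the file name is hypothetical; <see cref="M:AForge.Neuro.Network.Load(System.String)"/>
returns the base <see cref="T:AForge.Neuro.Network"/> type, so a cast is needed to get the concrete network back):</para>
<code>
// save trained network to file
network.Save( "network.bin" );
// ... later, restore it from the same file
ActivationNetwork loadedNetwork = (ActivationNetwork) Network.Load( "network.bin" );
</code>
</example>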
</member>
<member name="M:AForge.Neuro.Network.Save(System.IO.Stream)">
<summary>
Save network to specified stream.
</summary>
<param name="stream">Stream to save network into.</param>
<remarks><para>The neural network is saved using .NET serialization (binary formatter is used).</para></remarks>
</member>
<member name="M:AForge.Neuro.Network.Load(System.String)">
<summary>
Load network from specified file.
</summary>
<param name="fileName">File name to load network from.</param>
<returns>Returns instance of <see cref="T:AForge.Neuro.Network"/> class with all properties initialized from file.</returns>
<remarks><para>Neural network is loaded from file using .NET serialization (binary formatter is used).</para></remarks>
</member>
<member name="M:AForge.Neuro.Network.Load(System.IO.Stream)">
<summary>
Load network from specified stream.
</summary>
<param name="stream">Stream to load network from.</param>
<returns>Returns instance of <see cref="T:AForge.Neuro.Network"/> class with all properties initialized from stream.</returns>
<remarks><para>Neural network is loaded from stream using .NET serialization (binary formatter is used).</para></remarks>
</member>
<member name="P:AForge.Neuro.Network.InputsCount">
<summary>
Network's inputs count.
</summary>
</member>
<member name="P:AForge.Neuro.Network.Layers">
<summary>
Network's layers.
</summary>
</member>
<member name="P:AForge.Neuro.Network.Output">
<summary>
Network's output vector.
</summary>
<remarks><para>The calculation way of network's output vector is determined by
layers, which comprise the network.</para>
<para><note>The property is not initialized (equals to <see langword="null"/>) until
<see cref="M:AForge.Neuro.Network.Compute(System.Double[])"/> method is called.</note></para>
</remarks>
</member>
<member name="T:AForge.Neuro.DistanceNetwork">
<summary>
Distance network.
</summary>
<remarks>Distance network is a neural network of only one <see cref="T:AForge.Neuro.DistanceLayer">distance
layer</see>. The network is a base for such neural networks as SOM, Elastic net, etc.
</remarks>
</member>
<member name="M:AForge.Neuro.DistanceNetwork.#ctor(System.Int32,System.Int32)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.DistanceNetwork"/> class.
</summary>
<param name="inputsCount">Network's inputs count.</param>
<param name="neuronsCount">Network's neurons count.</param>
<remarks>The new network is randomized (see <see cref="M:AForge.Neuro.Neuron.Randomize"/>
method) after it is created.</remarks>
</member>
<member name="M:AForge.Neuro.DistanceNetwork.GetWinner">
<summary>
Get winner neuron.
</summary>
<returns>Index of the winner neuron.</returns>
<remarks>The method returns index of the neuron, whose weights have
the minimum distance from network's input.</remarks>
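<example><para>A minimal sketch (the network size and input vector are hypothetical; the input must be
processed with <see cref="M:AForge.Neuro.Network.Compute(System.Double[])"/> before querying the winner):</para>
<code>
// create distance network with 2 inputs and 4 neurons
DistanceNetwork network = new DistanceNetwork( 2, 4 );
// compute network's output for the input vector first
network.Compute( new double[] { 0.5, 0.5 } );
// get index of the neuron closest to the input
int winner = network.GetWinner( );
</code>
</example>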
</member>
<member name="T:AForge.Neuro.Learning.EvolutionaryFitness">
<summary>
Fitness function used for chromosomes representing collection of neural network's weights.
</summary>
</member>
<member name="M:AForge.Neuro.Learning.EvolutionaryFitness.#ctor(AForge.Neuro.ActivationNetwork,System.Double[][],System.Double[][])">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.EvolutionaryFitness"/> class.
</summary>
<param name="network">Neural network for which fitness will be calculated.</param>
<param name="input">Input data samples for neural network.</param>
<param name="output">Output data samples for neural network (desired output).</param>
<exception cref="T:System.ArgumentException">Length of inputs and outputs arrays must be equal and greater than 0.</exception>
<exception cref="T:System.ArgumentException">Length of each input vector must be equal to neural network's inputs count.</exception>
</member>
<member name="M:AForge.Neuro.Learning.EvolutionaryFitness.Evaluate(AForge.Genetic.IChromosome)">
<summary>
Evaluates chromosome.
</summary>
<param name="chromosome">Chromosome to evaluate.</param>
<returns>Returns chromosome's fitness value.</returns>
<remarks>The method calculates fitness value of the specified
chromosome.</remarks>
</member>
<member name="T:AForge.Neuro.Learning.DeltaRuleLearning">
<summary>
Delta rule learning algorithm.
</summary>
<remarks><para>This learning algorithm is used to train one layer neural
networks of <see cref="T:AForge.Neuro.ActivationNeuron">Activation Neurons</see>
with continuous activation function, see <see cref="T:AForge.Neuro.SigmoidFunction"/>
for example.</para>
<para>See information about <a href="http://en.wikipedia.org/wiki/Delta_rule">delta rule</a>
learning algorithm.</para>
</remarks>
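<example><para>Sample usage (a minimal sketch modeled on the other learning classes in this assembly;
training a one layer network to calculate logical AND, which is linearly separable and so learnable
by a single layer):</para>
<code>
// initialize input and output values
double[][] input = new double[4][] {
new double[] {0, 0}, new double[] {0, 1},
new double[] {1, 0}, new double[] {1, 1}
};
double[][] output = new double[4][] {
new double[] {0}, new double[] {0},
new double[] {0}, new double[] {1}
};
// create one layer neural network
ActivationNetwork network = new ActivationNetwork(
new SigmoidFunction( 2 ),
2, // two inputs in the network
1 ); // one layer with a single neuron
// create teacher
DeltaRuleLearning teacher = new DeltaRuleLearning( network );
// loop
while ( !needToStop )
{
// run epoch of learning procedure
double error = teacher.RunEpoch( input, output );
// check error value to see if we need to stop
// ...
}
</code>
</example>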
</member>
<member name="M:AForge.Neuro.Learning.DeltaRuleLearning.#ctor(AForge.Neuro.ActivationNetwork)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.DeltaRuleLearning"/> class.
</summary>
<param name="network">Network to teach.</param>
<exception cref="T:System.ArgumentException">Invalid neural network. It should have one layer only.</exception>
</member>
<member name="M:AForge.Neuro.Learning.DeltaRuleLearning.Run(System.Double[],System.Double[])">
<summary>
Runs learning iteration.
</summary>
<param name="input">Input vector.</param>
<param name="output">Desired output vector.</param>
<returns>Returns squared error (difference between current network's output and
desired output) divided by 2.</returns>
<remarks><para>Runs one learning iteration and updates neuron's
weights.</para></remarks>
</member>
<member name="M:AForge.Neuro.Learning.DeltaRuleLearning.RunEpoch(System.Double[][],System.Double[][])">
<summary>
Runs learning epoch.
</summary>
<param name="input">Array of input vectors.</param>
<param name="output">Array of output vectors.</param>
<returns>Returns summary learning error for the epoch. See <see cref="M:AForge.Neuro.Learning.DeltaRuleLearning.Run(System.Double[],System.Double[])"/>
method for details about learning error calculation.</returns>
<remarks><para>The method runs one learning epoch, by calling <see cref="M:AForge.Neuro.Learning.DeltaRuleLearning.Run(System.Double[],System.Double[])"/> method
for each vector provided in the <paramref name="input"/> array.</para></remarks>
</member>
<member name="P:AForge.Neuro.Learning.DeltaRuleLearning.LearningRate">
<summary>
Learning rate, [0, 1].
</summary>
<remarks><para>The value determines speed of learning.</para>
<para>Default value equals to <b>0.1</b>.</para>
</remarks>
</member>
<member name="T:AForge.Neuro.Learning.ElasticNetworkLearning">
<summary>
Elastic network learning algorithm.
</summary>
<remarks><para>This class implements elastic network's learning algorithm and
allows to train <see cref="T:AForge.Neuro.DistanceNetwork">Distance Networks</see>.</para>
</remarks>
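<example><para>Sample usage (a minimal sketch; the <c>input</c> array of 2D points and the
<c>needToStop</c> flag are assumed to be prepared by the caller):</para>
<code>
// create distance network with 2 inputs and 10 neurons
DistanceNetwork network = new DistanceNetwork( 2, 10 );
// create teacher
ElasticNetworkLearning teacher = new ElasticNetworkLearning( network );
// loop
while ( !needToStop )
{
// run epoch of learning procedure over the array of input vectors
double error = teacher.RunEpoch( input );
// check error value to see if we need to stop
// ...
}
</code>
</example>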
</member>
<member name="M:AForge.Neuro.Learning.ElasticNetworkLearning.#ctor(AForge.Neuro.DistanceNetwork)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.ElasticNetworkLearning"/> class.
</summary>
<param name="network">Neural network to train.</param>
</member>
<member name="M:AForge.Neuro.Learning.ElasticNetworkLearning.Run(System.Double[])">
<summary>
Runs learning iteration.
</summary>
<param name="input">Input vector.</param>
<returns>Returns learning error - summary absolute difference between neurons'
weights and appropriate inputs. The difference is measured according to the neuron's
distance to the winner neuron.</returns>
<remarks><para>The method runs one learning iteration - finds the winner neuron (the neuron
which has weights with values closest to the specified input vector) and updates its weights
(as well as weights of neighbor neurons) in the way to decrease difference with the specified
input vector.</para></remarks>
</member>
<member name="M:AForge.Neuro.Learning.ElasticNetworkLearning.RunEpoch(System.Double[][])">
<summary>
Runs learning epoch.
</summary>
<param name="input">Array of input vectors.</param>
<returns>Returns summary learning error for the epoch. See <see cref="M:AForge.Neuro.Learning.ElasticNetworkLearning.Run(System.Double[])"/>
method for details about learning error calculation.</returns>
<remarks><para>The method runs one learning epoch, by calling <see cref="M:AForge.Neuro.Learning.ElasticNetworkLearning.Run(System.Double[])"/> method
for each vector provided in the <paramref name="input"/> array.</para></remarks>
</member>
<member name="P:AForge.Neuro.Learning.ElasticNetworkLearning.LearningRate">
<summary>
Learning rate, [0, 1].
</summary>
<remarks><para>Determines speed of learning.</para>
<para>Default value equals to <b>0.1</b>.</para>
</remarks>
</member>
<member name="P:AForge.Neuro.Learning.ElasticNetworkLearning.LearningRadius">
<summary>
Learning radius, [0, 1].
</summary>
<remarks><para>Determines the number of neurons to be updated around
the winner neuron. Neurons, which are in the circle of specified radius,
are updated during the learning procedure. Neurons, which are closer
to the winner neuron, get more update.</para>
<para>Default value equals to <b>0.5</b>.</para>
</remarks>
</member>
<member name="T:AForge.Neuro.Neuron">
<summary>
Base neuron class.
</summary>
<remarks>This is a base neuron class, which encapsulates such
common properties, like neuron's input, output and weights.</remarks>
</member>
<member name="F:AForge.Neuro.Neuron.inputsCount">
<summary>
Neuron's inputs count.
</summary>
</member>
<member name="F:AForge.Neuro.Neuron.weights">
<summary>
Neuron's weights.
</summary>
</member>
<member name="F:AForge.Neuro.Neuron.output">
<summary>
Neuron's output value.
</summary>
</member>
<member name="F:AForge.Neuro.Neuron.rand">
<summary>
Random number generator.
</summary>
<remarks>The generator is used for neuron's weights randomization.</remarks>
</member>
<member name="F:AForge.Neuro.Neuron.randRange">
<summary>
Random generator range.
</summary>
<remarks>Sets the range of random generator. Affects initial values of neuron's weights.
Default value is [0, 1].</remarks>
</member>
<member name="M:AForge.Neuro.Neuron.#ctor(System.Int32)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.Neuron"/> class.
</summary>
<param name="inputs">Neuron's inputs count.</param>
<remarks>The new neuron will be randomized (see <see cref="M:AForge.Neuro.Neuron.Randomize"/> method)
after it is created.</remarks>
</member>
<member name="M:AForge.Neuro.Neuron.Randomize">
<summary>
Randomize neuron.
</summary>
<remarks>Initialize neuron's weights with random values within the range specified
by <see cref="P:AForge.Neuro.Neuron.RandRange"/>.</remarks>
</member>
<member name="M:AForge.Neuro.Neuron.Compute(System.Double[])">
<summary>
Computes output value of neuron.
</summary>
<param name="input">Input vector.</param>
<returns>Returns neuron's output value.</returns>
<remarks>The actual neuron's output value is determined by inherited class.
The output value is also stored in <see cref="P:AForge.Neuro.Neuron.Output"/> property.</remarks>
</member>
<member name="P:AForge.Neuro.Neuron.RandGenerator">
<summary>
Random number generator.
</summary>
<remarks>The property allows to initialize random generator with a custom seed. The generator is
used for neuron's weights randomization.</remarks>
</member>
<member name="P:AForge.Neuro.Neuron.RandRange">
<summary>
Random generator range.
</summary>
<remarks>Sets the range of random generator. Affects initial values of neuron's weights.
Default value is [0, 1].</remarks>
</member>
<member name="P:AForge.Neuro.Neuron.InputsCount">
<summary>
Neuron's inputs count.
</summary>
</member>
<member name="P:AForge.Neuro.Neuron.Output">
<summary>
Neuron's output value.
</summary>
<remarks>The calculation way of neuron's output value is determined by inherited class.</remarks>
</member>
<member name="P:AForge.Neuro.Neuron.Weights">
<summary>
Neuron's weights.
</summary>
</member>
<member name="T:AForge.Neuro.ActivationLayer">
<summary>
Activation layer.
</summary>
<remarks>Activation layer is a layer of <see cref="T:AForge.Neuro.ActivationNeuron">activation neurons</see>.
The layer is usually used in multi-layer neural networks.</remarks>
</member>
<member name="M:AForge.Neuro.ActivationLayer.#ctor(System.Int32,System.Int32,AForge.Neuro.IActivationFunction)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.ActivationLayer"/> class.
</summary>
<param name="neuronsCount">Layer's neurons count.</param>
<param name="inputsCount">Layer's inputs count.</param>
<param name="function">Activation function of neurons of the layer.</param>
<remarks>The new layer is randomized (see <see cref="M:AForge.Neuro.ActivationNeuron.Randomize"/>
method) after it is created.</remarks>
</member>
<member name="M:AForge.Neuro.ActivationLayer.SetActivationFunction(AForge.Neuro.IActivationFunction)">
<summary>
Set new activation function for all neurons of the layer.
</summary>
<param name="function">Activation function to set.</param>
<remarks><para>The method sets new activation function for each neuron by setting
their <see cref="P:AForge.Neuro.ActivationNeuron.ActivationFunction"/> property.</para></remarks>
</member>
<member name="T:AForge.Neuro.ActivationNeuron">
<summary>
Activation neuron.
</summary>
<remarks><para>Activation neuron computes weighted sum of its inputs, adds
threshold value and then applies <see cref="P:AForge.Neuro.ActivationNeuron.ActivationFunction">activation function</see>.
The neuron is usually used in multi-layer neural networks.</para></remarks>
<seealso cref="T:AForge.Neuro.IActivationFunction"/>
</member>
<member name="F:AForge.Neuro.ActivationNeuron.threshold">
<summary>
Threshold value.
</summary>
<remarks>The value is added to inputs weighted sum before it is passed to activation
function.</remarks>
</member>
<member name="F:AForge.Neuro.ActivationNeuron.function">
<summary>
Activation function.
</summary>
<remarks>The function is applied to inputs weighted sum plus
threshold value.</remarks>
</member>
<member name="M:AForge.Neuro.ActivationNeuron.#ctor(System.Int32,AForge.Neuro.IActivationFunction)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.ActivationNeuron"/> class.
</summary>
<param name="inputs">Neuron's inputs count.</param>
<param name="function">Neuron's activation function.</param>
</member>
<member name="M:AForge.Neuro.ActivationNeuron.Randomize">
<summary>
Randomize neuron.
</summary>
<remarks>Calls base class <see cref="M:AForge.Neuro.Neuron.Randomize">Randomize</see> method
to randomize neuron's weights and then randomizes threshold's value.</remarks>
</member>
<member name="M:AForge.Neuro.ActivationNeuron.Compute(System.Double[])">
<summary>
Computes output value of neuron.
</summary>
<param name="input">Input vector.</param>
<returns>Returns neuron's output value.</returns>
<remarks><para>The output value of activation neuron is equal to value
of neuron's activation function, whose parameter is the weighted sum
of its inputs plus threshold value. The output value is also stored
in <see cref="P:AForge.Neuro.Neuron.Output">Output</see> property.</para>
<para><note>The method may be called safely from multiple threads to compute neuron's
output value for the specified input values. However, the value of
<see cref="P:AForge.Neuro.Neuron.Output"/> property in multi-threaded environment is not predictable,
since it may hold neuron's output computed from any of the caller threads. Multi-threaded
access to the method is useful in those cases when it is required to improve performance
by utilizing several threads and the computation is based on the immediate return value
of the method, but not on neuron's output property.</note></para>
</remarks>
<exception cref="T:System.ArgumentException">Wrong length of the input vector, which is not
equal to the <see cref="P:AForge.Neuro.Neuron.InputsCount">expected value</see>.</exception>
</member>
<member name="P:AForge.Neuro.ActivationNeuron.Threshold">
<summary>
Threshold value.
</summary>
<remarks>The value is added to inputs weighted sum before it is passed to activation
function.</remarks>
</member>
<member name="P:AForge.Neuro.ActivationNeuron.ActivationFunction">
<summary>
Neuron's activation function.
</summary>
</member>
<member name="T:AForge.Neuro.ActivationNetwork">
<summary>
Activation network.
</summary>
<remarks><para>Activation network is a base for multi-layer neural network
with activation functions. It consists of <see cref="T:AForge.Neuro.ActivationLayer">activation
layers</see>.</para>
<para>Sample usage:</para>
<code>
// create activation network
ActivationNetwork network = new ActivationNetwork(
new SigmoidFunction( ), // sigmoid activation function
3, // 3 inputs
4, 1 ); // 2 layers:
// 4 neurons in the first layer
// 1 neuron in the second layer
</code>
</remarks>
</member>
<member name="M:AForge.Neuro.ActivationNetwork.#ctor(AForge.Neuro.IActivationFunction,System.Int32,System.Int32[])">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.ActivationNetwork"/> class.
</summary>
<param name="function">Activation function of neurons of the network.</param>
<param name="inputsCount">Network's inputs count.</param>
<param name="neuronsCount">Array, which specifies the amount of neurons in
each layer of the neural network.</param>
<remarks>The new network is randomized (see <see cref="M:AForge.Neuro.ActivationNeuron.Randomize"/>
method) after it is created.</remarks>
</member>
<member name="M:AForge.Neuro.ActivationNetwork.SetActivationFunction(AForge.Neuro.IActivationFunction)">
<summary>
Set new activation function for all neurons of the network.
</summary>
<param name="function">Activation function to set.</param>
<remarks><para>The method sets new activation function for all neurons by calling
<see cref="M:AForge.Neuro.ActivationLayer.SetActivationFunction(AForge.Neuro.IActivationFunction)"/> method for each layer of the network.</para></remarks>
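<example><para>A minimal sketch (assuming an already created network instance):</para>
<code>
// replace activation function of all neurons in the network
network.SetActivationFunction( new BipolarSigmoidFunction( 2 ) );
</code>
</example>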
</member>
<member name="T:AForge.Neuro.Learning.EvolutionaryLearning">
<summary>
Neural networks' evolutionary learning algorithm, which is based on Genetic Algorithms.
</summary>
<remarks><para>The class implements supervised neural network's learning algorithm,
which is based on Genetic Algorithms. For the given neural network, it creates a population
of <see cref="T:AForge.Genetic.DoubleArrayChromosome"/> chromosomes, which represent neural network's
weights. Then, during the learning process, the genetic population evolves and weights, which
are represented by the best chromosome, are set to the source neural network.</para>
<para>See <see cref="T:AForge.Genetic.Population"/> class for additional information about genetic population
and evolutionary based search.</para>
<para>Sample usage (training network to calculate XOR function):</para>
<code>
// initialize input and output values
double[][] input = new double[4][] {
new double[] {-1, -1}, new double[] {-1, 1},
new double[] { 1, -1}, new double[] { 1, 1}
};
double[][] output = new double[4][] {
new double[] {-1}, new double[] { 1},
new double[] { 1}, new double[] {-1}
};
// create neural network
ActivationNetwork network = new ActivationNetwork(
new BipolarSigmoidFunction( 2 ),
2, // two inputs in the network
2, // two neurons in the first layer
1 ); // one neuron in the second layer
// create teacher
EvolutionaryLearning teacher = new EvolutionaryLearning( network,
100 ); // number of chromosomes in genetic population
// loop
while ( !needToStop )
{
// run epoch of learning procedure
double error = teacher.RunEpoch( input, output );
// check error value to see if we need to stop
// ...
}
</code>
</remarks>
<seealso cref="T:AForge.Neuro.Learning.BackPropagationLearning"/>
</member>
<member name="M:AForge.Neuro.Learning.EvolutionaryLearning.#ctor(AForge.Neuro.ActivationNetwork,System.Int32,AForge.Math.Random.IRandomNumberGenerator,AForge.Math.Random.IRandomNumberGenerator,AForge.Math.Random.IRandomNumberGenerator,AForge.Genetic.ISelectionMethod,System.Double,System.Double,System.Double)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.EvolutionaryLearning"/> class.
</summary>
<param name="activationNetwork">Activation network to be trained.</param>
<param name="populationSize">Size of genetic population.</param>
<param name="chromosomeGenerator">Random numbers generator used for initialization of genetic
population representing neural network's weights and thresholds (see <see cref="F:AForge.Genetic.DoubleArrayChromosome.chromosomeGenerator"/>).</param>
<param name="mutationMultiplierGenerator">Random numbers generator used to generate random
factors for multiplication of network's weights and thresholds during genetic mutation
(see <see cref="F:AForge.Genetic.DoubleArrayChromosome.mutationMultiplierGenerator"/>).</param>
<param name="mutationAdditionGenerator">Random numbers generator used to generate random
values added to neural network's weights and thresholds during genetic mutation
(see <see cref="F:AForge.Genetic.DoubleArrayChromosome.mutationAdditionGenerator"/>).</param>
<param name="selectionMethod">Method of selecting best chromosomes in genetic population.</param>
<param name="crossOverRate">Crossover rate in genetic population (see
<see cref="P:AForge.Genetic.Population.CrossoverRate"/>).</param>
<param name="mutationRate">Mutation rate in genetic population (see
<see cref="P:AForge.Genetic.Population.MutationRate"/>).</param>
<param name="randomSelectionRate">Rate of injection of random chromosomes during selection
in genetic population (see <see cref="P:AForge.Genetic.Population.RandomSelectionPortion"/>).</param>
</member>
<member name="M:AForge.Neuro.Learning.EvolutionaryLearning.#ctor(AForge.Neuro.ActivationNetwork,System.Int32)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.EvolutionaryLearning"/> class.
</summary>
<param name="activationNetwork">Activation network to be trained.</param>
<param name="populationSize">Size of genetic population.</param>
<remarks><para>This version of constructor is used to create genetic population
for searching optimal neural network's weights using default set of parameters, which are:
<list type="bullet">
<item>Selection method - elite;</item>
<item>Crossover rate - 0.75;</item>
<item>Mutation rate - 0.25;</item>
<item>Rate of injection of random chromosomes during selection - 0.20;</item>
<item>Random numbers generator for initializing new chromosome -
<c>UniformGenerator( new Range( -1, 1 ) )</c>;</item>
<item>Random numbers generator used during mutation for genes' multiplication -
<c>ExponentialGenerator( 1 )</c>;</item>
<item>Random numbers generator used during mutation for adding random value to genes -
<c>UniformGenerator( new Range( -0.5f, 0.5f ) )</c>.</item>
</list></para>
<para>In order to have full control over the above default parameters, it is possible to
use the extended version of constructor, which allows to specify all of the parameters.</para>
</remarks>
</member>
<member name="M:AForge.Neuro.Learning.EvolutionaryLearning.Run(System.Double[],System.Double[])">
<summary>
Runs learning iteration.
</summary>
<param name="input">Input vector.</param>
<param name="output">Desired output vector.</param>
<returns>Returns learning error.</returns>
<remarks><note>The method is not implemented, since evolutionary learning algorithm is global
and requires all inputs/outputs in order to run its one epoch. Use <see cref="M:AForge.Neuro.Learning.EvolutionaryLearning.RunEpoch(System.Double[][],System.Double[][])"/>
method instead.</note></remarks>
<exception cref="T:System.NotImplementedException">The method is not implemented by design.</exception>
</member>
<member name="M:AForge.Neuro.Learning.EvolutionaryLearning.RunEpoch(System.Double[][],System.Double[][])">
<summary>
Runs learning epoch.
</summary>
<param name="input">Array of input vectors.</param>
<param name="output">Array of output vectors.</param>
<returns>Returns summary squared learning error for the entire epoch.</returns>
<remarks><para><note>While running the neural network's learning process, it is required to
pass the same <paramref name="input"/> and <paramref name="output"/> values for each
epoch. On the very first run of the method it will initialize evolutionary fitness
function with the given input/output. So, changing input/output in the middle of the learning
process will break it.</note></para></remarks>
</member>
<member name="T:AForge.Neuro.DistanceLayer">
<summary>
Distance layer.
</summary>
<remarks>Distance layer is a layer of <see cref="T:AForge.Neuro.DistanceNeuron">distance neurons</see>.
The layer is usually a single layer of such networks as Kohonen Self
Organizing Map, Elastic Net, Hamming Memory Net.</remarks>
</member>
<member name="M:AForge.Neuro.DistanceLayer.#ctor(System.Int32,System.Int32)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.DistanceLayer"/> class.
</summary>
<param name="neuronsCount">Layer's neurons count.</param>
<param name="inputsCount">Layer's inputs count.</param>
<remarks>The new layer is randomized (see <see cref="M:AForge.Neuro.Neuron.Randomize"/>
method) after it is created.</remarks>
</member>
<member name="T:AForge.Neuro.Learning.ResilientBackpropagationLearning">
<summary>
Resilient Backpropagation learning algorithm.
</summary>
<remarks><para>This class implements the resilient backpropagation (RProp)
learning algorithm. RProp is one of the fastest learning
algorithms for feed-forward neural networks which use only first-order
information.</para>
<para>Sample usage (training network to calculate XOR function):</para>
<code>
// initialize input and output values
double[][] input = new double[4][] {
    new double[] {0, 0}, new double[] {0, 1},
    new double[] {1, 0}, new double[] {1, 1}
};
double[][] output = new double[4][] {
    new double[] {0}, new double[] {1},
    new double[] {1}, new double[] {0}
};
// create neural network
ActivationNetwork network = new ActivationNetwork(
    new SigmoidFunction( 2 ),
    2, // two inputs in the network
    2, // two neurons in the first layer
    1 ); // one neuron in the second layer
// create teacher
ResilientBackpropagationLearning teacher = new ResilientBackpropagationLearning( network );
// loop
while ( !needToStop )
{
    // run epoch of learning procedure
    double error = teacher.RunEpoch( input, output );
    // check error value to see if we need to stop
    // ...
}
</code>
</remarks>
</member>
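The core idea of RProp is that each weight keeps its own adaptive step size, which grows while the gradient keeps its sign and shrinks when the sign flips. A minimal Python sketch of this per-weight rule (illustrative only; the increase/decrease factors 1.2 and 0.5 and the step bounds follow the common RProp formulation, not necessarily this class's internals):

```python
def rprop_step(grad, prev_grad, step, step_min=1e-6, step_max=50.0,
               eta_plus=1.2, eta_minus=0.5):
    """One RProp update for a single weight: adapt the step size from the
    sign of the current and previous gradient, then return the weight
    delta, the gradient to remember, and the new step size."""
    if grad * prev_grad > 0:        # same sign: accelerate
        step = min(step * eta_plus, step_max)
    elif grad * prev_grad < 0:      # sign change: we overshot, slow down
        step = max(step * eta_minus, step_min)
        grad = 0.0                  # skip adaptation on the next call
    # move against the gradient by the adapted step
    delta = -step if grad > 0 else (step if grad < 0 else 0.0)
    return delta, grad, step

# toy usage: the gradient kept its sign, so the step grows by eta_plus
delta, g, s = rprop_step(grad=0.3, prev_grad=0.1, step=0.0125)
```

Note how only the *sign* of the gradient is used to move the weight; the magnitude of the move comes entirely from the adapted step size, which is what makes RProp robust to badly scaled gradients.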
<member name="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.#ctor(AForge.Neuro.ActivationNetwork)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.ResilientBackpropagationLearning"/> class.
</summary>
<param name="network">Network to teach.</param>
</member>
<member name="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.Run(System.Double[],System.Double[])">
<summary>
Runs learning iteration.
</summary>
<param name="input">Input vector.</param>
<param name="output">Desired output vector.</param>
<returns>Returns squared error (difference between current network's output and
desired output) divided by 2.</returns>
<remarks><para>Runs one learning iteration and updates neurons'
weights.</para></remarks>
</member>
<member name="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.RunEpoch(System.Double[][],System.Double[][])">
<summary>
Runs learning epoch.
</summary>
<param name="input">Array of input vectors.</param>
<param name="output">Array of output vectors.</param>
<returns>Returns summary learning error for the epoch. See <see cref="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.Run(System.Double[],System.Double[])"/>
method for details about learning error calculation.</returns>
<remarks><para>The method runs one learning epoch, by calling <see cref="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.Run(System.Double[],System.Double[])"/> method
for each vector provided in the <paramref name="input"/> array.</para></remarks>
</member>
<member name="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.ResetGradient">
<summary>
Resets current weight and threshold derivatives.
</summary>
</member>
<member name="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.ResetUpdates(System.Double)">
<summary>
Resets the current update steps using the given learning rate.
</summary>
</member>
<member name="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.UpdateNetwork">
<summary>
Updates network's weights.
</summary>
</member>
<member name="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.CalculateError(System.Double[])">
<summary>
Calculates error values for all neurons of the network.
</summary>
<param name="desiredOutput">Desired output vector.</param>
<returns>Returns summary squared error of the last layer divided by 2.</returns>
</member>
<member name="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.CalculateGradient(System.Double[])">
<summary>
Calculates weight updates.
</summary>
<param name="input">Network's input vector.</param>
</member>
<member name="P:AForge.Neuro.Learning.ResilientBackpropagationLearning.LearningRate">
<summary>
Learning rate.
</summary>
<remarks><para>The value determines speed of learning.</para>
<para>The default value is <b>0.0125</b>.</para>
</remarks>
</member>
<member name="T:AForge.Neuro.DistanceNeuron">
<summary>
Distance neuron.
</summary>
<remarks><para>A distance neuron computes its output as the distance between
its weights and inputs - the sum of absolute differences between each weight's
value and the corresponding input's value. The neuron is usually used in the Kohonen
Self Organizing Map.</para></remarks>
</member>
<member name="M:AForge.Neuro.DistanceNeuron.#ctor(System.Int32)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.DistanceNeuron"/> class.
</summary>
<param name="inputs">Neuron's inputs count.</param>
</member>
<member name="M:AForge.Neuro.DistanceNeuron.Compute(System.Double[])">
<summary>
Computes output value of neuron.
</summary>
<param name="input">Input vector.</param>
<returns>Returns neuron's output value.</returns>
<remarks><para>The output value of a distance neuron equals the distance
between its weights and inputs - the sum of absolute differences.
The output value is also stored in the <see cref="P:AForge.Neuro.Neuron.Output">Output</see>
property.</para>
<para><note>The method may be called safely from multiple threads to compute the neuron's
output value for the specified input values. However, the value of the
<see cref="P:AForge.Neuro.Neuron.Output"/> property in a multi-threaded environment is not predictable,
since it may hold the output computed by any of the calling threads. Multi-threaded
access to the method is useful when it is required to improve performance
by utilizing several threads and the computation is based on the immediate return value
of the method, not on the neuron's output property.</note></para>
</remarks>
<exception cref="T:System.ArgumentException">Wrong length of the input vector, which is not
equal to the <see cref="P:AForge.Neuro.Neuron.InputsCount">expected value</see>.</exception>
</member>
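The sum-of-absolute-differences output described above is simply the Manhattan (L1) distance between the neuron's weight vector and the input vector. A minimal Python sketch of that computation (an illustration only, not this class's actual code; the function name is hypothetical):

```python
def distance_neuron_output(weights, inputs):
    """Output of a distance neuron: the sum of absolute differences
    between each weight and the corresponding input (L1 distance)."""
    if len(weights) != len(inputs):
        # mirrors the ArgumentException for a wrong-length input vector
        raise ValueError("input length must match the neuron's inputs count")
    return sum(abs(w - x) for w, x in zip(weights, inputs))

# a neuron whose weights exactly match the input outputs 0 - the best match,
# which is why the winning neuron in a SOM is the one with the SMALLEST output
print(distance_neuron_output([0.5, 0.2], [0.5, 0.2]))  # -> 0.0
```

Note the inversion relative to activation neurons: here a smaller output means a better match, so competitive layers built from distance neurons select the neuron with the minimum output rather than the maximum.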
</members>
</doc>