<?xml version="1.0"?>
<doc>
<assembly>
<name>AForge.Neuro</name>
</assembly>
<members>
<member name="T:AForge.Neuro.BipolarSigmoidFunction">
<summary>
Bipolar sigmoid activation function.
</summary>
<remarks><para>The class represents bipolar sigmoid activation function with
the following expression:
<code lang="none">
                  2
f(x) = ------------------- - 1
       1 + exp(-alpha * x)

         2 * alpha * exp(-alpha * x)
f'(x) = ----------------------------- = alpha * (1 - f(x)^2) / 2
          (1 + exp(-alpha * x))^2
</code>
</para>
<para>Output range of the function: <b>[-1, 1]</b>.</para>
<para>Function's graph:</para>
<img src="img/neuro/sigmoid_bipolar.bmp" width="242" height="172" />
</remarks>
</member>
<member name="P:AForge.Neuro.BipolarSigmoidFunction.Alpha">
<summary>
Sigmoid's alpha value.
</summary>
<remarks><para>The value determines steepness of the function. Increasing the value
makes the sigmoid look more like a threshold function; decreasing it makes the
sigmoid smoother, so it grows slowly from its minimum value to its maximum value.</para>
<para>Default value is set to <b>2</b>.</para>
</remarks>
</member>
<member name="M:AForge.Neuro.BipolarSigmoidFunction.#ctor">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.BipolarSigmoidFunction"/> class.
</summary>
</member>
<member name="M:AForge.Neuro.BipolarSigmoidFunction.#ctor(System.Double)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.BipolarSigmoidFunction"/> class.
</summary>
<param name="alpha">Sigmoid's alpha value.</param>
</member>
<member name="M:AForge.Neuro.BipolarSigmoidFunction.Function(System.Double)">
<summary>
Calculates function value.
</summary>
<param name="x">Function input value.</param>
<returns>Function output value, <i>f(x)</i>.</returns>
<remarks>The method calculates function value at point <paramref name="x"/>.</remarks>
</member>
<member name="M:AForge.Neuro.BipolarSigmoidFunction.Derivative(System.Double)">
<summary>
Calculates function derivative.
</summary>
<param name="x">Function input value.</param>
<returns>Function derivative, <i>f'(x)</i>.</returns>
<remarks>The method calculates function derivative at point <paramref name="x"/>.</remarks>
</member>
<member name="M:AForge.Neuro.BipolarSigmoidFunction.Derivative2(System.Double)">
<summary>
Calculates function derivative.
</summary>
<param name="y">Function output value - the value, which was obtained
with the help of <see cref="M:AForge.Neuro.BipolarSigmoidFunction.Function(System.Double)"/> method.</param>
<returns>Function derivative, <i>f'(x)</i>.</returns>
<remarks><para>The method calculates the same derivative value as the
<see cref="M:AForge.Neuro.BipolarSigmoidFunction.Derivative(System.Double)"/> method, but instead of the input value <b>x</b>
it takes the function value, which was calculated previously with
the help of the <see cref="M:AForge.Neuro.BipolarSigmoidFunction.Function(System.Double)"/> method.</para>
<para><note>Some applications require both the function value and its derivative,
so they can reduce the amount of calculations by using this method to calculate the derivative
from the already computed function value.</note></para>
</remarks>
</member>
<member name="M:AForge.Neuro.BipolarSigmoidFunction.Clone">
<summary>
Creates a new object that is a copy of the current instance.
</summary>
<returns>
A new object that is a copy of this instance.
</returns>
</member>
<member name="T:AForge.Neuro.IActivationFunction">
<summary>
Activation function interface.
</summary>
<remarks>All activation functions, which are supposed to be used with
neurons that calculate their output as a function of the weighted sum of
their inputs, should implement this interface.
</remarks>
</member>
<member name="M:AForge.Neuro.IActivationFunction.Function(System.Double)">
<summary>
Calculates function value.
</summary>
<param name="x">Function input value.</param>
<returns>Function output value, <i>f(x)</i>.</returns>
<remarks>The method calculates function value at point <paramref name="x"/>.</remarks>
</member>
<member name="M:AForge.Neuro.IActivationFunction.Derivative(System.Double)">
<summary>
Calculates function derivative.
</summary>
<param name="x">Function input value.</param>
<returns>Function derivative, <i>f'(x)</i>.</returns>
<remarks>The method calculates function derivative at point <paramref name="x"/>.</remarks>
</member>
<member name="M:AForge.Neuro.IActivationFunction.Derivative2(System.Double)">
<summary>
Calculates function derivative.
</summary>
<param name="y">Function output value - the value, which was obtained
with the help of <see cref="M:AForge.Neuro.IActivationFunction.Function(System.Double)"/> method.</param>
<returns>Function derivative, <i>f'(x)</i>.</returns>
<remarks><para>The method calculates the same derivative value as the
<see cref="M:AForge.Neuro.IActivationFunction.Derivative(System.Double)"/> method, but instead of the input value <b>x</b>
it takes the function value, which was calculated previously with
the help of the <see cref="M:AForge.Neuro.IActivationFunction.Function(System.Double)"/> method.</para>
<para><note>Some applications require both the function value and its derivative,
so they can reduce the amount of calculations by using this method to calculate the derivative
from the already computed function value.</note></para>
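<para>For example, the following sketch (with illustrative variable names; any
<see cref="T:AForge.Neuro.IActivationFunction"/> implementation could be used) shows how the already
computed function value can be reused to get the derivative without evaluating the function twice:</para>
<code>
// activation function to demonstrate the optimization
IActivationFunction f = new SigmoidFunction( 2 );
double x = 0.5;
// calculate function value once ...
double y = f.Function( x );
// ... and reuse it to get the derivative at the same point
double d = f.Derivative2( y );
</code>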
</remarks>
</member>
<member name="T:AForge.Neuro.SigmoidFunction">
<summary>
Sigmoid activation function.
</summary>
<remarks><para>The class represents sigmoid activation function with
the following expression:
<code lang="none">
                  1
f(x) = -------------------
       1 + exp(-alpha * x)

         alpha * exp(-alpha * x)
f'(x) = ------------------------- = alpha * f(x) * (1 - f(x))
         (1 + exp(-alpha * x))^2
</code>
</para>
<para>Output range of the function: <b>[0, 1]</b>.</para>
<para>Function's graph:</para>
<img src="img/neuro/sigmoid.bmp" width="242" height="172" />
</remarks>
</member>
<member name="P:AForge.Neuro.SigmoidFunction.Alpha">
<summary>
Sigmoid's alpha value.
</summary>
<remarks><para>The value determines steepness of the function. Increasing the value
makes the sigmoid look more like a threshold function; decreasing it makes the
sigmoid smoother, so it grows slowly from its minimum value to its maximum value.</para>
<para>Default value is set to <b>2</b>.</para>
</remarks>
</member>
<member name="M:AForge.Neuro.SigmoidFunction.#ctor">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.SigmoidFunction"/> class.
</summary>
</member>
<member name="M:AForge.Neuro.SigmoidFunction.#ctor(System.Double)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.SigmoidFunction"/> class.
</summary>
<param name="alpha">Sigmoid's alpha value.</param>
</member>
<member name="M:AForge.Neuro.SigmoidFunction.Function(System.Double)">
<summary>
Calculates function value.
</summary>
<param name="x">Function input value.</param>
<returns>Function output value, <i>f(x)</i>.</returns>
<remarks>The method calculates function value at point <paramref name="x"/>.</remarks>
</member>
<member name="M:AForge.Neuro.SigmoidFunction.Derivative(System.Double)">
<summary>
Calculates function derivative.
</summary>
<param name="x">Function input value.</param>
<returns>Function derivative, <i>f'(x)</i>.</returns>
<remarks>The method calculates function derivative at point <paramref name="x"/>.</remarks>
</member>
<member name="M:AForge.Neuro.SigmoidFunction.Derivative2(System.Double)">
<summary>
Calculates function derivative.
</summary>
<param name="y">Function output value - the value, which was obtained
with the help of <see cref="M:AForge.Neuro.SigmoidFunction.Function(System.Double)"/> method.</param>
<returns>Function derivative, <i>f'(x)</i>.</returns>
<remarks><para>The method calculates the same derivative value as the
<see cref="M:AForge.Neuro.SigmoidFunction.Derivative(System.Double)"/> method, but instead of the input value <b>x</b>
it takes the function value, which was calculated previously with
the help of the <see cref="M:AForge.Neuro.SigmoidFunction.Function(System.Double)"/> method.</para>
<para><note>Some applications require both the function value and its derivative,
so they can reduce the amount of calculations by using this method to calculate the derivative
from the already computed function value.</note></para>
</remarks>
</member>
<member name="M:AForge.Neuro.SigmoidFunction.Clone">
<summary>
Creates a new object that is a copy of the current instance.
</summary>
<returns>
A new object that is a copy of this instance.
</returns>
</member>
<member name="T:AForge.Neuro.ThresholdFunction">
<summary>
Threshold activation function.
</summary>
<remarks><para>The class represents threshold activation function with
the following expression:
<code lang="none">
f(x) = 1, if x >= 0, otherwise 0
</code>
</para>
<para>Output range of the function: <b>{0, 1}</b>.</para>
<para>Function's graph:</para>
<img src="img/neuro/threshold.bmp" width="242" height="172" />
</remarks>
</member>
<member name="M:AForge.Neuro.ThresholdFunction.#ctor">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.ThresholdFunction"/> class.
</summary>
</member>
<member name="M:AForge.Neuro.ThresholdFunction.Function(System.Double)">
<summary>
Calculates function value.
</summary>
<param name="x">Function input value.</param>
<returns>Function output value, <i>f(x)</i>.</returns>
<remarks>The method calculates function value at point <paramref name="x"/>.</remarks>
</member>
<member name="M:AForge.Neuro.ThresholdFunction.Derivative(System.Double)">
<summary>
Calculates function derivative (not supported).
</summary>
<param name="x">Input value.</param>
<returns>Always returns 0.</returns>
<remarks><para><note>The method is not supported, because it is not possible to
calculate the derivative of the threshold function.</note></para></remarks>
</member>
<member name="M:AForge.Neuro.ThresholdFunction.Derivative2(System.Double)">
<summary>
Calculates function derivative (not supported).
</summary>
<param name="y">Input value.</param>
<returns>Always returns 0.</returns>
<remarks><para><note>The method is not supported, because it is not possible to
calculate the derivative of the threshold function.</note></para></remarks>
</member>
<member name="M:AForge.Neuro.ThresholdFunction.Clone">
<summary>
Creates a new object that is a copy of the current instance.
</summary>
<returns>
A new object that is a copy of this instance.
</returns>
</member>
<member name="T:AForge.Neuro.ActivationLayer">
<summary>
Activation layer.
</summary>
<remarks>Activation layer is a layer of <see cref="T:AForge.Neuro.ActivationNeuron">activation neurons</see>.
The layer is usually used in multi-layer neural networks.</remarks>
</member>
<member name="M:AForge.Neuro.ActivationLayer.#ctor(System.Int32,System.Int32,AForge.Neuro.IActivationFunction)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.ActivationLayer"/> class.
</summary>
<param name="neuronsCount">Layer's neurons count.</param>
<param name="inputsCount">Layer's inputs count.</param>
<param name="function">Activation function of neurons of the layer.</param>
<remarks>The new layer is randomized (see <see cref="M:AForge.Neuro.ActivationNeuron.Randomize"/>
method) after it is created.</remarks>
</member>
<member name="M:AForge.Neuro.ActivationLayer.SetActivationFunction(AForge.Neuro.IActivationFunction)">
<summary>
Set new activation function for all neurons of the layer.
</summary>
<param name="function">Activation function to set.</param>
<remarks><para>The method sets new activation function for each neuron by setting
their <see cref="P:AForge.Neuro.ActivationNeuron.ActivationFunction"/> property.</para></remarks>
</member>
<member name="T:AForge.Neuro.DistanceLayer">
<summary>
Distance layer.
</summary>
<remarks>Distance layer is a layer of <see cref="T:AForge.Neuro.DistanceNeuron">distance neurons</see>.
The layer is usually a single layer of such networks as Kohonen Self
Organizing Map, Elastic Net, Hamming Memory Net.</remarks>
</member>
<member name="M:AForge.Neuro.DistanceLayer.#ctor(System.Int32,System.Int32)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.DistanceLayer"/> class.
</summary>
<param name="neuronsCount">Layer's neurons count.</param>
<param name="inputsCount">Layer's inputs count.</param>
<remarks>The new layer is randomized (see <see cref="M:AForge.Neuro.Neuron.Randomize"/>
method) after it is created.</remarks>
</member>
<member name="T:AForge.Neuro.Layer">
<summary>
Base neural layer class.
</summary>
<remarks>This is a base neural layer class, which represents a
collection of neurons.</remarks>
</member>
<member name="F:AForge.Neuro.Layer.inputsCount">
<summary>
Layer's inputs count.
</summary>
</member>
<member name="F:AForge.Neuro.Layer.neuronsCount">
<summary>
Layer's neurons count.
</summary>
</member>
<member name="F:AForge.Neuro.Layer.neurons">
<summary>
Layer's neurons.
</summary>
</member>
<member name="F:AForge.Neuro.Layer.output">
<summary>
Layer's output vector.
</summary>
</member>
<member name="P:AForge.Neuro.Layer.InputsCount">
<summary>
Layer's inputs count.
</summary>
</member>
<member name="P:AForge.Neuro.Layer.Neurons">
<summary>
Layer's neurons.
</summary>
</member>
<member name="P:AForge.Neuro.Layer.Output">
<summary>
Layer's output vector.
</summary>
<remarks><para>The way the layer's output vector is calculated is determined by the neurons,
which comprise the layer.</para>
<para><note>The property is not initialized (equals to <see langword="null"/>) until
<see cref="M:AForge.Neuro.Layer.Compute(System.Double[])"/> method is called.</note></para>
</remarks>
</member>
<member name="M:AForge.Neuro.Layer.#ctor(System.Int32,System.Int32)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.Layer"/> class.
</summary>
<param name="neuronsCount">Layer's neurons count.</param>
<param name="inputsCount">Layer's inputs count.</param>
<remarks>Protected constructor, which initializes <see cref="F:AForge.Neuro.Layer.inputsCount"/>,
<see cref="F:AForge.Neuro.Layer.neuronsCount"/> and <see cref="F:AForge.Neuro.Layer.neurons"/> members.</remarks>
</member>
<member name="M:AForge.Neuro.Layer.Compute(System.Double[])">
<summary>
Compute output vector of the layer.
</summary>
<param name="input">Input vector.</param>
<returns>Returns layer's output vector.</returns>
<remarks><para>The actual layer's output vector is determined by the neurons,
which comprise the layer - it consists of the output values of the layer's neurons.
The output vector is also stored in <see cref="P:AForge.Neuro.Layer.Output"/> property.</para>
<para><note>The method may be called safely from multiple threads to compute layer's
output value for the specified input values. However, the value of the
<see cref="P:AForge.Neuro.Layer.Output"/> property in a multi-threaded environment is not predictable,
since it may hold the layer's output computed from any of the caller threads. Multi-threaded
access to the method is useful in those cases when it is required to improve performance
by utilizing several threads and the computation is based on the immediate return value
of the method, but not on the layer's output property.</note></para>
</remarks>
</member>
<member name="M:AForge.Neuro.Layer.Randomize">
<summary>
Randomize neurons of the layer.
</summary>
<remarks>Randomizes layer's neurons by calling <see cref="M:AForge.Neuro.Neuron.Randomize"/> method
of each neuron.</remarks>
</member>
<member name="T:AForge.Neuro.Learning.BackPropagationLearning">
<summary>
Back propagation learning algorithm.
</summary>
<remarks><para>The class implements back propagation learning algorithm,
which is widely used for training multi-layer neural networks with
continuous activation functions.</para>
<para>Sample usage (training network to calculate XOR function):</para>
<code>
// initialize input and output values
double[][] input = new double[4][] {
    new double[] {0, 0}, new double[] {0, 1},
    new double[] {1, 0}, new double[] {1, 1}
};
double[][] output = new double[4][] {
    new double[] {0}, new double[] {1},
    new double[] {1}, new double[] {0}
};
// create neural network
ActivationNetwork network = new ActivationNetwork(
    new SigmoidFunction( 2 ),
    2, // two inputs in the network
    2, // two neurons in the first layer
    1 ); // one neuron in the second layer
// create teacher
BackPropagationLearning teacher = new BackPropagationLearning( network );
// loop
while ( !needToStop )
{
    // run epoch of learning procedure
    double error = teacher.RunEpoch( input, output );
    // check error value to see if we need to stop
    // ...
}
</code>
</remarks>
<seealso cref="T:AForge.Neuro.Learning.EvolutionaryLearning"/>
</member>
<member name="P:AForge.Neuro.Learning.BackPropagationLearning.LearningRate">
<summary>
Learning rate, [0, 1].
</summary>
<remarks><para>The value determines speed of learning.</para>
<para>Default value equals to <b>0.1</b>.</para>
</remarks>
</member>
<member name="P:AForge.Neuro.Learning.BackPropagationLearning.Momentum">
<summary>
Momentum, [0, 1].
</summary>
<remarks><para>The value determines the portion of previous weight's update
to use on current iteration. Weight's update values are calculated on
each iteration depending on neuron's error. The momentum specifies how much
of the update to take from the previous iteration and how much
to take from the current iteration. If the value equals 0.1, for example,
then 0.1 of the previous update and 0.9 of the current update are used
to update weight's value.</para>
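<para>Under that description, the blending of the two updates can be written as
(a notational sketch of the rule stated above, not a quote of the implementation):</para>
<code lang="none">
update = momentum * previousUpdate + (1 - momentum) * currentUpdate
</code>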
<para>Default value equals to <b>0.0</b>.</para>
</remarks>
</member>
<member name="M:AForge.Neuro.Learning.BackPropagationLearning.#ctor(AForge.Neuro.ActivationNetwork)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.BackPropagationLearning"/> class.
</summary>
<param name="network">Network to teach.</param>
</member>
<member name="M:AForge.Neuro.Learning.BackPropagationLearning.Run(System.Double[],System.Double[])">
<summary>
Runs learning iteration.
</summary>
<param name="input">Input vector.</param>
<param name="output">Desired output vector.</param>
<returns>Returns squared error (difference between current network's output and
desired output) divided by 2.</returns>
<remarks><para>Runs one learning iteration and updates neuron's
weights.</para></remarks>
</member>
<member name="M:AForge.Neuro.Learning.BackPropagationLearning.RunEpoch(System.Double[][],System.Double[][])">
<summary>
Runs learning epoch.
</summary>
<param name="input">Array of input vectors.</param>
<param name="output">Array of output vectors.</param>
<returns>Returns summary learning error for the epoch. See <see cref="M:AForge.Neuro.Learning.BackPropagationLearning.Run(System.Double[],System.Double[])"/>
method for details about learning error calculation.</returns>
<remarks><para>The method runs one learning epoch, by calling <see cref="M:AForge.Neuro.Learning.BackPropagationLearning.Run(System.Double[],System.Double[])"/> method
for each vector provided in the <paramref name="input"/> array.</para></remarks>
</member>
<member name="M:AForge.Neuro.Learning.BackPropagationLearning.CalculateError(System.Double[])">
<summary>
Calculates error values for all neurons of the network.
</summary>
<param name="desiredOutput">Desired output vector.</param>
<returns>Returns summary squared error of the last layer divided by 2.</returns>
</member>
<member name="M:AForge.Neuro.Learning.BackPropagationLearning.CalculateUpdates(System.Double[])">
<summary>
Calculate weights updates.
</summary>
<param name="input">Network's input vector.</param>
</member>
<member name="M:AForge.Neuro.Learning.BackPropagationLearning.UpdateNetwork">
<summary>
Update network's weights.
</summary>
</member>
<member name="T:AForge.Neuro.Learning.DeltaRuleLearning">
<summary>
Delta rule learning algorithm.
</summary>
<remarks><para>This learning algorithm is used to train one layer neural
network of <see cref="T:AForge.Neuro.ActivationNeuron">Activation Neurons</see>
with continuous activation function, see <see cref="T:AForge.Neuro.SigmoidFunction"/>
for example.</para>
<para>See information about <a href="http://en.wikipedia.org/wiki/Delta_rule">delta rule</a>
learning algorithm.</para>
</remarks>
</member>
<member name="P:AForge.Neuro.Learning.DeltaRuleLearning.LearningRate">
<summary>
Learning rate, [0, 1].
</summary>
<remarks><para>The value determines speed of learning.</para>
<para>Default value equals to <b>0.1</b>.</para>
</remarks>
</member>
<member name="M:AForge.Neuro.Learning.DeltaRuleLearning.#ctor(AForge.Neuro.ActivationNetwork)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.DeltaRuleLearning"/> class.
</summary>
<param name="network">Network to teach.</param>
<exception cref="T:System.ArgumentException">Invalid neural network. It should have one layer only.</exception>
</member>
<member name="M:AForge.Neuro.Learning.DeltaRuleLearning.Run(System.Double[],System.Double[])">
<summary>
Runs learning iteration.
</summary>
<param name="input">Input vector.</param>
<param name="output">Desired output vector.</param>
<returns>Returns squared error (difference between current network's output and
desired output) divided by 2.</returns>
<remarks><para>Runs one learning iteration and updates neuron's
weights.</para></remarks>
</member>
<member name="M:AForge.Neuro.Learning.DeltaRuleLearning.RunEpoch(System.Double[][],System.Double[][])">
<summary>
Runs learning epoch.
</summary>
<param name="input">Array of input vectors.</param>
<param name="output">Array of output vectors.</param>
<returns>Returns summary learning error for the epoch. See <see cref="M:AForge.Neuro.Learning.DeltaRuleLearning.Run(System.Double[],System.Double[])"/>
method for details about learning error calculation.</returns>
<remarks><para>The method runs one learning epoch, by calling <see cref="M:AForge.Neuro.Learning.DeltaRuleLearning.Run(System.Double[],System.Double[])"/> method
for each vector provided in the <paramref name="input"/> array.</para></remarks>
</member>
<member name="T:AForge.Neuro.Learning.ElasticNetworkLearning">
<summary>
Elastic network learning algorithm.
</summary>
<remarks><para>This class implements the elastic network learning algorithm and
allows training <see cref="T:AForge.Neuro.DistanceNetwork">Distance Networks</see>.</para>
</remarks>
</member>
<member name="P:AForge.Neuro.Learning.ElasticNetworkLearning.LearningRate">
<summary>
Learning rate, [0, 1].
</summary>
<remarks><para>Determines speed of learning.</para>
<para>Default value equals to <b>0.1</b>.</para>
</remarks>
</member>
<member name="P:AForge.Neuro.Learning.ElasticNetworkLearning.LearningRadius">
<summary>
Learning radius, [0, 1].
</summary>
<remarks><para>Determines the amount of neurons to be updated around the
winner neuron. Neurons, which are in the circle of the specified radius,
are updated during the learning procedure. Neurons, which are closer
to the winner neuron, receive a larger update.</para>
<para>Default value equals to <b>0.5</b>.</para>
</remarks>
</member>
<member name="M:AForge.Neuro.Learning.ElasticNetworkLearning.#ctor(AForge.Neuro.DistanceNetwork)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.ElasticNetworkLearning"/> class.
</summary>
<param name="network">Neural network to train.</param>
</member>
<member name="M:AForge.Neuro.Learning.ElasticNetworkLearning.Run(System.Double[])">
<summary>
Runs learning iteration.
</summary>
<param name="input">Input vector.</param>
<returns>Returns learning error - summary absolute difference between neurons'
weights and the corresponding inputs. The difference is weighted according to each neuron's
distance to the winner neuron.</returns>
<remarks><para>The method runs one learning iteration - it finds the winner neuron (the neuron
which has weights with values closest to the specified input vector) and updates its weights
(as well as weights of neighbor neurons) in the way to decrease the difference with the specified
input vector.</para></remarks>
</member>
<member name="M:AForge.Neuro.Learning.ElasticNetworkLearning.RunEpoch(System.Double[][])">
<summary>
Runs learning epoch.
</summary>
<param name="input">Array of input vectors.</param>
<returns>Returns summary learning error for the epoch. See <see cref="M:AForge.Neuro.Learning.ElasticNetworkLearning.Run(System.Double[])"/>
method for details about learning error calculation.</returns>
<remarks><para>The method runs one learning epoch, by calling <see cref="M:AForge.Neuro.Learning.ElasticNetworkLearning.Run(System.Double[])"/> method
for each vector provided in the <paramref name="input"/> array.</para></remarks>
</member>
<member name="T:AForge.Neuro.Learning.EvolutionaryFitness">
<summary>
Fitness function used for chromosomes representing collection of neural network's weights.
</summary>
</member>
<member name="M:AForge.Neuro.Learning.EvolutionaryFitness.#ctor(AForge.Neuro.ActivationNetwork,System.Double[][],System.Double[][])">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.EvolutionaryFitness"/> class.
</summary>
<param name="network">Neural network for which fitness will be calculated.</param>
<param name="input">Input data samples for neural network.</param>
<param name="output">Output data samples for neural network (desired output).</param>
<exception cref="T:System.ArgumentException">Length of inputs and outputs arrays must be equal and greater than 0.</exception>
<exception cref="T:System.ArgumentException">Length of each input vector must be equal to neural network's inputs count.</exception>
</member>
<member name="M:AForge.Neuro.Learning.EvolutionaryFitness.Evaluate(AForge.Genetic.IChromosome)">
<summary>
Evaluates chromosome.
</summary>
<param name="chromosome">Chromosome to evaluate.</param>
<returns>Returns chromosome's fitness value.</returns>
<remarks>The method calculates fitness value of the specified
chromosome.</remarks>
</member>
<member name="T:AForge.Neuro.Learning.EvolutionaryLearning">
<summary>
Neural networks' evolutionary learning algorithm, which is based on Genetic Algorithms.
</summary>
<remarks><para>The class implements supervised neural network's learning algorithm,
which is based on Genetic Algorithms. For the given neural network, it creates a population
of <see cref="T:AForge.Genetic.DoubleArrayChromosome"/> chromosomes, which represent neural network's
weights. Then, during the learning process, the genetic population evolves and the weights, which
are represented by the best chromosome, are set to the source neural network.</para>
<para>See <see cref="T:AForge.Genetic.Population"/> class for additional information about genetic population
and evolutionary based search.</para>
<para>Sample usage (training network to calculate XOR function):</para>
<code>
// initialize input and output values
double[][] input = new double[4][] {
    new double[] {-1, -1}, new double[] {-1,  1},
    new double[] { 1, -1}, new double[] { 1,  1}
};
double[][] output = new double[4][] {
    new double[] {-1}, new double[] { 1},
    new double[] { 1}, new double[] {-1}
};
// create neural network
ActivationNetwork network = new ActivationNetwork(
    new BipolarSigmoidFunction( 2 ),
    2, // two inputs in the network
    2, // two neurons in the first layer
    1 ); // one neuron in the second layer
// create teacher
EvolutionaryLearning teacher = new EvolutionaryLearning( network,
    100 ); // number of chromosomes in genetic population
// loop
while ( !needToStop )
{
    // run epoch of learning procedure
    double error = teacher.RunEpoch( input, output );
    // check error value to see if we need to stop
    // ...
}
</code>
</remarks>
<seealso cref="T:AForge.Neuro.Learning.BackPropagationLearning"/>
</member>
<member name="M:AForge.Neuro.Learning.EvolutionaryLearning.#ctor(AForge.Neuro.ActivationNetwork,System.Int32,AForge.Math.Random.IRandomNumberGenerator,AForge.Math.Random.IRandomNumberGenerator,AForge.Math.Random.IRandomNumberGenerator,AForge.Genetic.ISelectionMethod,System.Double,System.Double,System.Double)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.EvolutionaryLearning"/> class.
</summary>
<param name="activationNetwork">Activation network to be trained.</param>
<param name="populationSize">Size of genetic population.</param>
<param name="chromosomeGenerator">Random numbers generator used for initialization of genetic
population representing neural network's weights and thresholds (see <see cref="F:AForge.Genetic.DoubleArrayChromosome.chromosomeGenerator"/>).</param>
<param name="mutationMultiplierGenerator">Random numbers generator used to generate random
factors for multiplication of network's weights and thresholds during genetic mutation
(see <see cref="F:AForge.Genetic.DoubleArrayChromosome.mutationMultiplierGenerator"/>).</param>
<param name="mutationAdditionGenerator">Random numbers generator used to generate random
values added to neural network's weights and thresholds during genetic mutation
(see <see cref="F:AForge.Genetic.DoubleArrayChromosome.mutationAdditionGenerator"/>).</param>
<param name="selectionMethod">Method of selecting the best chromosomes in genetic population.</param>
<param name="crossOverRate">Crossover rate in genetic population (see
<see cref="P:AForge.Genetic.Population.CrossoverRate"/>).</param>
<param name="mutationRate">Mutation rate in genetic population (see
<see cref="P:AForge.Genetic.Population.MutationRate"/>).</param>
<param name="randomSelectionRate">Rate of injection of random chromosomes during selection
in genetic population (see <see cref="P:AForge.Genetic.Population.RandomSelectionPortion"/>).</param>
</member>
<member name="M:AForge.Neuro.Learning.EvolutionaryLearning.#ctor(AForge.Neuro.ActivationNetwork,System.Int32)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.EvolutionaryLearning"/> class.
</summary>
<param name="activationNetwork">Activation network to be trained.</param>
<param name="populationSize">Size of genetic population.</param>
<remarks><para>This version of the constructor is used to create a genetic population
for searching optimal neural network's weights using the default set of parameters, which are:
<list type="bullet">
<item>Selection method - elite;</item>
<item>Crossover rate - 0.75;</item>
<item>Mutation rate - 0.25;</item>
<item>Rate of injection of random chromosomes during selection - 0.20;</item>
<item>Random numbers generator for initializing new chromosome -
<c>UniformGenerator( new Range( -1, 1 ) )</c>;</item>
<item>Random numbers generator used during mutation for genes' multiplication -
<c>ExponentialGenerator( 1 )</c>;</item>
<item>Random numbers generator used during mutation for adding random value to genes -
<c>UniformGenerator( new Range( -0.5f, 0.5f ) )</c>.</item>
</list></para>
<para>In order to have full control over the above default parameters, it is possible to
use the extended version of the constructor, which allows specifying all of the parameters, as sketched below.</para>
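<para>For reference, a call to the extended constructor with the defaults listed above
should look similar to the following (a sketch assembled from the list, not a quote of the
implementation; elite selection corresponds to the <c>EliteSelection</c> class of AForge.Genetic):</para>
<code>
EvolutionaryLearning teacher = new EvolutionaryLearning( network, 100,
    new UniformGenerator( new Range( -1, 1 ) ),        // chromosome generator
    new ExponentialGenerator( 1 ),                     // mutation multiplier generator
    new UniformGenerator( new Range( -0.5f, 0.5f ) ),  // mutation addition generator
    new EliteSelection( ),                             // selection method
    0.75,                                              // crossover rate
    0.25,                                              // mutation rate
    0.20 );                                            // random selection rate
</code>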
</remarks>
</member>
<member name="M:AForge.Neuro.Learning.EvolutionaryLearning.Run(System.Double[],System.Double[])">
<summary>
Runs learning iteration.
</summary>
<param name="input">Input vector.</param>
<param name="output">Desired output vector.</param>
<returns>Returns learning error.</returns>
<remarks><note>The method is not implemented, since the evolutionary learning algorithm is global
and requires all inputs/outputs in order to run one epoch. Use the <see cref="M:AForge.Neuro.Learning.EvolutionaryLearning.RunEpoch(System.Double[][],System.Double[][])"/>
method instead.</note></remarks>
<exception cref="T:System.NotImplementedException">The method is not implemented by design.</exception>
</member>
<member name="M:AForge.Neuro.Learning.EvolutionaryLearning.RunEpoch(System.Double[][],System.Double[][])">
<summary>
Runs learning epoch.
</summary>
<param name="input">Array of input vectors.</param>
<param name="output">Array of output vectors.</param>
<returns>Returns summary squared learning error for the entire epoch.</returns>
<remarks><para><note>While running the neural network's learning process, it is required to
pass the same <paramref name="input"/> and <paramref name="output"/> values for each
epoch. On the very first run the method initializes the evolutionary fitness
function with the given input/output, so changing the input/output in the middle of the learning
process will break it.</note></para></remarks>
</member>
<member name="T:AForge.Neuro.Learning.ISupervisedLearning">
<summary>
Supervised learning interface.
</summary>
<remarks><para>The interface describes methods, which should be implemented
by all supervised learning algorithms. Supervised learning is the
type of learning algorithms, where the system's desired output is known at
the learning stage. So, given sample input values and desired outputs, the
system should adapt its internals to produce correct (or close to correct)
results after the learning step is complete.</para></remarks>
</member>
<member name="M:AForge.Neuro.Learning.ISupervisedLearning.Run(System.Double[],System.Double[])">
<summary>
Runs learning iteration.
</summary>
<param name="input">Input vector.</param>
<param name="output">Desired output vector.</param>
<returns>Returns learning error.</returns>
</member>
<member name="M:AForge.Neuro.Learning.ISupervisedLearning.RunEpoch(System.Double[][],System.Double[][])">
<summary>
Runs learning epoch.
</summary>
<param name="input">Array of input vectors.</param>
<param name="output">Array of output vectors.</param>
<returns>Returns sum of learning errors.</returns>
</member>
<member name="T:AForge.Neuro.Learning.IUnsupervisedLearning">
<summary>
Unsupervised learning interface.
</summary>
<remarks><para>The interface describes methods, which should be implemented
by all unsupervised learning algorithms. Unsupervised learning is the
type of learning algorithms, where the system's desired output is not known at
the learning stage. Given sample input values, it is expected that the
system will organize itself in a way to find similarities between the provided
samples.</para></remarks>
</member>
<member name="M:AForge.Neuro.Learning.IUnsupervisedLearning.Run(System.Double[])">
<summary>
Runs learning iteration.
</summary>
<param name="input">Input vector.</param>
<returns>Returns learning error.</returns>
</member>
<member name="M:AForge.Neuro.Learning.IUnsupervisedLearning.RunEpoch(System.Double[][])">
<summary>
Runs learning epoch.
</summary>
<param name="input">Array of input vectors.</param>
<returns>Returns sum of learning errors.</returns>
</member>
<member name="T:AForge.Neuro.Learning.PerceptronLearning">
<summary>
Perceptron learning algorithm.
</summary>
<remarks><para>This learning algorithm is used to train one layer neural
network of <see cref="T:AForge.Neuro.ActivationNeuron">Activation Neurons</see>
with the <see cref="T:AForge.Neuro.ThresholdFunction">Threshold</see>
activation function.</para>
<para>See information about <a href="http://en.wikipedia.org/wiki/Perceptron">Perceptron</a>
and its learning algorithm.</para>
</remarks>
</member>
<member name="P:AForge.Neuro.Learning.PerceptronLearning.LearningRate">
<summary>
Learning rate, [0, 1].
</summary>
<remarks><para>The value determines speed of learning.</para>
<para>Default value equals to <b>0.1</b>.</para>
</remarks>
</member>
<member name="M:AForge.Neuro.Learning.PerceptronLearning.#ctor(AForge.Neuro.ActivationNetwork)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.PerceptronLearning"/> class.
</summary>
<param name="network">Network to teach.</param>
<exception cref="T:System.ArgumentException">Invalid neural network. It should have one layer only.</exception>
</member>
<member name="M:AForge.Neuro.Learning.PerceptronLearning.Run(System.Double[],System.Double[])">
<summary>
Runs learning iteration.
</summary>
<param name="input">Input vector.</param>
<param name="output">Desired output vector.</param>
<returns>Returns absolute error - difference between current network's output and
desired output.</returns>
<remarks><para>Runs one learning iteration and updates neuron's
weights in the case when the neuron's output is not equal to the
desired output.</para></remarks>
</member>
<member name="M:AForge.Neuro.Learning.PerceptronLearning.RunEpoch(System.Double[][],System.Double[][])">
<summary>
Runs learning epoch.
</summary>
<param name="input">Array of input vectors.</param>
<param name="output">Array of output vectors.</param>
<returns>Returns summary learning error for the epoch. See <see cref="M:AForge.Neuro.Learning.PerceptronLearning.Run(System.Double[],System.Double[])"/>
method for details about learning error calculation.</returns>
<remarks><para>The method runs one learning epoch, by calling <see cref="M:AForge.Neuro.Learning.PerceptronLearning.Run(System.Double[],System.Double[])"/> method
for each vector provided in the <paramref name="input"/> array.</para></remarks>
</member>
<member name="T:AForge.Neuro.Learning.ResilientBackpropagationLearning">
<summary>
Resilient Backpropagation learning algorithm.
</summary>
<remarks><para>This class implements the resilient backpropagation (RProp)
learning algorithm. The RProp learning algorithm is one of the fastest learning
algorithms for feed-forward neural networks which use only first-order
information.</para>
<para>Sample usage (training network to calculate XOR function):</para>
<code>
// initialize input and output values
double[][] input = new double[4][] {
    new double[] {0, 0}, new double[] {0, 1},
    new double[] {1, 0}, new double[] {1, 1}
};
double[][] output = new double[4][] {
    new double[] {0}, new double[] {1},
    new double[] {1}, new double[] {0}
};
// create neural network
ActivationNetwork network = new ActivationNetwork(
    new SigmoidFunction( 2 ),
    2, // two inputs in the network
    2, // two neurons in the first layer
    1 ); // one neuron in the second layer
// create teacher
ResilientBackpropagationLearning teacher = new ResilientBackpropagationLearning( network );
// loop
while ( !needToStop )
{
    // run epoch of learning procedure
    double error = teacher.RunEpoch( input, output );
    // check error value to see if we need to stop
    // ...
}
</code>
</remarks>
</member>
<member name="P:AForge.Neuro.Learning.ResilientBackpropagationLearning.LearningRate">
<summary>
Learning rate.
</summary>
<remarks><para>The value determines speed of learning.</para>
<para>Default value equals to <b>0.0125</b>.</para>
</remarks>
</member>
<member name="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.#ctor(AForge.Neuro.ActivationNetwork)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.ResilientBackpropagationLearning"/> class.
</summary>
<param name="network">Network to teach.</param>
</member>
<member name="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.Run(System.Double[],System.Double[])">
<summary>
Runs learning iteration.
</summary>
<param name="input">Input vector.</param>
<param name="output">Desired output vector.</param>
<returns>Returns squared error (difference between current network's output and
desired output) divided by 2.</returns>
<remarks><para>Runs one learning iteration and updates neuron's
weights.</para></remarks>
</member>
<member name="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.RunEpoch(System.Double[][],System.Double[][])">
<summary>
Runs learning epoch.
</summary>
<param name="input">Array of input vectors.</param>
<param name="output">Array of output vectors.</param>
<returns>Returns summary learning error for the epoch. See <see cref="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.Run(System.Double[],System.Double[])"/>
method for details about learning error calculation.</returns>
<remarks><para>The method runs one learning epoch, by calling <see cref="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.Run(System.Double[],System.Double[])"/> method
for each vector provided in the <paramref name="input"/> array.</para></remarks>
</member>
<member name="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.ResetGradient">
<summary>
Resets current weight and threshold derivatives.
</summary>
</member>
<member name="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.ResetUpdates(System.Double)">
<summary>
Resets the current update steps using the given learning rate.
</summary>
</member>
<member name="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.UpdateNetwork">
<summary>
Update network's weights.
</summary>
</member>
<member name="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.CalculateError(System.Double[])">
<summary>
Calculates error values for all neurons of the network.
</summary>
<param name="desiredOutput">Desired output vector.</param>
<returns>Returns summary squared error of the last layer divided by 2.</returns>
</member>
<member name="M:AForge.Neuro.Learning.ResilientBackpropagationLearning.CalculateGradient(System.Double[])">
<summary>
Calculate weights updates.
</summary>
<param name="input">Network's input vector.</param>
</member>
<member name="T:AForge.Neuro.Learning.SOMLearning">
<summary>
Kohonen Self Organizing Map (SOM) learning algorithm.
</summary>
<remarks><para>This class implements Kohonen's SOM learning algorithm, which
is widely used in clustering tasks. The class allows training
<see cref="T:AForge.Neuro.DistanceNetwork">Distance Networks</see>.</para>
<para>Sample usage (clustering RGB colors):</para>
<code>
// set range for randomization of neurons' weights
Neuron.RandRange = new Range( 0, 255 );
// create network
DistanceNetwork network = new DistanceNetwork(
    3, // three inputs in the network
    100 * 100 ); // 10000 neurons
// create learning algorithm
SOMLearning trainer = new SOMLearning( network );
// random number generator for network's inputs
Random rand = new Random( );
// network's input
double[] input = new double[3];
// loop
while ( !needToStop )
{
    input[0] = rand.Next( 256 );
    input[1] = rand.Next( 256 );
    input[2] = rand.Next( 256 );
    trainer.Run( input );
    // ...
    // update learning rate and radius continuously,
    // so the network may come to a steady state
}
</code>
</remarks>
</member>
<member name="P:AForge.Neuro.Learning.SOMLearning.LearningRate">
<summary>
Learning rate, [0, 1].
</summary>
<remarks><para>Determines speed of learning.</para>
<para>Default value equals to <b>0.1</b>.</para>
</remarks>
</member>
<member name="P:AForge.Neuro.Learning.SOMLearning.LearningRadius">
<summary>
Learning radius.
</summary>
<remarks><para>Determines the amount of neurons to be updated around the
winner neuron. Neurons, which are in the circle of the specified radius,
are updated during the learning procedure. Neurons, which are closer
to the winner neuron, receive a larger update.</para>
<para><note>In the case when the learning radius is set to 0, then only the winner
neuron's weights are updated.</note></para>
<para>Default value equals to <b>7</b>.</para>
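<para>As an illustration of the idea (a sketch of a common SOM neighborhood weighting,
not necessarily the exact factor used by this implementation), the update of each neuron
may be scaled by a factor which decays with the neuron's distance from the winner:</para>
<code lang="none">
factor = exp( -distance^2 / ( 2 * radius^2 ) )
</code>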
</remarks>
</member>
<member name="M:AForge.Neuro.Learning.SOMLearning.#ctor(AForge.Neuro.DistanceNetwork)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.SOMLearning"/> class.
</summary>
<param name="network">Neural network to train.</param>
<remarks><para>This constructor supposes that a square network will be passed for training -
it should be possible to take the square root of the network's neurons count.</para></remarks>
<exception cref="T:System.ArgumentException">Invalid network size - square network is expected.</exception>
</member>
<member name="M:AForge.Neuro.Learning.SOMLearning.#ctor(AForge.Neuro.DistanceNetwork,System.Int32,System.Int32)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.Learning.SOMLearning"/> class.
</summary>
<param name="network">Neural network to train.</param>
<param name="width">Neural network's width.</param>
<param name="height">Neural network's height.</param>
<remarks>The constructor allows passing a network of arbitrary rectangular shape.
The amount of neurons in the network should be equal to <b>width</b> * <b>height</b>.
</remarks>
<exception cref="T:System.ArgumentException">Invalid network size - network size does not correspond
to specified width and height.</exception>
</member>
<member name="M:AForge.Neuro.Learning.SOMLearning.Run(System.Double[])">
<summary>
Runs learning iteration.
</summary>
<param name="input">Input vector.</param>
<returns>Returns learning error - summary absolute difference between neurons' weights
and the corresponding inputs. The difference is weighted according to each neuron's
distance to the winner neuron.</returns>
<remarks><para>The method runs one learning iteration - it finds the winner neuron (the neuron
which has weights with values closest to the specified input vector) and updates its weights
(as well as weights of neighbor neurons) in the way to decrease the difference with the specified
input vector.</para></remarks>
</member>
<member name="M:AForge.Neuro.Learning.SOMLearning.RunEpoch(System.Double[][])">
<summary>
Runs learning epoch.
</summary>
<param name="input">Array of input vectors.</param>
<returns>Returns summary learning error for the epoch. See <see cref="M:AForge.Neuro.Learning.SOMLearning.Run(System.Double[])"/>
method for details about learning error calculation.</returns>
<remarks><para>The method runs one learning epoch, by calling <see cref="M:AForge.Neuro.Learning.SOMLearning.Run(System.Double[])"/> method
for each vector provided in the <paramref name="input"/> array.</para></remarks>
</member>
<member name="T:AForge.Neuro.ActivationNetwork">
<summary>
Activation network.
</summary>
<remarks><para>Activation network is a base for multi-layer neural network
with activation functions. It consists of <see cref="T:AForge.Neuro.ActivationLayer">activation
layers</see>.</para>
<para>Sample usage:</para>
<code>
// create activation network
ActivationNetwork network = new ActivationNetwork(
    new SigmoidFunction( ), // sigmoid activation function
    3,      // 3 inputs
    4, 1 ); // 2 layers:
            // 4 neurons in the first layer
            // 1 neuron in the second layer
</code>
</remarks>
</member>
<member name="M:AForge.Neuro.ActivationNetwork.#ctor(AForge.Neuro.IActivationFunction,System.Int32,System.Int32[])">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.ActivationNetwork"/> class.
</summary>
<param name="function">Activation function of neurons of the network.</param>
<param name="inputsCount">Network's inputs count.</param>
<param name="neuronsCount">Array, which specifies the amount of neurons in
each layer of the neural network.</param>
<remarks>The new network is randomized (see <see cref="M:AForge.Neuro.ActivationNeuron.Randomize"/>
method) after it is created.</remarks>
</member>
<member name="M:AForge.Neuro.ActivationNetwork.SetActivationFunction(AForge.Neuro.IActivationFunction)">
<summary>
Set new activation function for all neurons of the network.
</summary>
<param name="function">Activation function to set.</param>
<remarks><para>The method sets new activation function for all neurons by calling
<see cref="M:AForge.Neuro.ActivationLayer.SetActivationFunction(AForge.Neuro.IActivationFunction)"/> method for each layer of the network.</para>
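<para>For example, switching an existing network to a different activation function
(a minimal sketch; the alpha value is illustrative) is a single call:</para>
<code>
// replace activation function of all neurons of the network
network.SetActivationFunction( new BipolarSigmoidFunction( 1 ) );
</code>
</remarks>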
</member>
<member name="T:AForge.Neuro.DistanceNetwork">
<summary>
Distance network.
</summary>
<remarks>Distance network is a neural network of only one <see cref="T:AForge.Neuro.DistanceLayer">distance
layer</see>. The network is a base for such neural networks as SOM, Elastic net, etc.
</remarks>
</member>
<member name="M:AForge.Neuro.DistanceNetwork.#ctor(System.Int32,System.Int32)">
<summary>
Initializes a new instance of the <see cref="T:AForge.Neuro.DistanceNetwork"/> class.
</summary>
<param name="inputsCount">Network's inputs count.</param>
<param name="neuronsCount">Network's neurons count.</param>
<remarks>The new network is randomized (see <see cref="M:AForge.Neuro.Neuron.Randomize"/>
method) after it is created.</remarks>
</member>
<member name="M:AForge.Neuro.DistanceNetwork.GetWinner">
<summary>
Get winner neuron.
</summary>
<returns>Index of the winner neuron.</returns>
<remarks><para>The method returns index of the neuron, whose weights have
the minimum distance from network's input.</para>
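<para>A typical usage pattern (a minimal sketch) is to compute the network's output
for an input vector first and then query the winner:</para>
<code>
// find the neuron closest to the input vector
network.Compute( input );
int winner = network.GetWinner( );
</code>
</remarks>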
</member>
<member name="T:AForge.Neuro.Network">
<summary>
Base neural network class.
</summary>
<remarks>This is a base neural network class, which represents a
collection of neuron layers.</remarks>
</member>
<member name="F:AForge.Neuro.Network.inputsCount">
<summary>
Network's inputs count.
</summary>
</member>
<member name="F:AForge.Neuro.Network.layersCount">
<summary>
Network's layers count.
</summary>
</member>
<member name="F:AForge.Neuro.Network.layers">
<summary>
Network's layers.
</summary>
</member>
<member name="F:AForge.Neuro.Network.output">
<summary>
Network's output vector.
</summary>
</member>
<member name="P:AForge.Neuro.Network.InputsCount">
<summary>
Network's inputs count.
</summary>
</member>
<member name="P:AForge.Neuro.Network.Layers">
<summary>
Network's layers.
</summary>
</member>
<member name="P:AForge.Neuro.Network.Output">
<summary>
Network's output vector.
</summary>
  1142. <remarks><para>The calculation way of network's output vector is determined by
  1143. layers, which comprise the network.</para>
  1144. <para><note>The property is not initialized (equals to <see langword="null"/>) until
  1145. <see cref="M:AForge.Neuro.Network.Compute(System.Double[])"/> method is called.</note></para>
  1146. </remarks>
  1147. </member>
  1148. <member name="M:AForge.Neuro.Network.#ctor(System.Int32,System.Int32)">
  1149. <summary>
  1150. Initializes a new instance of the <see cref="T:AForge.Neuro.Network"/> class.
  1151. </summary>
  1152. <param name="inputsCount">Network's inputs count.</param>
  1153. <param name="layersCount">Network's layers count.</param>
  1154. <remarks>Protected constructor, which initializes <see cref="F:AForge.Neuro.Network.inputsCount"/>,
  1155. <see cref="F:AForge.Neuro.Network.layersCount"/> and <see cref="F:AForge.Neuro.Network.layers"/> members.</remarks>
  1156. </member>
  1157. <member name="M:AForge.Neuro.Network.Compute(System.Double[])">
  1158. <summary>
  1159. Compute output vector of the network.
  1160. </summary>
  1161. <param name="input">Input vector.</param>
  1162. <returns>Returns network's output vector.</returns>
  1163. <remarks><para>The actual network's output vecor is determined by layers,
  1164. which comprise the layer - represents an output vector of the last layer
  1165. of the network. The output vector is also stored in <see cref="P:AForge.Neuro.Network.Output"/> property.</para>
  1166. <para><note>The method may be called safely from multiple threads to compute network's
  1167. output value for the specified input values. However, the value of
  1168. <see cref="P:AForge.Neuro.Network.Output"/> property in multi-threaded environment is not predictable,
  1169. since it may hold network's output computed from any of the caller threads. Multi-threaded
  1170. access to the method is useful in those cases when it is required to improve performance
  1171. by utilizing several threads and the computation is based on the immediate return value
  1172. of the method, but not on network's output property.</note></para>
  1173. </remarks>
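            <example><para>A minimal sketch; the network instance and the input values are
            illustrative only:</para>
            <code>
            // compute the network's output for the given input vector;
            // the returned array has one element per neuron of the last layer
            double[] output = network.Compute( new double[] { 0.2, 0.5 } );
            </code>
            </example>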
        </member>
        <member name="M:AForge.Neuro.Network.Randomize">
            <summary>
            Randomize layers of the network.
            </summary>
            <remarks>Randomizes network's layers by calling <see cref="M:AForge.Neuro.Layer.Randomize"/> method
            of each layer.</remarks>
        </member>
        <member name="M:AForge.Neuro.Network.Save(System.String)">
            <summary>
            Save network to specified file.
            </summary>
            <param name="fileName">File name to save network into.</param>
            <remarks><para>The neural network is saved using .NET serialization (binary formatter is used).</para></remarks>
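            <example><para>A minimal sketch; the file name is illustrative only:</para>
            <code>
            // save the trained network to a file
            network.Save( "network.bin" );
            </code>
            </example>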
        </member>
        <member name="M:AForge.Neuro.Network.Save(System.IO.Stream)">
            <summary>
            Save network to specified stream.
            </summary>
            <param name="stream">Stream to save network into.</param>
            <remarks><para>The neural network is saved using .NET serialization (binary formatter is used).</para></remarks>
        </member>
        <member name="M:AForge.Neuro.Network.Load(System.String)">
            <summary>
            Load network from specified file.
            </summary>
            <param name="fileName">File name to load network from.</param>
            <returns>Returns instance of <see cref="T:AForge.Neuro.Network"/> class with all properties initialized from file.</returns>
            <remarks><para>Neural network is loaded from file using .NET serialization (binary formatter is used).</para></remarks>
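            <example><para>The method is static and returns the base <see cref="T:AForge.Neuro.Network"/> type,
            so the result is usually cast to the concrete network class; the file name is
            illustrative only:</para>
            <code>
            // load a previously saved network and cast it to its actual type
            ActivationNetwork network = (ActivationNetwork) Network.Load( "network.bin" );
            </code>
            </example>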
        </member>
        <member name="M:AForge.Neuro.Network.Load(System.IO.Stream)">
            <summary>
            Load network from specified stream.
            </summary>
            <param name="stream">Stream to load network from.</param>
            <returns>Returns instance of <see cref="T:AForge.Neuro.Network"/> class with all properties initialized from stream.</returns>
            <remarks><para>Neural network is loaded from stream using .NET serialization (binary formatter is used).</para></remarks>
        </member>
        <member name="T:AForge.Neuro.ActivationNeuron">
            <summary>
            Activation neuron.
            </summary>
            <remarks><para>Activation neuron computes weighted sum of its inputs, adds
            threshold value and then applies <see cref="P:AForge.Neuro.ActivationNeuron.ActivationFunction">activation function</see>.
            The neuron is usually used in multi-layer neural networks.</para></remarks>
            <seealso cref="T:AForge.Neuro.IActivationFunction"/>
        </member>
        <member name="F:AForge.Neuro.ActivationNeuron.threshold">
            <summary>
            Threshold value.
            </summary>
            <remarks>The value is added to the inputs' weighted sum before it is passed to the activation
            function.</remarks>
        </member>
        <member name="F:AForge.Neuro.ActivationNeuron.function">
            <summary>
            Activation function.
            </summary>
            <remarks>The function is applied to the inputs' weighted sum plus the
            threshold value.</remarks>
        </member>
        <member name="P:AForge.Neuro.ActivationNeuron.Threshold">
            <summary>
            Threshold value.
            </summary>
            <remarks>The value is added to the inputs' weighted sum before it is passed to the activation
            function.</remarks>
        </member>
        <member name="P:AForge.Neuro.ActivationNeuron.ActivationFunction">
            <summary>
            Neuron's activation function.
            </summary>
        </member>
        <member name="M:AForge.Neuro.ActivationNeuron.#ctor(System.Int32,AForge.Neuro.IActivationFunction)">
            <summary>
            Initializes a new instance of the <see cref="T:AForge.Neuro.ActivationNeuron"/> class.
            </summary>
            <param name="inputs">Neuron's inputs count.</param>
            <param name="function">Neuron's activation function.</param>
        </member>
        <member name="M:AForge.Neuro.ActivationNeuron.Randomize">
            <summary>
            Randomize neuron.
            </summary>
            <remarks>Calls base class <see cref="M:AForge.Neuro.Neuron.Randomize">Randomize</see> method
            to randomize neuron's weights and then randomizes the threshold's value.</remarks>
        </member>
        <member name="M:AForge.Neuro.ActivationNeuron.Compute(System.Double[])">
            <summary>
            Computes output value of neuron.
            </summary>
            <param name="input">Input vector.</param>
            <returns>Returns neuron's output value.</returns>
            <remarks><para>The output value of the activation neuron is equal to the value
            of the neuron's activation function, whose argument is the weighted sum
            of its inputs plus the threshold value. The output value is also stored
            in the <see cref="P:AForge.Neuro.Neuron.Output">Output</see> property.</para>
            <para><note>The method may be called safely from multiple threads to compute neuron's
            output value for the specified input values. However, the value of the
            <see cref="P:AForge.Neuro.Neuron.Output"/> property in a multi-threaded environment is not predictable,
            since it may hold neuron's output computed by any of the caller threads. Multi-threaded
            access to the method is useful in those cases when it is required to improve performance
            by utilizing several threads and the computation is based on the immediate return value
            of the method, but not on the neuron's output property.</note></para>
            </remarks>
            <exception cref="T:System.ArgumentException">Wrong length of the input vector, which is not
            equal to the <see cref="P:AForge.Neuro.Neuron.InputsCount">expected value</see>.</exception>
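            <example><para>In the notation used elsewhere in this documentation
            (w - weights, x - inputs, n - inputs count), the computed value is:</para>
            <code lang="none">
            s = w[0] * x[0] + w[1] * x[1] + ... + w[n-1] * x[n-1] + threshold
            output = f( s )
            </code>
            </example>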
        </member>
        <member name="T:AForge.Neuro.DistanceNeuron">
            <summary>
            Distance neuron.
            </summary>
            <remarks><para>Distance neuron computes its output as the distance between
            its weights and inputs - the sum of absolute differences between the weights'
            values and the corresponding inputs' values. The neuron is usually used in Kohonen
            Self Organizing Maps.</para></remarks>
        </member>
        <member name="M:AForge.Neuro.DistanceNeuron.#ctor(System.Int32)">
            <summary>
            Initializes a new instance of the <see cref="T:AForge.Neuro.DistanceNeuron"/> class.
            </summary>
            <param name="inputs">Neuron's inputs count.</param>
        </member>
        <member name="M:AForge.Neuro.DistanceNeuron.Compute(System.Double[])">
            <summary>
            Computes output value of neuron.
            </summary>
            <param name="input">Input vector.</param>
            <returns>Returns neuron's output value.</returns>
            <remarks><para>The output value of the distance neuron is equal to the distance
            between its weights and inputs - the sum of absolute differences.
            The output value is also stored in the <see cref="P:AForge.Neuro.Neuron.Output">Output</see>
            property.</para>
            <para><note>The method may be called safely from multiple threads to compute neuron's
            output value for the specified input values. However, the value of the
            <see cref="P:AForge.Neuro.Neuron.Output"/> property in a multi-threaded environment is not predictable,
            since it may hold neuron's output computed by any of the caller threads. Multi-threaded
            access to the method is useful in those cases when it is required to improve performance
            by utilizing several threads and the computation is based on the immediate return value
            of the method, but not on the neuron's output property.</note></para>
            </remarks>
            <exception cref="T:System.ArgumentException">Wrong length of the input vector, which is not
            equal to the <see cref="P:AForge.Neuro.Neuron.InputsCount">expected value</see>.</exception>
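            <example><para>In the same notation as above (w - weights, x - inputs,
            n - inputs count), the computed distance is:</para>
            <code lang="none">
            output = |w[0] - x[0]| + |w[1] - x[1]| + ... + |w[n-1] - x[n-1]|
            </code>
            </example>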
        </member>
        <member name="T:AForge.Neuro.Neuron">
            <summary>
            Base neuron class.
            </summary>
            <remarks>This is a base neuron class, which encapsulates such
            common properties as the neuron's inputs, output and weights.</remarks>
        </member>
        <member name="F:AForge.Neuro.Neuron.inputsCount">
            <summary>
            Neuron's inputs count.
            </summary>
        </member>
        <member name="F:AForge.Neuro.Neuron.weights">
            <summary>
            Neuron's weights.
            </summary>
        </member>
        <member name="F:AForge.Neuro.Neuron.output">
            <summary>
            Neuron's output value.
            </summary>
        </member>
        <member name="F:AForge.Neuro.Neuron.rand">
            <summary>
            Random number generator.
            </summary>
            <remarks>The generator is used for neuron's weights randomization.</remarks>
        </member>
        <member name="F:AForge.Neuro.Neuron.randRange">
            <summary>
            Random generator range.
            </summary>
            <remarks>Sets the range of the random generator. Affects initial values of neuron's weights.
            Default value is <b>[0, 1]</b>.</remarks>
        </member>
        <member name="P:AForge.Neuro.Neuron.RandGenerator">
            <summary>
            Random number generator.
            </summary>
            <remarks>The property allows initializing the random generator with a custom seed. The generator is
            used for neuron's weights randomization.</remarks>
        </member>
        <member name="P:AForge.Neuro.Neuron.RandRange">
            <summary>
            Random generator range.
            </summary>
            <remarks>Sets the range of the random generator. Affects initial values of neuron's weights.
            Default value is <b>[0, 1]</b>.</remarks>
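            <example><para>A minimal sketch of widening the initialization range before
            randomization. It assumes the property is static and that the range is
            represented by the framework's DoubleRange type - both are assumptions to
            verify against the actual declaration:</para>
            <code>
            // assumption: RandRange is a static property of type DoubleRange
            Neuron.RandRange = new DoubleRange( -1.0, 1.0 );
            // re-initialize weights within the new range
            network.Randomize( );
            </code>
            </example>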
        </member>
        <member name="P:AForge.Neuro.Neuron.InputsCount">
            <summary>
            Neuron's inputs count.
            </summary>
        </member>
        <member name="P:AForge.Neuro.Neuron.Output">
            <summary>
            Neuron's output value.
            </summary>
            <remarks>The way the neuron's output value is calculated is determined by the inheriting class.</remarks>
        </member>
        <member name="P:AForge.Neuro.Neuron.Weights">
            <summary>
            Neuron's weights.
            </summary>
        </member>
        <member name="M:AForge.Neuro.Neuron.#ctor(System.Int32)">
            <summary>
            Initializes a new instance of the <see cref="T:AForge.Neuro.Neuron"/> class.
            </summary>
            <param name="inputs">Neuron's inputs count.</param>
            <remarks>The new neuron will be randomized (see <see cref="M:AForge.Neuro.Neuron.Randomize"/> method)
            after it is created.</remarks>
        </member>
        <member name="M:AForge.Neuro.Neuron.Randomize">
            <summary>
            Randomize neuron.
            </summary>
            <remarks>Initializes neuron's weights with random values within the range specified
            by <see cref="P:AForge.Neuro.Neuron.RandRange"/>.</remarks>
        </member>
        <member name="M:AForge.Neuro.Neuron.Compute(System.Double[])">
            <summary>
            Computes output value of neuron.
            </summary>
            <param name="input">Input vector.</param>
            <returns>Returns neuron's output value.</returns>
            <remarks>The actual neuron's output value is determined by the inheriting class.
            The output value is also stored in the <see cref="P:AForge.Neuro.Neuron.Output"/> property.</remarks>
        </member>
    </members>
</doc>