Deep Neural Network

DeepNeuralNetwork inherits the multi-layer perceptron topology but uses a different training procedure. DNN model training consists of two steps:

  • Initial unsupervised pre-training of the individual hidden layers;

  • Whole-model tuning using the iRPROP+ back-propagation variant.
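The two steps above can be illustrated with a minimal, hypothetical sketch (Python with NumPy rather than the library's C# API): each hidden layer is pre-trained as a sigmoid autoencoder on the previous layer's output, and the resulting weights would then initialize the full network for supervised fine-tuning. All names and the plain gradient-descent update are assumptions for illustration; the library itself uses iRPROP+ for the fine-tuning step.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pretrain_layer(inputs, hidden_units, iters=200, lr=0.5):
    """Unsupervised pre-training of one hidden layer as an autoencoder:
    learn encoder weights that let the layer reconstruct its own input."""
    n = inputs.shape[1]
    w_enc = rng.normal(0, 0.1, (n, hidden_units))
    w_dec = rng.normal(0, 0.1, (hidden_units, n))
    for _ in range(iters):
        h = sigmoid(inputs @ w_enc)      # encode
        recon = sigmoid(h @ w_dec)       # decode
        err = recon - inputs             # reconstruction error
        # Gradients of the squared reconstruction error.
        d_recon = err * recon * (1 - recon)
        d_h = (d_recon @ w_dec.T) * h * (1 - h)
        w_dec -= lr * h.T @ d_recon / len(inputs)
        w_enc -= lr * inputs.T @ d_h / len(inputs)
    return w_enc, sigmoid(inputs @ w_enc)

# Step 1: greedy layer-wise pre-training, each hidden layer trained
# on the activations produced by the layer before it.
x = rng.random((64, 8))
layer_sizes = [8, 6, 4]              # input -> hidden -> hidden (illustrative)
weights, activ = [], x
for units in layer_sizes[1:]:
    w, activ = pretrain_layer(activ, units)
    weights.append(w)

# Step 2: the pre-trained weights would now initialize the whole network,
# which is then fine-tuned end-to-end by back-propagation (iRPROP+ in the
# library; omitted here).
```

The key design point is that pre-training gives each hidden layer a data-driven starting point, so the subsequent whole-model optimization starts near a useful region of weight space instead of from random initialization.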

NB. The training termination condition applies to each (pre-)training process separately. For example, if training must stop after 1000 iterations and the network consists of 5 layers, a total of 1000 * ((5 - 2) /*hidden layers*/ + 1 /*whole-model training*/) training iterations will be performed. Note that the unsupervised pre-training typically converges much faster and requires fewer computations than the whole-model optimization.
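The iteration budget described in the note reduces to simple arithmetic; the sketch below (Python, with hypothetical names) reproduces it:

```python
def total_train_iterations(max_iterations_per_stage, total_layers):
    # Hidden layers are pre-trained individually; the input and
    # output layers are excluded from pre-training.
    hidden_layers = total_layers - 2
    # One pre-training run per hidden layer, plus one whole-model
    # fine-tuning run, each bounded by the same termination condition.
    stages = hidden_layers + 1
    return max_iterations_per_stage * stages

# Example from the note: 1000-iteration limit, 5-layer network.
print(total_train_iterations(1000, 5))  # 4000
```

In practice the pre-training stages usually stop well before the limit, so this is an upper bound rather than the expected cost.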

Implementation

The most important methods and properties featured by the class are described below.

Code sample

C#
var modelDnn = new DeepNeuralNetwork();
watch.Reset();
watch.Start();
modelDnn.Train(trainObs, trainCl, new int[] {
    trainObs.Columns,
    2 * trainObs.Columns + uniqueClassesNumber,
    trainObs.Columns + uniqueClassesNumber,
    uniqueClassesNumber });
watch.Stop();
Estimate("DNN", watch.Elapsed, modelDnn, testObs, testCl, gridData);

See Also