eneural_net

eNeural.net / Dart is an AI library for efficient Artificial Neural Networks. The library is portable (native, JS/Web, Flutter) and its computation can use SIMD (Single Instruction Multiple Data) to improve performance.

Usage

import 'package:eneural_net/eneural_net.dart';
import 'package:eneural_net/eneural_net_extensions.dart';

void main() {
  // Type of scale to use to compute the ANN:
  var scale = ScaleDouble.ZERO_TO_ONE;

  // The samples to learn, in Float32x4 data type:
  var samples = SampleFloat32x4.toListFromString(
    [
      '0,0=0',
      '1,0=1',
      '0,1=1',
      '1,1=0',
    ],
    scale,
    true, // Already normalized in the scale.
  );

  var samplesSet = SamplesSet(samples, subject: 'xor');

  // The activation function to use in the ANN:
  var activationFunction = ActivationFunctionSigmoid();

  // The ANN, using layers that can compute with Float32x4 (SIMD compatible type):
  var ann = ANN(
    scale,
    // Input layer: 2 neurons with linear activation function:
    LayerFloat32x4(2, true, ActivationFunctionLinear()),
    // 1 hidden layer: 3 neurons with sigmoid activation function:
    [HiddenLayerConfig(3, true, activationFunction)],
    // Output layer: 1 neuron with sigmoid activation function:
    LayerFloat32x4(1, false, activationFunction),
  );

  print(ann);

  // Training algorithm:
  var backpropagation = Backpropagation(ann, samplesSet);

  print(backpropagation);

  print('\n---------------------------------------------------');

  var chronometer = Chronometer('Backpropagation').start();

  // Train the ANN using Backpropagation until global error 0.01,
  // with a max of 50000 epochs per training session and
  // a max of 10 retries when a training session can't reach
  // the target global error:
  var achievedTargetError = backpropagation.trainUntilGlobalError(
      targetGlobalError: 0.01, maxEpochs: 50000, maxRetries: 10);

  chronometer.stop(operations: backpropagation.totalTrainingActivations);

  print('---------------------------------------------------\n');

  // Compute the current global error of the ANN:
  var globalError = ann.computeSamplesGlobalError(samples);

  print('Samples Outputs:');
  for (var i = 0; i < samples.length; ++i) {
    var sample = samples[i];

    var input = sample.input;
    var expected = sample.output;

    // Activate the sample input:
    ann.activate(input);

    // The current output of the ANN (after activation):
    var output = ann.output;

    print('- $i> $input -> $output ($expected) > error: ${output - expected}');
  }

  print('\nglobalError: $globalError');
  print('achievedTargetError: $achievedTargetError\n');

  print(chronometer);
}

Output:

ANN<double, Float32x4, SignalFloat32x4, Scale<double>>{ layers: 2+ -> [3+] -> 1 ; ScaleDouble{0.0 .. 1.0} ; ActivationFunctionSigmoid }
Backpropagation<double, Float32x4, SignalFloat32x4, Scale<double>, SampleFloat32x4>{name: Backpropagation}

---------------------------------------------------
Backpropagation> [INFO] Started Backpropagation training session "xor". { samples: 4 ; targetGlobalError: 0.01 }
Backpropagation> [INFO] Selected initial ANN from poll of size 100, executing 600 epochs. Lowest error: 0.2451509315860858 (0.2479563313068569)
Backpropagation> [INFO] (OK) Reached target error in 2317 epochs (107 ms). Final error: 0.009992250436771877 <= 0.01
---------------------------------------------------

Samples Outputs:
- 0> [0, 0] -> [0.11514352262020111] ([0]) > error: [0.11514352262020111]
- 1> [1, 0] -> [0.9083549976348877] ([1]) > error: [-0.0916450023651123]
- 2> [0, 1] -> [0.9032943248748779] ([1]) > error: [-0.09670567512512207]
- 3> [1, 1] -> [0.09465821087360382] ([0]) > error: [0.09465821087360382]

globalError: 0.009992250436771877
achievedTargetError: true

Backpropagation{elapsedTime: 111 ms, hertz: 83495.49549549549 Hz, ops: 9268, startTime: 2021-05-26 06:25:34.825383, stopTime: 2021-05-26 06:25:34.936802}

SIMD (Single Instruction Multiple Data)

Dart has support for SIMD when computation is done using Float32x4 and Int32x4.
The activation functions are implemented using Float32x4, improving
performance by 1.5x to 2x when compared to the normal implementation.

The basic principle of SIMD is to execute math operations simultaneously on 4 numbers.

Float32x4 holds 4 lanes of single-precision (32-bit) floating-point values.
Example of multiplication:

  var fs1 = Float32x4(1.1, 2.2, 3.3, 4.4);
  var fs2 = Float32x4(10, 100, 1000, 1000);

  var fs3 = fs1 * fs2;

  print(fs3);
  // Output:
  // [11.000000, 220.000000, 3300.000000, 4400.000000]

See “dart:typed_data library” and “Using SIMD in Dart”.
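The same lane-wise principle extends to activation functions: an approximation that uses only multiplication, addition, division and abs() can be evaluated on 4 values at once, since Float32x4 supports all of those operations directly. A minimal sketch (sigmoidFast32x4 is a hypothetical name for illustration, not part of the library's API):

```dart
import 'dart:typed_data';

// Lane-wise fast sigmoid approximation (hypothetical helper, not the
// library's implementation): 0.5 + (3x / (2.5 + |3x|)) / 2, applied to
// all 4 lanes of a Float32x4 with single SIMD operations per step.
Float32x4 sigmoidFast32x4(Float32x4 x) {
  x = x * Float32x4.splat(3.0);
  return Float32x4.splat(0.5) +
      (x / (Float32x4.splat(2.5) + x.abs())) / Float32x4.splat(2.0);
}

void main() {
  var out = sigmoidFast32x4(Float32x4(-2.0, -0.5, 0.5, 2.0));
  print(out); // 4 values in (0.0 .. 1.0), increasing with the input.
}
```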

Signal

The class Signal represents the collection of numbers (including its related operations)
that will flow through the ANN, representing the actual signal that
an Artificial Neural Network should compute.

The main implementation is SignalFloat32x4, which represents
an ANN signal based on Float32x4. All of its operations prioritize the use of SIMD.

The Signal framework allows any kind of data to be used
to represent the numbers and operations of an [eneural.net] ANN. SignalInt32x4
is an experimental implementation to exercise an ANN based on integers.
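To make the lane packing concrete, here is a rough sketch of the idea behind a Float32x4-based signal: values are grouped 4 at a time, zero-padding the last lane, so each arithmetic instruction later operates on 4 entries. packSignal is a hypothetical helper for illustration, not SignalFloat32x4's actual code:

```dart
import 'dart:typed_data';

// Hypothetical helper: packs a list of values into 4-wide SIMD lanes,
// padding the final lane with zeros when the length is not a multiple of 4.
List<Float32x4> packSignal(List<double> values) {
  var lanes = <Float32x4>[];
  for (var i = 0; i < values.length; i += 4) {
    double at(int j) => j < values.length ? values[j] : 0.0;
    lanes.add(Float32x4(at(i), at(i + 1), at(i + 2), at(i + 3)));
  }
  return lanes;
}

void main() {
  // 6 values fit in 2 lanes of 4; the last 2 entries are zero padding:
  var lanes = packSignal([0.1, 0.2, 0.3, 0.4, 0.5, 0.6]);
  print(lanes.length); // 2
}
```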

Activation Functions

ActivationFunction is the base class for ANN neuron activation functions:

  • ActivationFunctionSigmoid:

    The classic Sigmoid function (returns, for x, a value between 0.0 and 1.0):

    activation(double x) {
      return 1 / (1 + exp(-x)) ;
    }

  • ActivationFunctionSigmoidFast:

    Fast approximation version of the Sigmoid function, not based on exp(x):

    activation(double x) {
      x *= 3 ;
      return 0.5 + ((x) / (2.5 + x.abs()) / 2) ;
    }

    Function author: Graciliano M. Passos: [[email protected]][github].

  • ActivationFunctionSigmoidBoundedFast:

    Fast approximation version of the Sigmoid function, not based on exp(x),
    bounded to a lower and upper limit for x:

    activation(double x) {
      if (x < lowerLimit) {
        return 0.0 ;
      } else if (x > upperLimit) {
        return 1.0 ;
      }
      x = x / scale ;
      return 0.5 + (x / (1 + (x * x))) ;
    }

    Function author: Graciliano M. Passos: [[email protected]][github].
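To see how close the approximation is, the classic and fast formulas above can be compared side by side as plain scalar Dart functions (the library wraps the real versions in ActivationFunction classes; these standalone copies are just for illustration):

```dart
import 'dart:math' as math;

// The classic Sigmoid formula, as used by ActivationFunctionSigmoid:
double sigmoid(double x) => 1 / (1 + math.exp(-x));

// The fast approximation formula, as used by ActivationFunctionSigmoidFast:
double sigmoidFast(double x) {
  x *= 3;
  return 0.5 + ((x) / (2.5 + x.abs()) / 2);
}

void main() {
  for (var x in [-2.0, -1.0, 0.0, 1.0, 2.0]) {
    print('x=$x  sigmoid=${sigmoid(x).toStringAsFixed(4)}'
        '  sigmoidFast=${sigmoidFast(x).toStringAsFixed(4)}');
  }
}
```

Both agree exactly at x = 0 (0.5) and differ by a few hundredths for small |x|, which is usually close enough for training.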

exp(x)

exp is the function of the natural exponent,
e, to the power of x.

This is an important function for ANNs, since it is used by the popular
Sigmoid function. A high-precision version is usually slow,
so approximation versions can be used for most ANN models and training
algorithms.
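As an illustration of that speed/precision trade-off (not the library's fast-math code), a classic cheap approximation of e^x uses the limit (1 + x/n)^n with n = 2^10, computed with 10 squarings instead of a transcendental call:

```dart
import 'dart:math' as math;

// Approximates e^x via (1 + x/1024)^1024, computed by squaring 10 times.
// Illustrative sketch only; accuracy degrades for large |x|.
double expApprox(double x) {
  var r = 1.0 + x / 1024;
  for (var i = 0; i < 10; i++) {
    r *= r;
  }
  return r;
}

void main() {
  // Close to math.exp(1.0) = 2.71828..., with only cheap arithmetic:
  print('${expApprox(1.0)} vs ${math.exp(1.0)}');
}
```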

Fast Math

An internal fast math library is included, and can be used on platforms
that are not efficient at computing exp (the exponential function).

You can import this library to create a specialized
ActivationFunction implementation, or use it in any kind of project:

import 'dart:typed_data';

import 'package:eneural_net/eneural_net_fast_math.dart' as fast_math;

void main() {
  // Fast exponential function:
  var o = fast_math.exp(2);

  // Fast exponential function with high precision:
  var highPrecision = <double>[0.0, 0.0];
  var oHighPrecision = fast_math.expHighPrecision(2, 0.0, highPrecision);

  // Fast exponential function with SIMD acceleration:
  var o32x4 = fast_math.expFloat32x4(Float32x4(2, 3, 4, 5));
}

The implementation is based on the Dart package complex.

The fast_math.expFloat32x4 function was created by Graciliano M. Passos ([[email protected]][github]).

