Maxim Milakov edited this page Dec 16, 2016 · 1 revision

What is nnForge?

nnForge is a framework for training convolutional and fully-connected neural networks, with both CPU and GPU (CUDA) backends. Network topology is described as a DAG (directed acyclic graph).

Layers

The framework provides the following layer types:

  • Convolutional
  • Batch Normalization
  • Sparse (in feature map dimension) convolutional
  • Fully connected
  • Local contrast subtractive
  • RGB->YUV conversion
  • Max subsampling
  • Average subsampling
  • Upsampling
  • Rectification - |x|
  • Hyperbolic tangent
  • Sigmoid
  • ReLU
  • Parametric ReLU
  • Softmax
  • Dropout
  • Concat
  • Reshape
  • Add (element-wise)
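Several of the layers above are simple element-wise or pooling operations. The sketch below shows their standard mathematical definitions in NumPy; it is purely illustrative and is not nnForge's C++ implementation.

```python
import numpy as np

def rectification(x):
    # Rectification layer: |x|
    return np.abs(x)

def relu(x):
    # ReLU: max(0, x)
    return np.maximum(0.0, x)

def prelu(x, a=0.25):
    # Parametric ReLU: slope `a` for negative inputs is a learned parameter
    return np.where(x >= 0, x, a * x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def max_subsampling(x, k=2):
    # Max subsampling (pooling): maximum over non-overlapping k x k blocks
    h, w = x.shape
    return x.reshape(h // k, k, w // k, k).max(axis=(1, 3))
```

For example, `max_subsampling` applied to a 4x4 input with `k=2` yields a 2x2 output holding the maximum of each 2x2 block.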

Error functions

Error functions available:

  • L-Error
  • Negative Log-Likelihood
  • Cross-Entropy
  • Accuracy (top-n) (cannot be source of error for backward propagation)
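The error functions above follow their usual definitions. A minimal NumPy sketch of negative log-likelihood, cross-entropy, and top-n accuracy (the function names and signatures here are illustrative, not nnForge's API):

```python
import numpy as np

def negative_log_likelihood(probs, target_idx):
    # NLL of the probability the model assigns to the true class
    return -np.log(probs[target_idx])

def cross_entropy(probs, targets):
    # Cross-entropy between a predicted distribution and a target distribution
    return -np.sum(targets * np.log(probs))

def top_n_accuracy(scores, target_idx, n=1):
    # 1 if the true class is among the n highest-scoring classes, else 0.
    # As noted above, this is a metric only: no gradient flows back from it.
    return int(target_idx in np.argsort(scores)[-n:])
```

With a one-hot target, cross-entropy reduces to the negative log-likelihood of the true class.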

Training

The library implements the mini-batch Stochastic Gradient Descent training algorithm with optional momentum (vanilla or Nesterov); Adam is supported as well.
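The difference between vanilla and Nesterov momentum can be sketched with the standard update rules below (a generic illustration with made-up hyperparameter values, not nnForge's internal code):

```python
import numpy as np

def sgd_momentum_step(w, v, grad, lr=0.01, mu=0.9, nesterov=False):
    # Vanilla momentum evaluates the gradient at w;
    # Nesterov momentum evaluates it at the look-ahead point w + mu * v.
    g = grad(w + mu * v) if nesterov else grad(w)
    v = mu * v - lr * g
    return w + v, v

# Usage: minimize f(w) = w^2, whose gradient is 2w
w, v = np.array(5.0), np.array(0.0)
for _ in range(200):
    w, v = sgd_momentum_step(w, v, lambda w: 2.0 * w, nesterov=True)
```

After a few hundred steps `w` is driven close to the minimum at zero.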

Regularization

  • Weight decay
  • Dropout (as a separate layer)
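Weight decay is typically applied inside the optimizer update by augmenting the gradient with a term proportional to the weights, which is equivalent to adding an L2 penalty `(decay / 2) * ||w||^2` to the loss. A minimal sketch with illustrative values (not nnForge defaults):

```python
import numpy as np

def sgd_step_with_weight_decay(w, grad, lr=0.01, decay=1e-4):
    # The decay term shrinks weights toward zero on every update
    return w - lr * (grad + decay * w)
```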

Multi-GPU

  • The framework can utilize multiple GPUs both for training and inference
  • Training is parallelized using a data-parallel approach, where each mini-batch is split across multiple GPUs
  • Single node only
  • NCCL is supported
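The data-parallel scheme can be sketched as follows: the mini-batch is split into shards (one per GPU), per-shard gradients are computed independently, and the results are averaged (the all-reduce step that NCCL accelerates). This is a pure NumPy stand-in; `grad_fn` and `num_gpus` are illustrative names, not nnForge API.

```python
import numpy as np

def data_parallel_gradient(batch_x, batch_y, grad_fn, num_gpus=2):
    # Split the mini-batch into one shard per "GPU"
    shards_x = np.array_split(batch_x, num_gpus)
    shards_y = np.array_split(batch_y, num_gpus)
    # Each shard's gradient would be computed on its own device
    grads = [grad_fn(x, y) for x, y in zip(shards_x, shards_y)]
    # Averaging recovers the full-batch gradient (exactly so when
    # the shards are of equal size)
    return np.mean(grads, axis=0)

# Usage: mean-squared-error gradient of a linear model x @ w
def grad_fn(x, y, w=np.array([0.5, -0.5])):
    err = x @ w - y
    return x.T @ err / len(y)
```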

License

nnForge is open-source software distributed under the Apache License v2.0.

Download

Download the latest version from GitHub, where all releases and their release notes are available.

The package contains the nnForge framework as well as examples: applications built with the framework.

Prerequisites

The framework depends on Boost, OpenCV, and Protobuf.

If you want to use the CUDA backend, you will also need the CUDA Toolkit and cuDNN v4 or newer installed.

Authors

nnForge is designed and implemented by Maxim Milakov.