lrn2: a framework for learning representations from data
The latest version of the Python lrn2 package for representation learning is 2.0, released on 21 May 2015.
Changes since v1.0:
- Full reimplementation of all model classes.
- Neural Network layers and stacks can be constructed in a building-block manner (via mix-in classes).
- Several layer types are available (and easy to extend), including convolutional layers, various unit types, cost functions, regularizers, and training methods. For example, a convolutional RBM with GOH sparsity regularization and dropout can be created simply by mixing the respective layer classes.
- Separation of model creation and model training.
- Separation of model architecture and model parameters (the same parameters can be loaded into different architectures).
- Unified and optimized training methods (e.g. the same basic method trains both RBMs and FFNNs; the current state is saved on interrupt and training resumes automatically).
- Data can now be provided batch by batch via callback functions (useful for large datasets).
- Still easy to use: the architecture and training (meta-)parameters are specified via configuration files.
- Added a specification format for configuration files, to ensure well-formedness and to allow default values to be defined.
- Extended documentation; all important classes and methods are now fully documented.
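
The mix-in-based construction mentioned above can be sketched in plain Python; all class and method names here are illustrative stand-ins, not lrn2's actual API:

```python
# Sketch of mix-in layer construction (illustrative names, not lrn2's API):
# orthogonal features are combined by listing mix-in classes as bases.

class ConvLayer:
    """Base behaviour: a (stub) convolutional layer."""
    def cost(self, value):
        return value  # plain reconstruction cost stub

class SparsityMixin:
    """Adds a sparsity penalty on top of whatever cost the base defines."""
    def cost(self, value):
        return super().cost(value) + 0.1  # penalty term stub

class DropoutMixin:
    """Marks the layer as using dropout during training."""
    uses_dropout = True

# A "convolutional layer with sparsity and dropout" is just a mix:
class SparseDropoutConvLayer(DropoutMixin, SparsityMixin, ConvLayer):
    pass

layer = SparseDropoutConvLayer()
```

Cooperative `super()` calls let each mix-in wrap the next class in the method resolution order, which is what makes independent features composable this way.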
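
The separation of architecture and parameters means trained weights can be serialized independently of the model object and loaded into a freshly constructed (compatible) architecture. A minimal sketch of the idea, not lrn2's actual serialization:

```python
import pickle

class Model:
    """Stub model: architecture (shape) is fixed at construction,
    parameters (weights) are held separately and can be swapped."""
    def __init__(self, n_in, n_out):
        self.shape = (n_in, n_out)
        self.weights = [[0.0] * n_out for _ in range(n_in)]

    def get_params(self):
        return {"weights": self.weights}

    def set_params(self, params):
        # Only parameter values are loaded; the architecture stays as built.
        self.weights = params["weights"]

# Train one model, then load its parameters into a newly built instance
# (e.g. the same layer reused inside a different stack).
a = Model(2, 3)
a.weights[0][0] = 0.5
blob = pickle.dumps(a.get_params())

b = Model(2, 3)
b.set_params(pickle.loads(blob))
```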
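
Providing data batch by batch via a callback, as described above, can be sketched as follows; the trainer name and signature are hypothetical:

```python
def train(batch_callback, n_epochs=2):
    """Hypothetical trainer that pulls batches through a callback
    instead of holding the whole dataset in memory."""
    seen = 0
    for _epoch in range(n_epochs):
        for batch in batch_callback():
            seen += len(batch)  # a real trainer would update weights here
    return seen

def batches():
    # Could stream from disk or the network; tiny in-memory stand-in here.
    data = list(range(10))
    for i in range(0, len(data), 4):
        yield data[i:i + 4]

total = train(batches)  # 2 epochs over 10 items
```

Because the trainer only ever holds one batch at a time, datasets larger than memory can be consumed by pointing the callback at a file or database cursor.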
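
A configuration specification with defaults and well-formedness checks follows a common pattern; here is a minimal stdlib sketch using `configparser`, with a hypothetical spec (lrn2's actual config format is not shown):

```python
import configparser

# Hypothetical spec: the known options and their default values.
SPEC = {
    "learning_rate": "0.01",
    "epochs": "100",
}

def load_config(text):
    cp = configparser.ConfigParser(defaults=SPEC)
    cp.read_string(text)
    section = cp["TRAINING"]
    # Well-formedness: reject options the spec does not know about.
    for key in section:
        if key not in SPEC:
            raise ValueError(f"unknown option: {key}")
    # Missing options fall back to the spec's defaults.
    return {k: float(section[k]) for k in SPEC}

cfg = load_config("[TRAINING]\nlearning_rate = 0.05\n")
```

The spec plays two roles: it supplies defaults for omitted options and defines the set of valid keys, so a typo in a config file fails loudly instead of being silently ignored.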