Hello MXNet community,

Keras users can now use the high-performance MXNet deep learning engine for distributed training of convolutional neural networks (CNNs) and recurrent neural networks (RNNs). By changing just a few lines of code, Keras developers can speed up training with MXNet's multi-GPU distributed training capabilities. Saving a trained model in native MXNet format is another valuable feature of this release: you can design in Keras, train with Keras-MXNet, and run inference in production, at scale, with MXNet.
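To make the "few lines of code" concrete, here is a minimal sketch. It assumes the MXNet backend is enabled in ~/.keras/keras.json ("backend": "mxnet", with "image_data_format": "channels_first", which MXNet prefers); the toy model, gpus=4, and the exact save_mxnet_model arguments are illustrative, so check the repo docs for the precise export API:

    import numpy as np
    from keras.models import Sequential, save_mxnet_model
    from keras.layers import Conv2D, Flatten, Dense
    from keras.utils import multi_gpu_model

    # A small example CNN (channels_first shapes, as preferred by MXNet).
    model = Sequential([
        Conv2D(32, (3, 3), activation='relu', input_shape=(3, 32, 32)),
        Flatten(),
        Dense(10, activation='softmax'),
    ])

    # The key change: replicate the model across GPUs for training.
    # gpus=4 is an example value -- set it to the number of GPUs you have.
    parallel_model = multi_gpu_model(model, gpus=4)
    parallel_model.compile(loss='categorical_crossentropy', optimizer='sgd')

    # Train on random data, just to illustrate the API.
    x = np.random.random((256, 3, 32, 32))
    y = np.random.random((256, 10))
    parallel_model.fit(x, y, batch_size=64, epochs=1)

    # Export in native MXNet format (symbol + params) for inference.
    # Treat the exact signature as an assumption; see the repo docs.
    data_names, data_shapes = save_mxnet_model(model=model, prefix='my_cnn',
                                               epoch=0)

The exported my_cnn-symbol.json and parameter files can then be loaded directly in MXNet (e.g. via the Module API) for production inference, with no Keras dependency at serving time.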
From our initial benchmarks, CNN training with Keras-MXNet is up to 3x faster on GPUs than with the default backend. See the benchmark module <https://github.com/awslabs/keras-apache-mxnet/tree/master/benchmark> for more details.

RNN support in this release is experimental, with a few known issues and unsupported functionalities. See the doc on using RNNs with the MXNet backend <https://github.com/awslabs/keras-apache-mxnet/blob/master/docs/mxnet_backend/using_rnn_with_mxnet_backend.md> for limitations and workarounds, and the Release Notes <https://github.com/awslabs/keras-apache-mxnet/releases/tag/v2.1.6> for details on unsupported operators and known issues. We will continue to close these gaps in future releases.

Thank you to all the contributors: Lai Wei <https://github.com/roywei>, Karan Jariwala <https://github.com/karan6181/>, Jiajie Chen <https://github.com/jiajiechen>, Kalyanee Chendke <https://github.com/kalyc>, and Junyuan Xie <https://github.com/piiswrong>.

We welcome your contributions: https://github.com/awslabs/keras-apache-mxnet. This issue tracks the list of operators still to be implemented; do check it out and create a PR: https://github.com/awslabs/keras-apache-mxnet/issues/18

Best,
Sandeep
