**RNN-related data, covering both accuracy and performance/benchmarking.**
**Accuracy**
1. **_A GNMT model_** implemented in gluon-nlp (scripts/nmt/train_gnmt.py), trained on the IWSLT2015 dataset for en-vi translation. The encoder and decoder are each a 2-layer LSTM. Because the implementation uses unfused `gluon.rnn` cells, the MKL-DNN FullyConnected kernel is exercised. The figure below shows the perplexity (ppl) curves collected on GPU and CPU with the same hyper-parameters; the two curves align very well.
![image](https://user-images.githubusercontent.com/33112206/46126432-d4a40200-c25f-11e8-8d03-8f0cfcd9712c.png)
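To illustrate why an unfused RNN cell exercises the FullyConnected kernel, here is a minimal NumPy sketch of a single LSTM time step (names like `lstm_step`, `W_x`, `W_h` are hypothetical, not from gluon-nlp): each step computes its four gates via separate matrix multiplies, i.e. FC ops that a backend such as MKL-DNN can cover, rather than one fused RNN primitive.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W_x, W_h, b):
    """One unfused LSTM step.

    x: (batch, input_size), h/c: (batch, hidden_size)
    W_x: (input_size, 4*hidden_size), W_h: (hidden_size, 4*hidden_size)
    """
    n = h.shape[1]
    # The two matmuls below are the FullyConnected ops that show up
    # per time step when the cell is unfused.
    gates = x @ W_x + h @ W_h + b
    i = sigmoid(gates[:, :n])        # input gate
    f = sigmoid(gates[:, n:2 * n])   # forget gate
    g = np.tanh(gates[:, 2 * n:3 * n])  # candidate state
    o = sigmoid(gates[:, 3 * n:])    # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

A fused kernel would instead run the whole sequence inside one primitive, so these per-step FC calls would not be visible to the backend.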



[ Full content available at: 
https://github.com/apache/incubator-mxnet/pull/12591 ]