[GitHub] [incubator-mxnet] zixuanweeei commented on issue #15745: Memory layout in the LSTM operator

2019-08-05 Thread GitBox
zixuanweeei commented on issue #15745: Memory layout in the LSTM operator URL: https://github.com/apache/incubator-mxnet/issues/15745#issuecomment-518461277 Feel free to mention me directly here if you have any questions. BTW, we are working on integrating the LBR-GRU (linear-before-reset GRU) of MKL-DNN.

[GitHub] [incubator-mxnet] zixuanweeei commented on issue #15745: Memory layout in the LSTM operator

2019-08-05 Thread GitBox
zixuanweeei commented on issue #15745: Memory layout in the LSTM operator URL: https://github.com/apache/incubator-mxnet/issues/15745#issuecomment-518454755 Is there any problem with the order? The native LSTM implementation of MXNet shares the same gate order as that of MKL-DNN.
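A minimal sketch of what "same gate order" means in practice: if both implementations fuse the four LSTM gate weight matrices into one array in the same order, slices taken at matching offsets refer to the same gates. The gate order `(i, f, g, o)` and the shapes below are illustrative assumptions, not confirmed by this thread.

```python
import numpy as np

# Illustrative only: a fused LSTM gate weight matrix of shape
# (ngates * H, H), sliced into per-gate (H, H) blocks.
# Gate order (i, f, g, o) is an assumption for this sketch.
H = 4                         # hidden size (example value)
ngates = 4                    # an LSTM has 4 gates
W = np.arange(ngates * H * H, dtype=np.float32).reshape(ngates * H, H)

gate_names = ["i", "f", "g", "o"]
gates = {name: W[k * H:(k + 1) * H] for k, name in enumerate(gate_names)}

# Each slice is one gate's (H, H) weight block; two implementations
# agree on layout iff they agree on this slicing order.
for name, block in gates.items():
    print(name, block.shape)
```

If the two libraries disagreed on the order, the slices would still have the right shapes but would map to the wrong gates, which is exactly the kind of silent mismatch the question is probing for.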

[GitHub] [incubator-mxnet] zixuanweeei commented on issue #15745: Memory layout in the LSTM operator

2019-08-03 Thread GitBox
zixuanweeei commented on issue #15745: Memory layout in the LSTM operator URL: https://github.com/apache/incubator-mxnet/issues/15745#issuecomment-517971666 @eloi-loomai The memory layout of the weights is:
```
L * H * ngates * H
L * H * ngates * H
L * nbias * H
```
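The three layout lines above read as element counts for three flat blocks: two gate-weight blocks of `L * H * ngates * H` elements each, followed by a bias block of `L * nbias * H` elements. A small sketch of the arithmetic, with example values (the concrete numbers for `L`, `H`, and `nbias` are assumptions for illustration; the comment itself is truncated and does not fix them):

```python
# Sketch of the total parameter count implied by the layout lines:
#   L * H * ngates * H   (first weight block)
#   L * H * ngates * H   (second weight block)
#   L * nbias * H        (bias block)
# Example values below are assumptions, not taken from the thread.
L = 1          # number of layers
H = 4          # hidden size
ngates = 4     # an LSTM has 4 gates
nbias = 4      # bias terms per hidden unit (assumed)

w_block_1 = L * H * ngates * H   # 64 elements with these values
w_block_2 = L * H * ngates * H   # 64 elements
bias_block = L * nbias * H       # 16 elements

total = w_block_1 + w_block_2 + bias_block
print(total)  # 144 with the example values above
```

Knowing these per-block sizes is what lets you split a single flattened parameter vector back into the weight and bias regions at the correct offsets.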