zixuanweeei commented on issue #15745: Memory layout in the LSTM operator
URL: https://github.com/apache/incubator-mxnet/issues/15745#issuecomment-517971666

@eloi-loomai The memory layout of the weights is:

```
  L * H * ngates * H     L * H * ngates * H      L * nbias * H
+----------------------+----------------------+-----------------+
workptr                weight_iter_n          bias_n            others
(weight_layer_n)
```

So it should be `DType* bias_n = weight_iter_n + L * H * ngates * H;`. Note also that the [LSTM formula of MXNet](https://mxnet.incubator.apache.org/api/python/gluon/rnn.html#mxnet.gluon.rnn.LSTMCell) differs from [that of MKL-DNN](https://intel.github.io/mkl-dnn/dev_guide_rnn.html). MXNet has two bias terms in each gate of its RNN variants, while MKL-DNN has only a single bias, except for the bias on the current memory content in LBR-GRU.