qqaatw opened a new issue #14708: Variable-length LSTM in a mini-batch URL: https://github.com/apache/incubator-mxnet/issues/14708

Environment:
1. Python 3.6.3
2. MXNet-cu92
3. CUDA 9.2 with cuDNN

I'm working on a text-processing network with variable-length inputs (time steps) in each batch, and I have already padded the shorter sequences with zeros up to the maximum sequence length. Is there a way to mask the padded zeros so they do not affect the parameter updates?

P.S. I saw [this](https://github.com/apache/incubator-mxnet/pull/3338) PR before, but I'm not sure whether the padded zeros would still be updated or not.
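For reference, a minimal sketch of one way to handle this with the `mx.nd.SequenceMask` / `mx.nd.SequenceLast` operators: zero out the time steps past each sequence's true length before pooling or computing a per-step loss, so the padded positions contribute nothing to the gradients. This is an illustrative assumption, not the confirmed answer to the issue; the shapes, the `valid_len` values, and the Gluon `LSTM` setup are made up for the example.

```python
import mxnet as mx
from mxnet import nd, gluon

batch_size, max_len, feat_dim, num_hidden = 4, 10, 8, 32

# Fused LSTM layer with time-major layout (time, batch, channel).
lstm = gluon.rnn.LSTM(num_hidden, layout='TNC')
lstm.initialize()

# Zero-padded input batch: (max_len, batch_size, feat_dim).
x = nd.random.uniform(shape=(max_len, batch_size, feat_dim))
# True (unpadded) length of each sequence in the batch (illustrative values).
valid_len = nd.array([10, 7, 5, 3])

out = lstm(x)  # (max_len, batch_size, num_hidden)

# Zero the outputs at every time step beyond each sequence's valid length
# (axis 0 is the time axis), so padded steps do not feed the loss.
masked = nd.SequenceMask(out, sequence_length=valid_len,
                         use_sequence_length=True, value=0.0)

# Alternatively, take the hidden state at the last *valid* step per sequence.
last = nd.SequenceLast(out, sequence_length=valid_len,
                       use_sequence_length=True)

print(masked.shape, last.shape)  # (10, 4, 32) (4, 32)
```

The idea is that the LSTM still runs over the padded steps, but anything computed from them is zeroed (or skipped via `SequenceLast`) before the loss, so no gradient flows back from the padding.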
