RuRo commented on issue #17734: [MXNET-889] Implement ONNX export for gluon LSTM. URL: https://github.com/apache/incubator-mxnet/pull/17734#issuecomment-597648893

@anirudhacharya I haven't implemented any unit tests yet, but the code **is** tested. I've provided an example test case in my original post, using ONNXRuntime as a reference implementation. I've also looked at how the unit tests are implemented for other operators. The `backend_test.py` export test won't work in this case:

1) The current backend unit tests don't actually work for nodes that can **only** be exported from MXNet to ONNX. That is because the current 'strategy' for checking exports is to [first import ONNX -> MXNet and then re-export it back to ONNX](https://github.com/apache/incubator-mxnet/blob/713d962359cfe5b014ca24c0f9b3676d86b28569/tests/python-pytest/onnx/backend.py#L104). Since we don't have an implementation for importing LSTMs from ONNX to MXNet, the *export*-only tests will fail, because the *import* for the LSTM operator is not implemented. Implementing an ONNX->MXNet conversion for the LSTM node is a non-trivial task.

2) This PR implements only a limited subset of the ONNX-supported LSTM configurations, and only a limited subset of the MXNet-supported RNN configurations. Currently, the conversion supports only Gluon LSTM (not arbitrary MXNet RNNs), so even if we could import the LSTM node from ONNX, the current implementation would fail on unsupported configurations. See the "Comments:RNN" section in my original post.

I could add a test case to [test_node.py export_test_cases](https://github.com/apache/incubator-mxnet/blob/713d962359cfe5b014ca24c0f9b3676d86b28569/tests/python-pytest/onnx/test_node.py#L290), but that test only verifies that a valid ONNX graph is exported, not that the conversion is correct.
Any kind of test that involves actually forwarding the LSTM node will **require** either implementing the import from ONNX (to use the official tests) or using some reference implementation (e.g. an LSTM implementation written in numpy, or ONNXRuntime).
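To illustrate the second option, here is a minimal numpy reference sketch of a single-layer, unidirectional LSTM forward pass. It is a hand-rolled illustration, not code from this PR: the function name, shapes, and the assumption of ONNX's `i, o, f, c` gate order and `[Wb, Rb]` bias layout are mine, and it covers none of the optional ONNX LSTM attributes (peepholes, clipping, custom activations, bidirectionality).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_reference(X, W, R, B, h0, c0):
    """Sketch of an ONNX-style LSTM forward pass in numpy.

    Shapes (single direction, with the num_directions axis squeezed out):
      X:  (seq_len, batch, input_size)
      W:  (4*hidden, input_size)   -- input weights, gate order i, o, f, c
      R:  (4*hidden, hidden)       -- recurrent weights, same gate order
      B:  (8*hidden,)              -- [Wb, Rb] biases concatenated
      h0, c0: (batch, hidden)      -- initial hidden and cell state
    Returns (outputs, h_final, c_final).
    """
    seq_len, batch, _ = X.shape
    hidden = R.shape[1]
    Wb, Rb = B[:4 * hidden], B[4 * hidden:]
    h, c = h0.copy(), c0.copy()
    outputs = []
    for t in range(seq_len):
        # All four gates in one affine step: (batch, 4*hidden)
        gates = X[t] @ W.T + h @ R.T + Wb + Rb
        i, o, f, g = np.split(gates, 4, axis=1)  # assumed ONNX order i, o, f, c
        i, o, f = sigmoid(i), sigmoid(o), sigmoid(f)
        g = np.tanh(g)
        c = f * c + i * g          # new cell state
        h = o * np.tanh(c)         # new hidden state
        outputs.append(h)
    return np.stack(outputs), h, c
```

A test could then export a Gluon LSTM to ONNX, run the same inputs through both the exported graph (via ONNXRuntime) and a reference like this, and `np.testing.assert_allclose` the results, which is essentially the check my original post performs with ONNXRuntime alone.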
