zixuanweeei opened a new pull request #18028: [MKLDNN] Support quantized rnn 
towards v1.6.x
URL: https://github.com/apache/incubator-mxnet/pull/18028
 
 
   ## Description ##
   **Mirror PR of #18001, towards the v1.6.x branch**. This PR adds support for the quantization flow of the RNN operator. Currently, only the LSTM mode supports INT8 inference.
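   For context, the core of the INT8 inference flow can be sketched in plain NumPy. The real kernels live in MKL-DNN's C++ code, and the helper name below is illustrative only: quantize FP32 tensors to INT8 with per-tensor scales, run the matrix multiplications in integer arithmetic, and dequantize the INT32 accumulator back to FP32.

   ```python
   import numpy as np

   def quantize_s8(x):
       """Symmetric INT8 quantization: map [-amax, amax] onto [-127, 127]."""
       scale = 127.0 / np.max(np.abs(x))
       q = np.clip(np.round(x * scale), -127, 127).astype(np.int8)
       return q, scale

   # Toy FP32 weight and input standing in for one LSTM gate's GEMM.
   rng = np.random.default_rng(0)
   w = rng.standard_normal((4, 3)).astype(np.float32)
   x = rng.standard_normal((3, 2)).astype(np.float32)

   qw, sw = quantize_s8(w)
   qx, sx = quantize_s8(x)

   # Integer GEMM accumulated in INT32, then dequantized back to FP32.
   acc = qw.astype(np.int32) @ qx.astype(np.int32)
   y = acc.astype(np.float32) / (sw * sx)

   # The dequantized result stays close to the FP32 reference.
   assert np.allclose(y, w @ x, atol=0.2)
   ```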
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, the expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Add the `_contrib_quantized_rnn` op.
   - [x] Add asymmetric quantization via the `_contrib_quantized_asym` op, which quantizes FP32 data to U8 data using a scale and a shift.
   - [x] Add `MXNET_USE_WEIGHT_CACHE` to control the RNN weights-initialization behavior.
   - [x] Support data layouts in NDArrayIter. Previously, NDArrayIter supported only the `NCHW` layout by default, with no way to use other layouts, such as the sequential `TNC` layout. This PR extends NDArrayIter to accept other layouts (assuming that `N` represents the batch axis).
   - [x] Move MKLDNNRnnMemMgr into each individual layer.
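   The asymmetric scale-and-shift math behind the `_contrib_quantized_asym` op can be illustrated with a small NumPy sketch (the op itself is implemented in C++; the helper names here are hypothetical): the minimum of the FP32 data maps to 0 and the maximum maps to 255, so the full U8 range is used even for data that is not centered at zero.

   ```python
   import numpy as np

   def quantize_asym_u8(x):
       """Asymmetric quantization, FP32 -> U8 via scale and shift:
       min(x) maps to 0 and max(x) maps to 255."""
       xmin, xmax = float(x.min()), float(x.max())
       scale = 255.0 / (xmax - xmin)
       shift = -xmin * scale
       q = np.clip(np.round(x * scale + shift), 0, 255).astype(np.uint8)
       return q, scale, shift

   def dequantize_asym_u8(q, scale, shift):
       """Invert the scale-and-shift mapping back to FP32."""
       return (q.astype(np.float32) - shift) / scale

   x = np.array([-1.5, 0.0, 0.25, 2.0], dtype=np.float32)
   q, scale, shift = quantize_asym_u8(x)

   # The extremes of x land exactly on the ends of the U8 range,
   # and the round trip is accurate to within half a quantization step.
   assert q.min() == 0 and q.max() == 255
   assert np.allclose(dequantize_asym_u8(q, scale, shift), x, atol=0.5 / scale)
   ```

   Compared with symmetric S8 quantization, the extra shift avoids wasting half of the integer range on non-negative activations.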
   
   ## Comments ##
   * Performance results will be added to #18001.
   
   @ciyongch @TaoLv @pengzhao-intel 
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
