ZhennanQin commented on a change in pull request #13697: [MKLDNN] Enable signed 
int8 support for convolution.
URL: https://github.com/apache/incubator-mxnet/pull/13697#discussion_r244628274
 
 

 ##########
 File path: example/quantization/imagenet_gen_qsym_mkldnn.py
 ##########
 @@ -140,8 +140,8 @@ def save_params(fname, arg_params, aux_params, 
logger=None):
                              ' thresholds. This mode is expected to produce 
the best inference accuracy of all three'
                              ' kinds of quantized models if the calibration 
dataset is representative enough of the'
                              ' inference dataset.')
-    parser.add_argument('--quantized-dtype', type=str, default='uint8',
-                        choices=['int8', 'uint8'],
+    parser.add_argument('--quantized-dtype', type=str, default='auto',
 
 Review comment:
   @KellenSunderland A deep understanding of MKLDNN quantization is always 
welcome, and it will help us use it better eventually. For your question, the 
answer is simple: MKLDNN transforms data from fp32 to uint8 with a scale and a 
**zero shift**, which means any negative value is cut off to zero; it is 
equivalent to relu + quantize. 
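To make the point concrete, here is a minimal sketch of that behavior (not MKLDNN's actual implementation; the helper `quantize_uint8` and its scale formula are illustrative assumptions): with a zero shift, the fp32 range is mapped onto [0, 255], so every negative input saturates at 0, exactly as if a relu had run first.

```python
import numpy as np

def quantize_uint8(x, data_min, data_max):
    # Hypothetical sketch of fp32 -> uint8 quantization with a zero shift.
    # The scale maps the calibrated fp32 range onto [0, 255]; because the
    # uint8 range has no negative side, negative inputs clamp to 0, which
    # makes the result identical to relu(x) followed by quantization.
    scale = 255.0 / max(abs(data_min), abs(data_max))
    q = np.clip(np.round(x * scale), 0, 255).astype(np.uint8)
    return q, scale

x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0], dtype=np.float32)
q, scale = quantize_uint8(x, x.min(), x.max())
# Every negative entry quantizes to 0 -- the "relu + quantize" equivalence.
```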

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
