KellenSunderland commented on a change in pull request #13697: [MKLDNN] Enable 
signed int8 support for convolution.
URL: https://github.com/apache/incubator-mxnet/pull/13697#discussion_r244646725
 
 

 ##########
 File path: example/quantization/imagenet_gen_qsym_mkldnn.py
 ##########
 @@ -140,8 +140,8 @@ def save_params(fname, arg_params, aux_params, 
logger=None):
                              ' thresholds. This mode is expected to produce 
the best inference accuracy of all three'
                              ' kinds of quantized models if the calibration 
dataset is representative enough of the'
                              ' inference dataset.')
-    parser.add_argument('--quantized-dtype', type=str, default='uint8',
-                        choices=['int8', 'uint8'],
+    parser.add_argument('--quantized-dtype', type=str, default='auto',
 
 Review comment:
  Thanks a ton for the explanation @ZhennanQin. I can see how, from the user's point of view, it makes sense to describe the dtype usage that way. Am I correct in understanding that if you wanted to strictly use uint8 for ResNets, for example, you could first quantize the input to uint8 and then not ignore the first convolution (because your inputs would already be positive)? You'd have to know the proper scale and zero-shift for your input in that case.
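To make the question concrete, here is a minimal sketch of the affine uint8 quantization I have in mind. The function names, the `scale`/`zero_point` parameters, and the value range are illustrative assumptions for this comment, not MXNet or MKLDNN API:

```python
import numpy as np

def quantize_uint8(x, scale, zero_point):
    # Hypothetical affine quantization: q = round(x / scale) + zero_point,
    # clipped to the uint8 range [0, 255]. With a suitable zero_point,
    # signed float inputs map to strictly non-negative uint8 values.
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize_uint8(q, scale, zero_point):
    # Inverse mapping back to float32.
    return (q.astype(np.float32) - zero_point) * scale

# Example: input calibrated to [-1, 1], mapped onto the full uint8 range.
x = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
scale = 2.0 / 255.0   # (max - min) / (2^8 - 1)
zero_point = 128      # shifts 0.0 to roughly mid-range
q = quantize_uint8(x, scale, zero_point)
```

Under these assumptions the first convolution would see only non-negative uint8 inputs, which is what I meant by not needing to exclude it from quantization.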

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
