ZhennanQin commented on a change in pull request #13697: [MKLDNN] Enable signed 
int8 support for convolution.
URL: https://github.com/apache/incubator-mxnet/pull/13697#discussion_r251813404
 
 

 ##########
 File path: python/mxnet/contrib/quantization.py
 ##########
 @@ -80,8 +80,7 @@ def _quantize_params(qsym, params, th_dict):
                 quantized_params[name] = ndarray.array([th_dict[output][1]])
     return quantized_params
 
-def _quantize_symbol(sym, excluded_symbols=None, offline_params=None,
-                     quantized_dtype='int8', calib_quantize_op=False):
+def _quantize_symbol(sym, excluded_symbols=None, offline_params=None, quantized_dtype='int8'):
 
 Review comment:
   The reason is that this API is used for both GPU and CPU, while GPU supports 
`int8` only. The other script is MKLDNN-only, so it can use `auto` as its default.
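
A minimal sketch of the idea behind keeping the default at `int8` (this is not MXNet's actual code; the function name, backend labels, and supported-dtype sets are assumptions for illustration): a shared API can validate `quantized_dtype` against what each backend supports, so the GPU path rejects `auto` while an MKLDNN-only script may pass it.

```python
# Hypothetical sketch, not the real MXNet implementation.
# Assumption (from the review comment): the GPU path supports 'int8' only,
# while the MKLDNN path can additionally accept 'auto'.
SUPPORTED_QUANTIZED_DTYPES = {
    'gpu': {'int8'},
    'mkldnn': {'int8', 'uint8', 'auto'},
}

def check_quantized_dtype(backend, quantized_dtype='int8'):
    """Raise ValueError if the requested dtype is unsupported on `backend`."""
    supported = SUPPORTED_QUANTIZED_DTYPES.get(backend, set())
    if quantized_dtype not in supported:
        raise ValueError(
            'quantized_dtype %r is not supported on %s (supported: %s)'
            % (quantized_dtype, backend, sorted(supported)))
    return quantized_dtype
```

With `int8` as the shared default, callers on either backend pass validation without extra arguments, and only the MKLDNN-specific script needs to opt in to `auto` explicitly.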

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services