ZhennanQin commented on a change in pull request #13697: [MKLDNN] Enable signed int8 support for convolution.
URL: https://github.com/apache/incubator-mxnet/pull/13697#discussion_r253257538
 
 

 ##########
 File path: python/mxnet/contrib/quantization.py
 ##########
 @@ -286,12 +283,12 @@ def _get_optimal_threshold(arr, num_bins=8001, num_quantized_bins=255):
     min_val = np.min(arr)
     max_val = np.max(arr)
     th = max(abs(min_val), abs(max_val))
+    if min_val >= 0:
+        num_quantized_bins = (num_quantized_bins // 2) * 4 + 1
 
 Review comment:
  For the case min_val > 0, uint8 should be used for the quantized data, so we should double num_quantized_bins to make it fit the uint8 range. This is why the entropy method may produce an even worse result than the naive method: it is effectively using only 7 bits of the uint8 range.
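The bin adjustment discussed above can be sketched as follows. This is a minimal illustration of the diff's logic, not the full MXNet `_get_optimal_threshold` implementation; the helper name is hypothetical, and `num_quantized_bins=255` matches the int8 default shown in the hunk header:

```python
import numpy as np

def adjust_num_quantized_bins(arr, num_quantized_bins=255):
    """Sketch (hypothetical helper) of the bin adjustment in the diff.

    When all values are non-negative, the quantized output uses uint8
    rather than int8, so the number of quantized bins is roughly doubled
    (kept odd) to cover the full uint8 range instead of only 7 bits of it.
    """
    min_val = np.min(arr)
    if min_val >= 0:
        # (255 // 2) * 4 + 1 == 509, i.e. roughly double the 255
        # int8-range bins, while keeping the bin count odd so that a
        # bin is centered on zero.
        num_quantized_bins = (num_quantized_bins // 2) * 4 + 1
    return num_quantized_bins

# Non-negative data: bins roughly doubled for the uint8 range.
print(adjust_num_quantized_bins(np.array([0.0, 0.5, 1.0])))   # 509
# Signed data: int8 default is kept unchanged.
print(adjust_num_quantized_bins(np.array([-1.0, 1.0])))       # 255
```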
