OneRaynyDay commented on a change in pull request #11833: [MXNET-688] Fix quantization divide by zero errors
URL: https://github.com/apache/incubator-mxnet/pull/11833#discussion_r203913618
 
 

 ##########
 File path: python/mxnet/contrib/quantization.py
 ##########
 @@ -303,32 +307,34 @@ def _get_optimal_threshold(arr, num_bins=8001, num_quantized_bins=255):
         right_outlier_count = np.sum(hist[p_bin_idx_stop:])
         p[-1] += right_outlier_count
         # is_nonzeros[k] indicates whether hist[k] is nonzero
-        is_nonzeros = (sliced_nd_hist != 0).astype(np.int32)
+        is_nonzeros = (p != 0).astype(np.int32)
 
         # calculate how many bins should be merged to generate quantized distribution q
-        num_merged_bins = p.size // num_quantized_bins
+        num_merged_bins = sliced_nd_hist.size // num_quantized_bins
         # merge hist into num_quantized_bins bins
         for j in range(num_quantized_bins):
             start = j * num_merged_bins
             stop = start + num_merged_bins
             quantized_bins[j] = sliced_nd_hist[start:stop].sum()
         quantized_bins[-1] += sliced_nd_hist[num_quantized_bins * num_merged_bins:].sum()
         # expand quantized_bins into p.size bins
-        q = np.zeros(p.size, dtype=np.float32)
+        q = np.zeros(sliced_nd_hist.size, dtype=np.float32)
         for j in range(num_quantized_bins):
             start = j * num_merged_bins
             if j == num_quantized_bins - 1:
-                stop = -1
+                stop = len(is_nonzeros)
             else:
                 stop = start + num_merged_bins
             norm = is_nonzeros[start:stop].sum()
             if norm != 0:
-                q[start:stop] = float(quantized_bins[j]) / float(norm)
-        q[sliced_nd_hist == 0] = 0
+                q[start:stop] = float(quantized_bins[j]) / float(num_quantized_bins)
         p = _smooth_distribution(p)
-        q = _smooth_distribution(q)
+        # There is a chance that q is a completely zero'd probability distribution.
+        try:
+            q = _smooth_distribution(q)
+        except ValueError:
+            divergence[i - num_half_quantized_bins] = float("inf")
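
 For context, here is a minimal, self-contained sketch of the merge-and-expand step this hunk touches. It is not the quantization.py code: the array size and values are made up, and each merged bin is spread uniformly over its span rather than using the file's nonzero-count weighting.

 ```python
 import numpy as np

 # Hypothetical data; names mirror the patch above for readability only.
 np.random.seed(0)
 sliced_nd_hist = np.random.randint(0, 10, size=1021).astype(np.float32)
 num_quantized_bins = 255
 num_merged_bins = sliced_nd_hist.size // num_quantized_bins  # 1021 // 255 == 4

 # merge the fine-grained histogram into num_quantized_bins coarse bins
 quantized_bins = np.zeros(num_quantized_bins, dtype=np.float32)
 for j in range(num_quantized_bins):
     start = j * num_merged_bins
     quantized_bins[j] = sliced_nd_hist[start:start + num_merged_bins].sum()
 # leftover fine bins are folded into the last coarse bin
 quantized_bins[-1] += sliced_nd_hist[num_quantized_bins * num_merged_bins:].sum()

 # expand back to the original resolution so q can be compared against p
 q = np.zeros(sliced_nd_hist.size, dtype=np.float32)
 for j in range(num_quantized_bins):
     start = j * num_merged_bins
     stop = sliced_nd_hist.size if j == num_quantized_bins - 1 else start + num_merged_bins
     q[start:stop] = quantized_bins[j] / float(stop - start)

 assert np.isclose(q.sum(), sliced_nd_hist.sum())
 ```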
 
 Review comment:
   If the distribution is improper, we set the KL divergence to infinity: in theory it could model a uniform distribution with parameters `[a,b]` where either bound is unbounded, in which case the KL divergence is infinite.
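
As an aside, here is a minimal sketch (not the mxnet code) of why an improper, all-zero `q` is assigned infinite divergence: KL(p || q) already diverges whenever `q` has zero mass on a bin where `p` has mass, and an all-zero `q` cannot even be normalized into a proper distribution, so recording `float("inf")` simply rules that threshold candidate out.

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions; inf when q lacks mass where p has mass."""
    p = p / p.sum()
    if np.any((q == 0) & (p > 0)):
        return float("inf")
    q = q / q.sum()
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = np.array([0.25, 0.25, 0.25, 0.25], dtype=np.float32)
q = np.zeros_like(p)

# An all-zero q cannot be normalized into a distribution, so the candidate is
# rejected by recording an infinite divergence, mirroring the except branch above.
divergence = float("inf") if q.sum() == 0 else kl_divergence(p, q)
print(divergence)  # inf
```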
