reminisce commented on issue #9552: [REQUEST FOR REVIEW | DO NOT MERGE] Model Quantization with Calibration
URL: https://github.com/apache/incubator-mxnet/pull/9552#issuecomment-360964778
 
 
   @jinhuang415 
   1. The parameters are quantized offline, i.e. their min/max values are pre-calculated before inference.
   2. In theory, if the calibration dataset is representative enough of the real inference image sets, using more examples for calibration should lead to less accuracy loss. The purpose of entropy calibration is to keep the accuracy loss stable with respect to the number of calibration examples. The naive calibration approach suffers from the opposite: more calibration examples lead to bigger accuracy loss, as you can see from the trend in the last two tables. My guess is that if the calibration dataset contains examples that are not similar to real inference images, the quantization thresholds may be biased by those examples, resulting in a slight drop in accuracy.
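The difference between the two calibration modes can be sketched as follows. This is a minimal NumPy illustration, not the PR's actual implementation; the function names, bin counts, and the KL-based bin-merging scheme (in the style of TensorRT-like entropy calibration) are my own assumptions for the sketch.

```python
import numpy as np

def naive_threshold(activations):
    # Naive calibration: the threshold is simply the largest absolute value
    # observed, so a single outlier example can stretch the int8 range.
    return np.max(np.abs(activations))

def kl_divergence(p, q):
    # KL(P || Q) over bins where p > 0; empty q-bins get a tiny floor so
    # that clipped mass in p is penalized instead of producing log(0).
    p = p / p.sum()
    q = q / q.sum()
    mask = p > 0
    q = np.where(q > 0, q, 1e-12)
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def entropy_threshold(activations, num_bins=2048, num_quantized_bins=255):
    # Entropy calibration: pick the clipping threshold whose quantized
    # distribution diverges least (in KL) from the original histogram,
    # trading clipped tail mass against quantization coarseness.
    hist, edges = np.histogram(np.abs(activations), bins=num_bins)
    best_kl, best_t = np.inf, edges[-1]
    for i in range(num_quantized_bins, num_bins + 1):
        # Reference distribution P: keep bins [0, i) and fold the clipped
        # tail mass into the last kept bin.
        p = hist[:i].astype(np.float64)
        p[-1] += hist[i:].sum()
        # Candidate distribution Q: merge the i bins down to
        # num_quantized_bins levels, then expand back, spreading each
        # level's mass uniformly over its originally non-empty bins.
        sliced = hist[:i].astype(np.float64)
        q = np.zeros(i)
        for chunk in np.array_split(np.arange(i), num_quantized_bins):
            nonzero = chunk[sliced[chunk] != 0]
            if len(nonzero):
                q[nonzero] = sliced[chunk].sum() / len(nonzero)
        if q.sum() == 0:
            continue
        kl = kl_divergence(p, q)
        if kl < best_kl:
            best_kl, best_t = kl, edges[i]
    return best_t
```

With a calibration set containing one extreme outlier, the naive threshold jumps to the outlier value while the entropy threshold stays near the bulk of the distribution, which is consistent with the trend described above where adding unrepresentative examples hurts naive calibration more.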

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
