reminisce commented on issue #9552: [REQUEST FOR REVIEW | DO NOT MERGE] Model Quantization with Calibration
URL: https://github.com/apache/incubator-mxnet/pull/9552#issuecomment-360964981
 
 
   @marcoabreu The optimal threshold values are determined by the calibration 
dataset, so they are independent of the platform. As long as a platform 
supports basic int8 addition and multiplication, it can run quantized models, 
although dedicated int8 operators would still need to be written for each 
specific platform. The current implementation only works on Nvidia GPUs that 
support the dp4a instruction.
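To illustrate the idea, here is a minimal sketch of calibration-based int8 quantization in NumPy. It uses the simplest possible calibration rule (the max absolute value seen over the calibration batches); the function names and the synthetic calibration data are hypothetical, purely for illustration, and the actual PR may use a more sophisticated threshold-selection scheme (e.g. entropy-based):

```python
import numpy as np

def calibrate_threshold(calib_batches):
    # Naive calibration: largest absolute activation value observed
    # across the calibration dataset. Real implementations may pick a
    # tighter threshold (e.g. by minimizing KL divergence).
    return max(float(np.abs(b).max()) for b in calib_batches)

def quantize_int8(x, threshold):
    # Map [-threshold, threshold] linearly onto the int8 range [-127, 127].
    scale = 127.0 / threshold
    q = np.clip(np.round(x * scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximate float32 tensor from the int8 values.
    return q.astype(np.float32) / scale

# Hypothetical calibration data standing in for real layer activations.
rng = np.random.default_rng(0)
calib = [rng.normal(size=(4, 8)).astype(np.float32) for _ in range(10)]

t = calibrate_threshold(calib)
x = calib[0]
q, scale = quantize_int8(x, t)
x_hat = dequantize(q, scale)
```

Because the threshold is computed purely from the calibration data, the resulting int8 model can run on any hardware with int8 add/multiply support; only the kernels executing the quantized operators are platform-specific.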

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
