MXNet 1.3.0 supports TensorRT for inference, but I cannot find any tutorials or examples showing how to run inference on a model in FP16 or INT8 mode. Is there a way to do this?
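
For reference, below is a minimal sketch of the FP32 TensorRT path that the 1.3 contrib API does expose (`MXNET_USE_TENSORRT` plus `mx.contrib.tensorrt.tensorrt_bind`). The `resnet18_v2` checkpoint prefix and the input shape are placeholders for illustration; substitute your own model files. I don't see where a precision flag would go in this flow:

```python
import os
import mxnet as mx
import numpy as np

# Enable the experimental TensorRT graph pass before binding the symbol.
os.environ['MXNET_USE_TENSORRT'] = '1'

batch_shape = (1, 3, 224, 224)

# Hypothetical checkpoint prefix/epoch -- substitute your own model files.
sym, arg_params, aux_params = mx.model.load_checkpoint('resnet18_v2', 0)

# tensorrt_bind expects all parameters in a single dict, resident on the GPU.
all_params = {k: v.as_in_context(mx.gpu(0)) for k, v in arg_params.items()}
all_params.update({k: v.as_in_context(mx.gpu(0)) for k, v in aux_params.items()})

executor = mx.contrib.tensorrt.tensorrt_bind(
    sym,
    ctx=mx.gpu(0),
    all_params=all_params,
    data=batch_shape,
    grad_req='null',
    force_rebind=True,
)

# Forward pass: TensorRT executes the converted subgraphs, but in FP32.
x = mx.nd.array(np.random.rand(*batch_shape), ctx=mx.gpu(0))
out = executor.forward(is_train=False, data=x)[0]
print(out.shape)
# No visible FP16/INT8 switch here -- nothing that maps to TensorRT's
# own builder precision settings (fp16_mode / int8_mode).
```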

[ Full content available at: https://github.com/apache/incubator-mxnet/issues/12575 ]