anirudh2290 commented on issue #14584: Conversion from FP32 to Mixed Precision Models URL: https://github.com/apache/incubator-mxnet/issues/14584#issuecomment-487545716 @AnaRhisT94 Yes, you can do that if you want to run your entire model in FP16 precision: cast your inputs to FP16 and cast your params to FP16. The FP16 tutorial shows how to do this with the Gluon API: http://mxnet.incubator.apache.org/versions/master/faq/float16.html?highlight=mixed#using-the-gluon-api . This particular AMP feature helps in situations where you want to run specific layers in FP16 while keeping others, like softmax, in FP32, and where you want to select which layers run in FP16 versus FP32.
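The distinction above can be sketched in plain NumPy (used here only for illustration; in Gluon the corresponding calls would be `net.cast('float16')` for the params and `data.astype('float16')` for the inputs, as the linked tutorial shows). The sketch runs a dense layer in FP16 but keeps the softmax in FP32, which is the kind of per-layer split AMP automates:

```python
import numpy as np

def dense_fp16(x, w, b):
    # Compute-heavy matmul in half precision (the FP16 side of the split).
    return x.astype(np.float16) @ w.astype(np.float16) + b.astype(np.float16)

def softmax_fp32(logits):
    # Keep the exp/sum reduction in FP32, since it is numerically
    # sensitive in half precision -- the reason AMP leaves layers
    # like softmax in full precision.
    z = logits.astype(np.float32)
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4)).astype(np.float32)
w = rng.standard_normal((4, 3)).astype(np.float32)
b = np.zeros(3, dtype=np.float32)

logits = dense_fp16(x, w, b)   # runs in FP16
probs = softmax_fp32(logits)   # runs in FP32
```

Running the whole model in FP16 (the first option above) would simply mean applying the cast to every layer and input instead of selecting per layer.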
