ptrendx commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r365376274
 
 

 ##########
 File path: python/mxnet/contrib/amp/amp.py
 ##########
 @@ -43,14 +44,17 @@
 from ... import optimizer as opt
 from .loss_scaler import LossScaler
 
+bfloat16 = np.dtype([('bfloat16', np.uint16)])
 
 Review comment:
   Can we have this dtype accessible (as `mx.bfloat16` or something similar)?
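  A minimal sketch of what the added line defines, runnable standalone: since NumPy has no native bfloat16 type, the diff represents it as a structured dtype wrapping a `uint16` (the 16-bit storage of a bfloat16 value). The `mx.bfloat16` alias suggested above is hypothetical and not part of the diff.

  ```python
  import numpy as np

  # bfloat16 represented as a structured NumPy dtype over uint16 storage,
  # as in the diff above; NumPy itself has no bfloat16 scalar type.
  bfloat16 = np.dtype([('bfloat16', np.uint16)])

  # The wrapped storage is 2 bytes wide, matching bfloat16's 16-bit layout.
  print(bfloat16.itemsize)  # 2
  print(bfloat16.names)     # ('bfloat16',)
  ```

  Exposing this object at a stable location (e.g. the suggested `mx.bfloat16`) would let user code pass it as a `dtype` argument without importing from `mxnet.contrib.amp` internals.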

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
