Rainweic commented on issue #17164: net.Cast("float16") doesn't work: Check 
failed: (*in_type)[i] == dtype_param (2 vs. 0) : This layer requires uniform 
type. Expected 'float32' v.s. given 'float16' at 'gamma'
URL: 
https://github.com/apache/incubator-mxnet/issues/17164#issuecomment-570436634
 
 
   > This error actually seems to come from the fact that the `SymbolBlock` 
does not really know which symbols are inside it, so `net.cast('float16')` 
casts all of its parameters. However, the BatchNorm layer is slightly special: 
it requires its parameters `gamma` and `beta` to stay `float32` even when the 
input is `float16`, so as not to lose precision. The BatchNorm layer therefore 
expects `gamma` in `float32`, but the parameter handed to it is `float16`. 
(A workaround sketch follows after this quote.)
   > 
   > @Rainweic I would recommend looking into AMP for training: 
https://mxnet.apache.org/api/python/docs/tutorials/performance/backend/amp.html
   > 
   > @zhreshold FYI
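   
   For reference, here is a minimal workaround sketch based on the explanation above (not verified against this exact issue): cast the whole `SymbolBlock` to `float16`, then cast the BatchNorm parameters back to `float32`, the dtype the operator expects. The file names and the parameter-name suffixes are assumptions about the model; adjust them to your network.
   
   ```python
   import mxnet as mx
   
   # 'model-symbol.json' / 'model-0000.params' are placeholder file names.
   net = mx.gluon.SymbolBlock.imports('model-symbol.json', ['data'],
                                      'model-0000.params')
   
   # Cast everything to float16 first ...
   net.cast('float16')
   
   # ... then cast the BatchNorm parameters back to float32.
   for name, param in net.collect_params().items():
       # Conventional BatchNorm parameter-name suffixes; adjust to your model.
       if name.endswith(('gamma', 'beta', 'moving_mean', 'moving_var',
                         'running_mean', 'running_var')):
           param.cast('float32')
   ```
   
   Newer MXNet versions also expose `amp.convert_model` / `amp.convert_hybrid_block` in `mxnet.contrib.amp` for inference-time float16 conversion, which handle BatchNorm for you; if your version has them, they are likely the cleaner option.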
   
   
   Thank you! Let me try it.
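   
   For anyone landing here later, a minimal sketch of the AMP training flow from the linked tutorial (the model, shapes, and hyperparameters below are placeholders): with AMP, the float16 casts, loss scaling, and the float32 handling of BatchNorm are managed automatically, so no manual casting is needed.
   
   ```python
   import mxnet as mx
   from mxnet import autograd, gluon
   from mxnet.contrib import amp
   
   # Enable AMP before building the network so the float16 casts
   # are inserted automatically (BatchNorm stays in float32).
   amp.init()
   
   ctx = mx.gpu(0)  # float16 training needs a GPU
   net = gluon.model_zoo.vision.resnet18_v1(classes=10)
   net.initialize(ctx=ctx)
   net.hybridize()
   
   trainer = gluon.Trainer(net.collect_params(), 'sgd',
                           {'learning_rate': 0.01})
   amp.init_trainer(trainer)  # enables dynamic loss scaling
   
   loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
   data = mx.nd.random.uniform(shape=(8, 3, 224, 224), ctx=ctx)
   label = mx.nd.zeros((8,), ctx=ctx)
   
   with autograd.record():
       loss = loss_fn(net(data), label)
       # Scale the loss so float16 gradients do not underflow.
       with amp.scale_loss(loss, trainer) as scaled_loss:
           autograd.backward(scaled_loss)
   trainer.step(data.shape[0])
   ```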
