Rainweic commented on issue #17164: net.Cast("float16") doesn't work: Check failed: (*in_type)[i] == dtype_param (2 vs. 0) : This layer requires uniform type. Expected 'float32' v.s. given 'float16' at 'gamma'
URL: https://github.com/apache/incubator-mxnet/issues/17164#issuecomment-569760768
 
 
   > I'm having a similar issue, but I guess this explains why the cast has no effect on the BatchNorm layer?
   > 
https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/gluon/nn/basic_layers.py#L359-L362
   > 
   > ```
   > class BatchNorm(HybridBlock):
   >     ....
   >     def cast(self, dtype):
   >         if np.dtype(dtype).name == 'float16':
   >             dtype = 'float32'
   >         super(BatchNorm, self).cast(dtype)
   > ```
   > 
   > so 'gamma' is still in float32, while the input is in float16, which 
causes the check failure.
   
   Sis, why did they set it up like this? I'm totally confused. Doesn't this mean BatchNorm can't be cast to fp16? How am I supposed to train and convert the model then...
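   For reference, a minimal sketch of what the quoted override means in practice (assuming an MXNet 1.x Gluon setup; the tiny net below is only for illustration): after net.cast('float16'), the Dense parameters become float16 while the BatchNorm parameters are still reported as float32.

   ```
   from mxnet.gluon import nn

   # Toy net just for illustration: one Dense layer followed by BatchNorm.
   net = nn.HybridSequential()
   net.add(nn.Dense(16), nn.BatchNorm())
   net.initialize()

   # Cast the whole net to float16.
   net.cast('float16')

   # Dense weight/bias are now float16, but BatchNorm gamma/beta and the
   # running mean/var stay float32 because of the cast() override quoted above.
   for name, param in net.collect_params().items():
       print(name, param.dtype)
   ```

   Keeping BatchNorm's scale/shift and running statistics in float32 is a common mixed-precision practice for numerical stability, which seems to be the reason for the override.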
