sxjscience commented on issue #17703: Need to use safe accumulation for 
calculating the gradient of Embedding + Take
URL: 
https://github.com/apache/incubator-mxnet/issues/17703#issuecomment-591828363
 
 
   The simplest fix is to revise the kernel to use safe accumulation, which means 
casting float16 values to float32 before accumulating. I also suggest that we 
turn on `MXNET_SAFE_ACCUMULATION` by default for float16 in 1.7 (changing the 
current default behavior) so that float16 is accumulated via float32.
   
   I think we should use the following approach for summing up a sequence of 
float16 numbers:
   1) load them as `half2`,
   2) cast the `half2` values to `float2`,
   3) accumulate the numbers in `float2`.
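The precision problem motivating this is easy to reproduce outside the kernel: when many small float16 values are accumulated in float16, the running sum eventually stops growing because each addend falls below half an ulp of the partial sum, while casting to float32 before adding keeps the result accurate. A minimal NumPy sketch of the cast-then-accumulate pattern (the values are illustrative, not taken from the issue):

```python
import numpy as np

# 10,000 small float16 values; the true sum is about 100.
x = np.full(10000, 0.01, dtype=np.float16)

# Naive float16 accumulation: once the partial sum reaches ~32, the
# addend (~0.01) falls below half a float16 ulp and the sum stalls.
naive = np.float16(0.0)
for v in x:
    naive = np.float16(naive + v)

# "Safe accumulation": cast each float16 value to float32 before
# adding -- the same cast-before-accumulate idea proposed above.
safe = np.float32(0.0)
for v in x:
    safe += np.float32(v)

print(naive)  # stalls far below the true sum
print(safe)   # close to 100
```

The same effect is why accumulating the gradient of Embedding/Take in float16 silently loses mass when many rows map to the same embedding index.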

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
