PawelGlomski-Intel opened a new pull request #20983: URL: https://github.com/apache/incubator-mxnet/pull/20983
### Changes
- [ ] Fix `asnumpy()` for the bfloat16 dtype (see the sketch after this list)
- [ ] Relax some AMP assumptions and add more checks
- [ ] Fix an issue with ops from the [`LP16_FP32_FUNCS`](https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/amp/lists/symbol_bf16.py#L36) list, where the order in which nodes were visited during the graph pass could affect whether an op was converted to low precision
- [ ] Enable bf16 input for calibrated `quantize_v2` ops (removes the need for an `amp_cast` before `quantize_v2` in quantized models)
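
A minimal sketch of the `asnumpy()` behavior this PR targets, assuming an MXNet build with bfloat16 support (e.g. oneDNN-enabled) where `astype('bfloat16')` is available; since NumPy has no native bfloat16 type, an upcast to float32 on conversion is assumed here:

```python
import mxnet as mx

# Create an ndarray and cast it to bfloat16
# (assumes a build where the bfloat16 dtype is supported).
x = mx.nd.arange(6).reshape((2, 3)).astype('bfloat16')

# Previously, asnumpy() could fail or misinterpret the raw bfloat16 bits;
# with the fix it should return a regular NumPy array (assumed float32,
# since NumPy has no bfloat16 dtype).
y = x.asnumpy()
print(y, y.dtype)
```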
