anko-intel commented on code in PR #21002:
URL: https://github.com/apache/incubator-mxnet/pull/21002#discussion_r918884081
##########
src/operator/nn/dnnl/dnnl_fully_connected-inl.h:
##########
@@ -100,7 +94,7 @@ class FCInputIndex {
const bool has_bias = !full_param.default_param.no_bias;
const bool quantized = dnnl_param.quantized;
const bool sum_input_quantized =
- quantized && dnnl_param.with_sum && !dnnl_param.enable_float_output;
+ quantized && dnnl_param.with_sum &&
!dnnl_param.enabled_float_output.has_value();
Review Comment:
Is it enough? Should it be:
( dnnl_param.enabled_float_output.has_value() &&
dnnl_param.enabled_float_output.value() == mshadow::kFloat32)
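For illustration, the difference between the two checks could be sketched like this (a minimal sketch using `std::optional` as a stand-in for the param's optional type; the function names and dtype flag values here are hypothetical):

```cpp
#include <cassert>
#include <optional>

constexpr int kFloat32 = 0;  // stand-in for mshadow::kFloat32
constexpr int kFloat16 = 2;  // hypothetical other float dtype flag

// Check as written in the diff: the sum input counts as quantized
// whenever no float output type is requested at all.
bool SumQuantizedCurrent(bool quantized, bool with_sum,
                         std::optional<int> enabled_float_output) {
  return quantized && with_sum && !enabled_float_output.has_value();
}

// Reviewer's alternative reading: reject only an explicitly requested
// float32 output, so a non-float32 request would still count.
bool SumQuantizedProposed(bool quantized, bool with_sum,
                          std::optional<int> enabled_float_output) {
  return quantized && with_sum &&
         !(enabled_float_output.has_value() &&
           enabled_float_output.value() == kFloat32);
}
```

The two variants differ only when a float output other than float32 is requested: the current check then yields false, the proposed one true.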
##########
3rdparty/mshadow/mshadow/base.h:
##########
@@ -769,6 +769,10 @@ namespace isnan_typed {
MSHADOW_XINLINE bool IsNan(volatile mshadow::half::half_t val) {
return (val.half_ & (~MSHADOW_HALF_SIGN_BIT)) > MSHADOW_HALF_EXPONENT_BITS;
}
+ template <>
+ MSHADOW_XINLINE bool IsNan(volatile mshadow::bfloat::bf16_t val) {
+ return (val.bf16_ & (~MSHADOW_BF16_SIGN_BIT)) > MSHADOW_BF16_EXPONENT_BITS;
Review Comment:
I wonder if converting to float and using isnan() could be faster?
Probably not, as I am not aware of any CPU instruction that checks against NaN.
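For comparison, the two approaches could be sketched as follows (a minimal sketch; the constants mirror the bfloat16 layout of 1 sign bit, 8 exponent bits, 7 mantissa bits, and the function names are hypothetical stand-ins for the mshadow code):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <cstring>

constexpr uint16_t kBf16SignBit = 0x8000;       // stand-in for MSHADOW_BF16_SIGN_BIT
constexpr uint16_t kBf16ExponentBits = 0x7F80;  // stand-in for MSHADOW_BF16_EXPONENT_BITS

// Bit-pattern check, mirroring the diff: NaN iff the exponent is all
// ones and the mantissa is non-zero, i.e. (bits & ~sign) > exp mask.
bool IsNanBits(uint16_t bf16_bits) {
  return (bf16_bits & static_cast<uint16_t>(~kBf16SignBit)) > kBf16ExponentBits;
}

// Alternative mentioned above: widen bf16 to float32 (bf16 is the upper
// 16 bits of a float32) and use std::isnan.
bool IsNanViaFloat(uint16_t bf16_bits) {
  uint32_t f32_bits = static_cast<uint32_t>(bf16_bits) << 16;
  float f;
  std::memcpy(&f, &f32_bits, sizeof(f));
  return std::isnan(f);
}
```

Both agree on NaN, infinity, and ordinary values; the bit check avoids the widening step entirely, which supports the guess that the conversion would not be faster.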
##########
3rdparty/mshadow/mshadow/base.h:
##########
@@ -409,7 +409,7 @@ struct DataType<half::half_t> {
#endif
#endif
};
-template<>
+template <>
Review Comment:
We could add the space between "template" and "<>" throughout the whole file to be consistent.
##########
src/common/utils.h:
##########
@@ -929,11 +929,8 @@ inline mxnet::TShape CanonicalizeAxes(const mxnet::TShape&
src) {
}
inline bool is_float(const int dtype) {
Review Comment:
Maybe we can change the name to is_floating to emphasize that it does not answer for
a particular type like float, but for the common floating-point group of data
representations.
```suggestion
inline bool is_floating(const int dtype) {
```
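To illustrate the naming point, a sketch of the renamed helper might look like this (the dtype flag values below are hypothetical stand-ins for the mshadow type flags, and the exact set of types covered is an assumption):

```cpp
#include <cassert>

// Hypothetical stand-ins for the mshadow dtype flags.
enum TypeFlag {
  kFloat32 = 0,
  kFloat64 = 1,
  kFloat16 = 2,
  kUint8 = 3,
  kInt32 = 4,
  kBfloat16 = 12
};

// True for any member of the floating-point group, not just the
// 32-bit "float" type, which the name is_floating makes explicit.
inline bool is_floating(const int dtype) {
  return dtype == kFloat32 || dtype == kFloat64 || dtype == kFloat16 ||
         dtype == kBfloat16;
}
```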
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]