[GitHub] cjolivier01 commented on a change in pull request #10078: Support float16 in L2Normalization operator

2018-03-12 Thread GitBox
cjolivier01 commented on a change in pull request #10078: Support float16 in 
L2Normalization operator
URL: https://github.com/apache/incubator-mxnet/pull/10078#discussion_r174012691
 
 

 ##
 File path: src/operator/l2_normalization.cc
 ##
 @@ -26,13 +26,22 @@
 namespace mxnet {
 namespace op {
 template<>
-Operator* CreateOp<cpu>(L2NormalizationParam param) {
-  return new L2NormalizationOp<cpu>(param);
+Operator* CreateOp<cpu>(L2NormalizationParam param, int dtype) {
 
 Review comment:
   is it done this way elsewhere?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] cjolivier01 commented on a change in pull request #10078: Support float16 in L2Normalization operator

2018-03-12 Thread GitBox
cjolivier01 commented on a change in pull request #10078: Support float16 in 
L2Normalization operator
URL: https://github.com/apache/incubator-mxnet/pull/10078#discussion_r173945220
 
 

 ##
 File path: src/operator/l2_normalization.cc
 ##
 @@ -26,13 +26,18 @@
 namespace mxnet {
 namespace op {
 template<>
-Operator* CreateOp<cpu>(L2NormalizationParam param) {
-  return new L2NormalizationOp<cpu>(param);
+Operator* CreateOp<cpu>(L2NormalizationParam param, int dtype) {
+  Operator* op = NULL;
+  MSHADOW_REAL_TYPE_SWITCH(dtype, DType, {
+    op = new L2NormalizationOp<cpu, DType>(param);
+  });
+  return op;
 }
 
 // DO_BIND_DISPATCH comes from static_operator_common.h
-Operator* L2NormalizationProp::CreateOperator(Context ctx) const {
-  DO_BIND_DISPATCH(CreateOp, param_);
+Operator* L2NormalizationProp::CreateOperatorEx(Context ctx, std::vector<TShape> *in_shape,
+                                                std::vector<int> *in_type) const {
+  DO_BIND_DISPATCH(CreateOp, param_, in_type->at(0));
 
 Review comment:
   I am not saying you need to change it, but if that were the case, you 
wouldn't have to override CreateOpEx(), which has nontrivial logic.




[GitHub] cjolivier01 commented on a change in pull request #10078: Support float16 in L2Normalization operator

2018-03-12 Thread GitBox
cjolivier01 commented on a change in pull request #10078: Support float16 in 
L2Normalization operator
URL: https://github.com/apache/incubator-mxnet/pull/10078#discussion_r173944782
 
 

 ##
 File path: src/operator/l2_normalization.cc
 ##
 @@ -26,13 +26,18 @@
 namespace mxnet {
 namespace op {
 template<>
-Operator* CreateOp<cpu>(L2NormalizationParam param) {
-  return new L2NormalizationOp<cpu>(param);
+Operator* CreateOp<cpu>(L2NormalizationParam param, int dtype) {
+  Operator* op = NULL;
+  MSHADOW_REAL_TYPE_SWITCH(dtype, DType, {
+    op = new L2NormalizationOp<cpu, DType>(param);
+  });
+  return op;
 }
 
 // DO_BIND_DISPATCH comes from static_operator_common.h
-Operator* L2NormalizationProp::CreateOperator(Context ctx) const {
-  DO_BIND_DISPATCH(CreateOp, param_);
+Operator* L2NormalizationProp::CreateOperatorEx(Context ctx, std::vector<TShape> *in_shape,
+                                                std::vector<int> *in_type) const {
+  DO_BIND_DISPATCH(CreateOp, param_, in_type->at(0));
 
 Review comment:
   Just FYI, usually, DType is determined within the Forward() and Backward() 
functions using the type switch from the actual input blob at runtime.
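   For illustration, a minimal, hypothetical sketch of that pattern (not code
   from this PR; the compute body is elided): Forward() inside an Operator
   subclass switches on the input TBlob's type_flag_ at runtime, so the
   operator can be constructed without knowing the dtype up front.

   void Forward(const OpContext &ctx,
                const std::vector<TBlob> &in_data,
                const std::vector<OpReqType> &req,
                const std::vector<TBlob> &out_data,
                const std::vector<TBlob> &aux_args) override {
     using namespace mshadow;
     Stream<cpu> *s = ctx.get_stream<cpu>();
     // Pick DType from the actual input blob rather than from a stored dtype.
     MSHADOW_REAL_TYPE_SWITCH(in_data[0].type_flag_, DType, {
       Tensor<cpu, 2, DType> data = in_data[0].FlatTo2D<cpu, DType>(s);
       Tensor<cpu, 2, DType> out = out_data[0].FlatTo2D<cpu, DType>(s);
       // ... compute the L2 normalization from `data` into `out` here ...
     });
   }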




[GitHub] cjolivier01 commented on a change in pull request #10078: Support float16 in L2Normalization operator

2018-03-12 Thread GitBox
cjolivier01 commented on a change in pull request #10078: Support float16 in 
L2Normalization operator
URL: https://github.com/apache/incubator-mxnet/pull/10078#discussion_r173941304
 
 

 ##
 File path: src/operator/l2_normalization.cc
 ##
 @@ -26,13 +26,18 @@
 namespace mxnet {
 namespace op {
 template<>
-Operator* CreateOp<cpu>(L2NormalizationParam param) {
-  return new L2NormalizationOp<cpu>(param);
+Operator* CreateOp<cpu>(L2NormalizationParam param, int dtype) {
+  Operator* op = NULL;
+  MSHADOW_REAL_TYPE_SWITCH(dtype, DType, {
+    op = new L2NormalizationOp<cpu, DType>(param);
+  });
+  return op;
 }
 
 // DO_BIND_DISPATCH comes from static_operator_common.h
-Operator* L2NormalizationProp::CreateOperator(Context ctx) const {
-  DO_BIND_DISPATCH(CreateOp, param_);
+Operator* L2NormalizationProp::CreateOperatorEx(Context ctx, std::vector<TShape> *in_shape,
+                                                std::vector<int> *in_type) const {
+  DO_BIND_DISPATCH(CreateOp, param_, in_type->at(0));
 
 Review comment:
   Since you're overriding CreateOperatorEx(), what ends up calling 
InferShape() and InferType(), which are normally called by the base class' 
CreateOperatorEx()?
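   For context, a rough paraphrase (not the verbatim source) of what the base
   class' default CreateOperatorEx() does, which is why an override that skips
   it has to run the inference itself:

   // Paraphrased sketch of OperatorProperty's default behaviour: infer types
   // and shapes first, then delegate operator construction to CreateOperator().
   Operator* CreateOperatorEx(Context ctx, std::vector<TShape> *in_shape,
                              std::vector<int> *in_type) const {
     std::vector<int> out_type, aux_type;
     std::vector<TShape> out_shape, aux_shape;
     CHECK(InferType(in_type, &out_type, &aux_type));
     CHECK(InferShape(in_shape, &out_shape, &aux_shape));
     return CreateOperator(ctx);
   }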




[GitHub] cjolivier01 commented on a change in pull request #10078: Support float16 in L2Normalization operator

2018-03-12 Thread GitBox
cjolivier01 commented on a change in pull request #10078: Support float16 in 
L2Normalization operator
URL: https://github.com/apache/incubator-mxnet/pull/10078#discussion_r173941034
 
 

 ##
 File path: src/operator/l2_normalization-inl.h
 ##
 @@ -294,7 +321,13 @@ class L2NormalizationProp : public OperatorProperty {
     return {ResourceRequest::kTempSpace};
   }
 
-  Operator* CreateOperator(Context ctx) const override;
+  Operator* CreateOperator(Context ctx) const override {
 
 Review comment:
   Ok, I see it is masked by your override of CreateOperatorEx()
   




[GitHub] cjolivier01 commented on a change in pull request #10078: Support float16 in L2Normalization operator

2018-03-12 Thread GitBox
cjolivier01 commented on a change in pull request #10078: Support float16 in 
L2Normalization operator
URL: https://github.com/apache/incubator-mxnet/pull/10078#discussion_r173936443
 
 

 ##
 File path: src/operator/l2_normalization-inl.h
 ##
 @@ -294,7 +321,13 @@ class L2NormalizationProp : public OperatorProperty {
     return {ResourceRequest::kTempSpace};
   }
 
-  Operator* CreateOperator(Context ctx) const override;
+  Operator* CreateOperator(Context ctx) const override {
 
 Review comment:
   Does something still call this?

