DickJC123 commented on a change in pull request #13749: Add NHWC layout support 
to Pooling (cpu, gpu cuda, gpu cuDNN)
URL: https://github.com/apache/incubator-mxnet/pull/13749#discussion_r256230399
 
 

 ##########
 File path: src/operator/nn/pooling.cc
 ##########
 @@ -421,11 +463,16 @@ NNVM_REGISTER_OP(_backward_Pooling)
 .set_attr<nnvm::FInplaceOption>(
     "FInplaceOption",
     [](const NodeAttrs &attrs) {
-#if MXNET_USE_CUDNN == 1
-  return std::vector<std::pair<int, int> >();
-#else
-  return std::vector<std::pair<int, int> >{{1, 0}};
+#if MXNET_USE_MKLDNN == 1 && MXNET_USE_CUDA == 0 && MXNET_USE_CUDNN == 0
 
 Review comment:
  I originally set out to create a test case (and separate issue) to 
demonstrate the need for the restrictive #if clause I put around the MKLDNN 
FInplaceOption.  I was unable to do so, and in the process I came to realize 
that I had misunderstood the meaning of the {1, 0} option.  In fact, it allows 
the framework to share the backward '1' input (the unused workspace gradient) 
with the backward '0' output (the pooling input gradient) when the additional 
MKLDNN workspace is present.  This sharing is compatible with the CUDA and 
cuDNN pooling implementations, which do not use a workspace or its gradient.  
I'm therefore reverting to a simple code guard around the MKLDNN 
FInplaceOption, one whose logic mirrors the other MKLDNN/no-MKLDNN switches of 
behavior within nn/pooling.cc.  No new issue is required.  With my last commit 
I've also expanded the pooling tests to cover cases that use the in-place 
tensor sharing, which requires the pooling input to be the same size as the 
output (and hence also the workspace).
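  To illustrate the semantics discussed above, here is a standalone C++ sketch 
of the FInplaceOption logic (not the actual NNVM registration; the function 
name and the runtime flag standing in for the MXNET_USE_MKLDNN build switch 
are hypothetical).  Each {in, out} pair tells the framework that a backward 
input may share storage with a backward output:

```cpp
#include <utility>
#include <vector>

// Hypothetical stand-in for the compile-time MKLDNN guard: the real code uses
// an #if on MXNET_USE_MKLDNN inside the FInplaceOption lambda.
std::vector<std::pair<int, int>> PoolingBackwardInplaceOption(bool use_mkldnn) {
  if (use_mkldnn) {
    // With MKLDNN, backward input 1 (the unused workspace gradient) may share
    // storage with backward output 0 (the pooling input gradient).
    return {{1, 0}};
  }
  // Without MKLDNN there is no workspace input, so no in-place pairs apply.
  return {};
}
```

  The sharing is only legal when the two tensors have identical sizes, which 
is why the expanded tests use pooling configurations whose input and output 
shapes match.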

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
