KellenSunderland opened a new issue #14684: When setting MXNET_CUDA_TENSOR_OP_MATH_ALLOW_CONVERSION no speedup observed
URL: https://github.com/apache/incubator-mxnet/issues/14684
 
 
   I did a quick debugging session to figure out why I don't see any speedup when enabling MXNET_CUDA_TENSOR_OP_MATH_ALLOW_CONVERSION.
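   For reference, the setup can be reproduced with the flag plus cuDNN's standard API-logging switches (CUDNN_LOGINFO_DBG / CUDNN_LOGDEST_DBG, from the cuDNN developer guide):

```shell
# Flag under test: allow FP32 convolutions to be converted for Tensor Core math
export MXNET_CUDA_TENSOR_OP_MATH_ALLOW_CONVERSION=1
# Standard cuDNN API-logging switches: print the I!/i! call traces to stdout
export CUDNN_LOGINFO_DBG=1
export CUDNN_LOGDEST_DBG=stdout
```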
   
   Interesting observations so far:
   With cuDNN API logging enabled, I repeatedly see forward calls issued with the default math type. Stepping through in a debugger, I have verified that we do set the correct mathType during cuDNN convolution setup here: 
https://github.com/apache/incubator-mxnet/blob/master/src/operator/nn/cudnn/cudnn_convolution-inl.h#L588
   
   Yet when running a quick inference sample from the resnet50v2 model zoo, I get many outputs like this:  
   ```
   I! CuDNN (v7301) function cudnnConvolutionForward() called:
   i!     handle: type=cudnnHandle_t; streamId=0x7feef87f4260;
   i!     alpha: type=CUDNN_DATA_FLOAT; val=1.000000;
   i!     xDesc: type=cudnnTensorDescriptor_t:
   i!         dataType: type=cudnnDataType_t; val=CUDNN_DATA_FLOAT (0);
   i!         nbDims: type=int; val=4;
   i!         dimA: type=int; val=[32,512,15,20];
   i!         strideA: type=int; val=[153600,300,20,1];
   i!     xData: location=dev; addr=0x7fed72000000;
   i!     wDesc: type=cudnnFilterDescriptor_t:
   i!         dataType: type=cudnnDataType_t; val=CUDNN_DATA_FLOAT (0);
   i!         vect: type=int; val=0;
   i!         nbDims: type=int; val=4;
   i!         dimA: type=int; val=[2048,512,1,1];
   i!         format: type=cudnnTensorFormat_t; val=CUDNN_TENSOR_NCHW (0);
   i!     wData: location=dev; addr=0x7fee01c00000;
   i!     convDesc: type=cudnnConvolutionDescriptor_t:
   i!         mode: type=cudnnConvolutionMode_t; val=CUDNN_CROSS_CORRELATION (1);
   i!         dataType: type=cudnnDataType_t; val=CUDNN_DATA_FLOAT (0);
   i!         mathType: type=cudnnMathType_t; val=CUDNN_DEFAULT_MATH (0);
   i!         arrayLength: type=int; val=2;
   i!         padA: type=int; val=[0,0];
   i!         strideA: type=int; val=[1,1];
   i!         dilationA: type=int; val=[1,1];
   i!         groupCount: type=int; val=1;
   i!     algo: type=cudnnConvolutionFwdAlgo_t; val=CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_PRECOMP_GEMM (1);
   i!     workSpace: location=dev; addr=0x7fed76000000;
   i!     workSpaceSizeInBytes: type=size_t; val=1808;
   i!     beta: type=CUDNN_DATA_FLOAT; val=0.000000;
   i!     yDesc: type=cudnnTensorDescriptor_t:
   i!         dataType: type=cudnnDataType_t; val=CUDNN_DATA_FLOAT (0);
   i!         nbDims: type=int; val=4;
   i!         dimA: type=int; val=[32,2048,15,20];
   i!         strideA: type=int; val=[614400,300,20,1];
   i!     yData: location=dev; addr=0x7fed8c000000;
   i! Time: 2019-04-12T10:46:34.090769 (0d+0h+0m+9s since start)
   i! Process=7357; Thread=7411; GPU=0; Handle=0x7feeaea83090; StreamId=0x7feef87f4260.
   ```
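   To quantify how often the default math type shows up, here's a quick throwaway helper (not part of MXNet, just a log-scraping sketch with made-up sample lines) that tallies the mathType values in a cuDNN API log dump:

```python
import re
from collections import Counter

def count_math_types(log_text):
    """Tally the mathType values reported in a cuDNN API log."""
    pattern = re.compile(r"mathType: type=cudnnMathType_t; val=(\w+)")
    return Counter(pattern.findall(log_text))

# Three sample lines in the format cuDNN's API logger emits:
sample = """\
i!         mathType: type=cudnnMathType_t; val=CUDNN_DEFAULT_MATH (0);
i!         mathType: type=cudnnMathType_t; val=CUDNN_TENSOR_OP_MATH (1);
i!         mathType: type=cudnnMathType_t; val=CUDNN_DEFAULT_MATH (0);
"""

counts = count_math_types(sample)
print(counts)  # Counter({'CUDNN_DEFAULT_MATH': 2, 'CUDNN_TENSOR_OP_MATH': 1})
```

   Running this over the full inference log should make it obvious whether any of the convolutions actually get a tensor-op math type.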
   
