pengzhao-intel commented on issue #13749: Add NHWC layout support to Pooling 
(cpu, gpu cuda, gpu cuDNN)
URL: https://github.com/apache/incubator-mxnet/pull/13749#issuecomment-458566007
 
 
   @DickJC123 It's great to see the improvements, and thank you for covering the 
MKLDNN backend as well :)
   Do you have performance data showing how much benefit the new data 
format provides?
   
   I am very open to seeing more data formats supported by MXNet to maximize 
performance on different HW. Regarding memory format switching, AFAIK, 
several approaches can be used, as below.
   - OP-level switch, as in this PR. The advantage is that we can control the format at 
a fine-grained level and perhaps get the best performance with tuning; however, it 
increases the complexity for the user, who must transpose the format for each OP.
   - Model-level switch, as in Keras, where the format can be specified in a .json file 
with `"image_data_format": "channels_last"/"channels_first"`. The advantage is 
that it is quite easy for the user to switch the data format, and the model can 
stay consistent across the different formats.
   - A mix of OP-level and model-level switching
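   The per-OP transpose cost mentioned above can be sketched minimally in NumPy 
(the helper names below are hypothetical, just to illustrate the layout conversion 
a user would manage around each OP):

   ```python
   import numpy as np

   def to_nhwc(x):
       """Convert an NCHW tensor to NHWC by permuting axes."""
       return np.transpose(x, (0, 2, 3, 1))

   def to_nchw(x):
       """Convert an NHWC tensor back to NCHW."""
       return np.transpose(x, (0, 3, 1, 2))

   # A batch of 2 images, 3 channels, 4x5 spatial dims, stored as NCHW.
   x = np.arange(2 * 3 * 4 * 5, dtype=np.float32).reshape(2, 3, 4, 5)

   nhwc = to_nhwc(x)
   assert nhwc.shape == (2, 4, 5, 3)

   # Round-tripping recovers the original tensor; with OP-level switching,
   # each such transpose is an extra step the user has to insert per OP.
   assert np.array_equal(to_nchw(nhwc), x)
   ```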
   
   Currently, TF fully supports NHWC and partially supports NCHW, similar to what we 
are doing for MXNet now. And TF is going to resolve the legacy issue by 
switching transparently:
   https://www.tensorflow.org/guide/performance/overview#data_formats
   
   > The brief history of these two formats is that TensorFlow started by using 
NHWC because it was a little faster on CPUs. In the long term, we are working 
on tools to auto rewrite graphs to make switching between the formats 
transparent and take advantages of micro optimizations where a GPU op may be 
faster using NHWC than the normally most efficient NCHW.
   
   In the short term, I am fine with this PR and I don't want to block the 
improvement.
   In the long term, could you share your plan and the big picture with the community 
for supporting the two data formats? Perhaps an RFC or design proposal sent 
to @dev would be preferable.
   
   Thanks a lot and feel free to correct me :)
    
   
   
