ArmageddonKnight opened a new pull request #18228:
URL: https://github.com/apache/incubator-mxnet/pull/18228


   ## Description ##
   
   This PR improves the backward mirroring implementation. Specifically, for 
each (group of) operator nodes, it now checks whether backward mirroring can 
truly benefit the total memory footprint (please refer to test cases #1 and #2 
below). It also considers the data dependencies between a forward node and its 
corresponding gradient node, because it is possible for the feature maps of a 
layer to be recomputed without recomputing the layer itself (e.g., the 
Fully-Connected layer, test case #3). These improvements allow us to further 
reduce the memory consumption of our DNN training models.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a reference 
to the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Backward Mirroring Improvements
   - Test Case #1: RNN Cell
     - In the following graphs, we use red arrows to denote backward 
dependencies (a.k.a. feature maps), which are usually the main contributor to 
memory consumption.
     - In the example below, which mimics an RNN cell, we should NOT do 
backward mirroring, because otherwise the total amount of feature-map storage 
would be doubled.
   
   <img height="200pt" 
src="https://user-images.githubusercontent.com/19616653/80934632-23e4d980-8d97-11ea-8d01-b7d31d1c2e92.png"/>
 <img height="200pt" 
src="https://user-images.githubusercontent.com/19616653/80934684-4545c580-8d97-11ea-93de-4634738914d4.png"/>
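   A rough back-of-the-envelope accounting of why mirroring hurts here (the sizes are made up for illustration and are not taken from the implementation):

   ```python
   # Test case #1: in an RNN cell, every forward output is itself a backward
   # dependency (a red arrow), so the original feature maps must stay resident
   # regardless of mirroring. T = number of steps, N = per-step feature-map
   # size; both values are illustrative.
   T, N = 128, 1024

   baseline = T * N           # feature maps that must stay resident anyway
   mirrored = T * N + T * N   # originals + recomputed copies on the mirror path

   assert mirrored == 2 * baseline  # mirroring doubles storage in this case
   ```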
   
   - Test Case #2: MLP Attention
     - In the example below, which mimics MLP attention (a.k.a. additive 
attention), we should do backward mirroring, because it can help reduce 
the total amount of feature-map storage from O(T^2 N) to O(2T N).
   
   <img height="200pt" 
src="https://user-images.githubusercontent.com/19616653/80936274-6e1d8900-8d9e-11ea-88fc-3f91b6e86b21.png"/> 
<img height="200pt" 
src="https://user-images.githubusercontent.com/19616653/80936292-842b4980-8d9e-11ea-8509-a2883bb8b51e.png"/>
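   The asymptotic claim above can be sanity-checked with a rough accounting sketch (sizes are made up for illustration; this is not the PR's actual bookkeeping code):

   ```python
   # Test case #2: T = sequence length, N = per-step feature-map size.
   T, N = 128, 1024

   # Without mirroring: each of the T decoding steps keeps its own T attention
   # feature maps alive until the backward pass.
   no_mirroring = T * T * N          # O(T^2 N)

   # With mirroring: keep one checkpointed feature map per step and recompute
   # the T maps of the step currently being differentiated.
   with_mirroring = T * N + T * N    # O(2T N)

   assert with_mirroring < no_mirroring
   print(no_mirroring // with_mirroring)  # prints 64 for these sizes
   ```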
   
   - Test Case #3: Fully-Connected Layer
     - The example below uses a red node to denote a compute-heavy 
layer whose gradients do not depend on its output data entries (e.g., the 
Fully-Connected layer). Such a node can also be put on the mirror path. This 
enables us to release the backward dependency on its feature maps (i.e., 
inputs) without incurring significant performance overhead.
   
   <img height="100pt" 
src="https://user-images.githubusercontent.com/19616653/80936468-4f6bc200-8d9f-11ea-9b66-8b94c4c99645.png"/> 
<img height="100pt" 
src="https://user-images.githubusercontent.com/19616653/80936471-54c90c80-8d9f-11ea-8d82-f7870a29ff37.png"/>
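   The property exploited in test case #3 can be seen directly from the fully-connected backward formulas. A minimal NumPy sketch (shapes are illustrative):

   ```python
   import numpy as np

   # A small fully-connected layer: Y = X @ W.
   rng = np.random.default_rng(0)
   X = rng.standard_normal((4, 8))   # inputs (the layer's feature maps)
   W = rng.standard_normal((8, 3))   # weights
   dY = rng.standard_normal((4, 3))  # gradient flowing in from upstream

   # Backward pass of the FC layer: neither expression references Y = X @ W,
   # so the layer's *output* is never needed for the backward pass -- only
   # its inputs and the incoming gradient are.
   dX = dY @ W.T
   dW = X.T @ dY

   assert dX.shape == X.shape and dW.shape == W.shape
   ```

   Since the gradients only consume X and dY, the node itself can sit on the mirror path and its inputs can be recomputed, rather than kept, at negligible cost.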
   
   ## Comments ##
   
   FYI, @eric-haibin-lin 
   

