apeforest opened a new pull request #12290: [MXNET-798] Fix the dtype cast from non-float32 in Gradient computation
URL: https://github.com/apache/incubator-mxnet/pull/12290
 
 
   ## Description ##
In imperative mode, the gradient computation for multi-output operators fails when the dtype is not float32 and one of the outputs is a don't-care. The root cause is that a zeros operator is automatically derived in the nnvm::Graph for the don't-care output, and that zeros operator uses the default dtype (float32).
   
   This change fixes the problem by inferring the dtype from the other inputs in the graph when the operator is an auto-derived zeros operator.
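
   A minimal Python sketch of the idea, for reviewers (the real change lives in the C++ graph construction; `fill_missing_head_grads` and its arguments are illustrative, not part of this patch): when a head gradient is a don't-care, the placeholder zeros take their dtype from the sibling gradients instead of the float32 default.
    ```
    import mxnet as mx

    def fill_missing_head_grads(head_grads, shapes):
        """Replace don't-care (None) head gradients with zeros whose dtype
        is inferred from the sibling gradients, not the float32 default."""
        inferred = next((g.dtype for g in head_grads if g is not None), 'float32')
        return [g if g is not None else mx.nd.zeros(shape, dtype=inferred)
                for g, shape in zip(head_grads, shapes)]

    # With a float64 sibling, the placeholder zeros come out float64 too.
    grads = fill_missing_head_grads(
        [mx.nd.ones((2, 4), dtype='float64'), None], shapes=[(2, 4), (2, 4)])
    print(grads[1].dtype)  # numpy.float64 rather than the float32 default
    ```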
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [X] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [X] Changes are complete (i.e. I finished coding on this PR)
   - [X] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [X] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [X] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [X] Changed the way the dtype is inferred for the auto-derived zeros operator in nnvm::Graph
   - [X] Added a unit test for operators with multiple outputs.
   
   ## Comments ##
   - Although the change is small, the impact could be large, so a thorough review is solicited.
   - This seems to be a general problem for all multi-output operators when computing gradients in imperative mode. A simple example, copied from the original issue, is below:
    ```
    import mxnet as mx
    from mxnet import autograd


    # A multi-output operator with a non-float32 dtype; only the first
    # output's gradient is requested, so the second output is don't-care.
    data = mx.nd.arange(16, dtype='float64').reshape((4, 4))
    data.attach_grad()

    with autograd.record():
        y = mx.nd.split(data, axis=0, num_outputs=2)
    # Before this fix, backward() fails here because the zeros operator
    # auto-derived for the don't-care output defaults to float32.
    y[0].backward()
    print(data.grad)
    ```
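   With the fix, this prints a float64 gradient that is all ones in the first two rows (the rows feeding `y[0]`) and all zeros in the last two; before the fix, the backward pass failed on the dtype mismatch.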
