KexinFeng opened a new pull request, #21089: URL: https://github.com/apache/incubator-mxnet/pull/21089
## Description ##
This is the same as [PR #20559](https://github.com/apache/incubator-mxnet/pull/20559). This PR adds support for fetching the gradients of intermediate variables in a Gluon HybridBlock. It applies uniformly whether `block.hybridize()` is on or off, and generalizes the `attach_grad` support implemented in [PR #20500](https://github.com/apache/incubator-mxnet/pull/20500). The motivation for this feature comes from [issue #11865](https://github.com/apache/incubator-mxnet/issues/11865).

## Checklist ##
### Essentials ###
- [x] PR's title starts with a category (e.g. [BUGFIX], [MODEL], [TUTORIAL], [FEATURE], [DOC], etc.)
- [x] Changes are complete (i.e. I finished coding on this PR)
- [x] All changes have test coverage
- [x] Code is well-documented

### Changes ###
- [x] `block.py`: added `mark_vars` and `get_mark_vars`, along with `MXNDArrayMarkDCVariables`.
- [x] `cached_op.invoke` in the C++ backend and `CachedOp.__call__` have been edited to pass through the marked non-leaf NDArrays.
- [x] Added a `set_nleafs` method to the `CachedOp` class to store the marked non-leaf NDArrays.
- [x] Inside `void RunGraph`, marked non-leaf NDArrays are linked to the marked computational nodes for autograd computation.

## Comments ##
- This feature is built on top of [PR #20500](https://github.com/apache/incubator-mxnet/pull/20500). The modification here is mainly in the invocation of the CachedOp computation.
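To illustrate the idea behind the feature, fetching gradients of intermediate (non-leaf) variables during backpropagation, here is a framework-free toy reverse-mode autodiff sketch. The `Var` class and its `retain` flag are hypothetical illustrations of "marking" a non-leaf variable so its gradient is kept after backward, not MXNet's actual API or implementation:

```python
class Var:
    """Toy scalar reverse-mode autodiff node."""
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self.parents = list(parents)  # (parent, local_gradient) pairs
        self.retain = False           # mark a non-leaf to keep its gradient

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self):
        # Topological order so each node's grad is complete before it is used.
        topo, seen = [], set()
        def build(n):
            if id(n) not in seen:
                seen.add(id(n))
                for p, _ in n.parents:
                    build(p)
                topo.append(n)
        build(self)
        self.grad = 1.0
        for node in reversed(topo):
            for parent, local in node.parents:
                parent.grad += node.grad * local
        # Frameworks normally free non-leaf gradients to save memory;
        # keep only those explicitly marked for retention.
        for node in topo:
            if node.parents and not node.retain and node is not self:
                node.grad = None

x = Var(3.0)
y = x * x          # intermediate (non-leaf) variable
y.retain = True    # analogous to marking the intermediate variable
z = y + x          # z = x^2 + x
z.backward()
# x is a leaf: x.grad = dz/dx = 2x + 1 = 7.0
# y is marked: y.grad = dz/dy = 1.0 (would be dropped without retain)
```

Without `y.retain = True`, `y.grad` would be freed after `backward()`, which is exactly the limitation this PR addresses for hybridized blocks.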
