sxjscience opened a new pull request #10029: Layer Norm
   ## Description ##
   1. Directly implement layer normalization in C++. Both the speed and the memory cost 
are better than stacking the broadcast/reduce ops. Solves
   2. Add LayerNorm in Gluon
   3. Fix the doc of InstanceNorm. In InstanceNorm, the axes actually normalized over 
are all axes except the 0th axis and the given axis.
   4. Fix the doc of BatchNorm: the inverse std, rather than the variance, is saved as 
the output. Should fix
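   For reference, the semantics of the new operator can be sketched in NumPy. This is 
only an illustrative re-implementation of what layer normalization computes, not the 
C++ code in this PR; the `eps` default is an assumption for the sketch:

   ```python
   import numpy as np

   def layer_norm(x, gamma, beta, axis=-1, eps=1e-5):
       # Normalize to zero mean and unit variance along `axis`,
       # then apply the learnable scale (gamma) and shift (beta).
       mean = x.mean(axis=axis, keepdims=True)
       var = x.var(axis=axis, keepdims=True)
       x_hat = (x - mean) / np.sqrt(var + eps)
       return gamma * x_hat + beta

   x = np.arange(10, dtype=np.float64).reshape(2, 5)
   y = layer_norm(x, gamma=np.ones(5), beta=np.zeros(5))
   ```

   The fused C++ operator avoids materializing the intermediate broadcast results 
that a stacked broadcast/reduce implementation would allocate, which is where the 
speed and memory savings come from.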
   ## Checklist ##
   ### Essentials ###
   - [x] Passed code style checking (`make lint`)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, a README is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a reference to 
the original paper if applicable
   - [x] To the best of my knowledge, examples are either not affected by this 
change or have been fixed to be compatible with this change
   ### Changes ###
   - [x] LayerNorm in C++/Gluon, tests
   - [x] Fix Doc of InstanceNorm
   - [x] Fix Doc of BatchNorm
   ## Comments ##
   We can improve the speed further by fusing the operators. This is left as 
future work.
