Caenorst opened a new pull request #16408: Add MXNet Ops for fast multihead 
attention
URL: https://github.com/apache/incubator-mxnet/pull/16408
 
 
   ## Description ##
   Add new optimized Ops for Multihead attention
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [X] Changes are complete (i.e. I finished coding on this PR)
   - [X] All changes have test coverage:
     - Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
   - [X] Code is well-documented:
     - For new C++ functions in header files, their functionalities and arguments are documented.
   - [X] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - Add 4 Ops: matmul(K, Q) and matmul(attention_weights, V), for both self-attention and encoder-decoder attention
   - Add unit tests for those Ops
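
   The two fused matmuls above can be sketched in plain NumPy. This is a minimal illustration of the computation the Ops cover, not the PR's implementation; the shapes and head split are hypothetical, but the layout matches the (sequence, batch, embedding) convention described below.

   ```python
   import numpy as np

   # Hypothetical sizes for illustration only.
   seq_len, batch, num_heads, head_dim = 4, 2, 3, 5
   embed = num_heads * head_dim

   rng = np.random.default_rng(0)
   # Inputs in (sequence, batch, embedding) layout, as the Ops expect.
   q = rng.standard_normal((seq_len, batch, embed))
   k = rng.standard_normal((seq_len, batch, embed))
   v = rng.standard_normal((seq_len, batch, embed))

   def split_heads(x):
       # (seq, batch, embed) -> (batch * heads, seq, head_dim)
       s, b, _ = x.shape
       x = x.reshape(s, b, num_heads, head_dim)
       return x.transpose(1, 2, 0, 3).reshape(b * num_heads, s, head_dim)

   qh, kh, vh = map(split_heads, (q, k, v))

   # First fused Op: scaled matmul(K, Q) producing attention scores.
   scores = qh @ kh.transpose(0, 2, 1) / np.sqrt(head_dim)

   # Masked softmax / dropout would run here, in the usual layout.
   weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
   weights /= weights.sum(axis=-1, keepdims=True)

   # Second fused Op: matmul(attention_weights, V) producing the context.
   context = weights @ vh
   ```

   The fused Ops avoid the intermediate reshapes and transposes that this NumPy version performs explicitly.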
   
   ## Comments ##
   - https://github.com/Caenorst/gluon-nlp/tree/fast_mha shows an example of integration in BERT; it will not be PRed to gluon-nlp as it breaks usage outside of BERT.
   - Those Ops require a different layout (sequence, batch, encoding), except for the masked softmax / dropout.
   - Those Ops change the ordering of the projection weights, which means a BERT model pretrained without those Ops needs its weights processed as in: 
https://github.com/Caenorst/incubator-mxnet/commit/e98761456ba0343664ba550e056f00db31516ac7#diff-4758fb9329d438de2836db2634a8f5f7R2505-R2519
 in order to use those Ops.
   - The argument `bwd_ignore_zero_init` allows a further speedup and reduces memory consumption, but it only gives good results with `MXNET_EXEC_ENABLE_ADDTO` set to 1. It is also a dirty trick: it is not actually "adding to" but initializing, and it relies on the fact that the two Ops' inputs are complete (non-overlapping and together covering the whole tensor) despite using the same tensor.
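
   The weight reordering mentioned above can be illustrated with a small sketch. The exact transformation expected by the fused Ops is the one in the linked commit; the interleaving below (per-head Q, K, V slices stacked next to each other) and all shapes are assumptions for illustration only.

   ```python
   import numpy as np

   # Hypothetical sizes; the real layout is defined in the linked commit.
   num_heads, head_dim = 3, 5
   embed = num_heads * head_dim

   rng = np.random.default_rng(1)
   # Separate Q/K/V projection weights of a conventional pretrained model.
   wq = rng.standard_normal((embed, embed))
   wk = rng.standard_normal((embed, embed))
   wv = rng.standard_normal((embed, embed))

   def interleave_qkv(wq, wk, wv):
       # Stack the per-head row slices of Q, K and V next to each other:
       # output rows ordered as [q_head0, k_head0, v_head0, q_head1, ...].
       parts = []
       for h in range(num_heads):
           rows = slice(h * head_dim, (h + 1) * head_dim)
           parts.extend([wq[rows], wk[rows], wv[rows]])
       return np.concatenate(parts, axis=0)

   w_fused = interleave_qkv(wq, wk, wv)
   ```

   A one-off conversion like this would run once when loading a pretrained checkpoint; models trained with the fused Ops store the weights in the fused ordering directly.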

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
