Caenorst commented on a change in pull request #16408: Add MXNet Ops for fast multihead attention
URL: https://github.com/apache/incubator-mxnet/pull/16408#discussion_r338185540
##########
File path: src/operator/contrib/transformer-inl.h
##########
@@ -34,6 +34,18 @@
namespace mxnet {
namespace op {
+struct InterleavedMatMulParam : public dmlc::Parameter<InterleavedMatMulParam> {
+  int heads;
+  bool bwd_ignore_zero_init;
+  DMLC_DECLARE_PARAMETER(InterleavedMatMulParam) {
+    DMLC_DECLARE_FIELD(heads)
+    .describe("Set number of heads");
+    DMLC_DECLARE_FIELD(bwd_ignore_zero_init)
+    .describe("Make backward pass ignore AddTo and not init to 0.")
Review comment:
Sure, it's kind of tricky to explain. Until the gradient accumulation feature is working, the user should not use this flag: it cheats on purpose, relying on the fact that the two ops of self-attention use complementary parts of the input tensor (they don't overlap, and together they cover the full tensor). If you want to use gradient accumulation with another op, you should not enable this flag. Any suggestion on the wording would be appreciated.
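
To make the trick concrete, here is a minimal sketch of the idea, not the PR's actual kernels: the function names (`write_qk_grad`, `write_v_grad`) are hypothetical, and the contiguous thirds stand in for the real interleaved Q/K/V layout purely for illustration.

```cpp
#include <vector>

// Hypothetical stand-in for the QK op's backward: it fills only the Q/K
// portion of the shared QKV gradient buffer with plain writes
// (no AddTo, no prior zero-init)...
void write_qk_grad(std::vector<float>* grad_qkv) {
  const size_t third = grad_qkv->size() / 3;
  for (size_t i = 0; i < 2 * third; ++i) (*grad_qkv)[i] = 1.0f;
}

// ...and a stand-in for the value/attention op's backward, which fills
// the remaining V portion.
void write_v_grad(std::vector<float>* grad_qkv) {
  const size_t third = grad_qkv->size() / 3;
  for (size_t i = 2 * third; i < grad_qkv->size(); ++i) (*grad_qkv)[i] = 2.0f;
}

int main() {
  // With the flag set, the real buffer would be left uninitialized here.
  std::vector<float> grad_qkv(3 * 8);
  // The two backward passes write non-overlapping regions whose union is
  // the whole tensor, so the result equals what zero-init + AddTo would
  // have produced, minus the memset and the extra adds.
  write_qk_grad(&grad_qkv);
  write_v_grad(&grad_qkv);
  // If a third op were also accumulating into grad_qkv (gradient
  // accumulation), these plain writes would clobber its contribution --
  // hence the caveat above.
  return 0;
}
```

That is also why the flag is safe here but not in general: the optimization saves a zero-initialization and turns accumulations into plain writes, at the cost of assuming these two ops are the only writers into the gradient tensor.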