ptrendx commented on pull request #20375:
URL: https://github.com/apache/incubator-mxnet/pull/20375#issuecomment-868215426


   Would it be more beneficial to expose the full multihead attention primitive in the API instead (it could still be implemented with the interleaved matmuls)? Both Keras and PyTorch expose it as a layer, which could make usage easier. What do you think @barry-jin?
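For context, this is roughly what the layer-level API looks like in the two frameworks mentioned (a minimal sketch with illustrative sizes; exact defaults and tensor layouts are per each framework's documentation, not the PR):

```python
import torch
import torch.nn as nn
import tensorflow as tf

# PyTorch: projections, head splitting, and scaled dot-product attention
# are all handled inside a single module.
mha = nn.MultiheadAttention(embed_dim=512, num_heads=8)
x = torch.randn(10, 2, 512)          # (seq_len, batch, embed_dim)
out, attn_weights = mha(x, x, x)     # self-attention: query = key = value

# Keras: the same layer-level abstraction, batch-major by default.
layer = tf.keras.layers.MultiHeadAttention(num_heads=8, key_dim=64)
y = tf.random.normal((2, 10, 512))   # (batch, seq_len, embed_dim)
out2 = layer(query=y, value=y)       # self-attention
```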

