blchu opened a new pull request #18085: No tensor cores for fp32 interleaved attention, remove div by 8 restriction (#17994)
URL: https://github.com/apache/incubator-mxnet/pull/18085
(cherry picked from commit afae030beb168f09cf08be101714e059157a9507)

## Description ##
Fixed an issue where fp32 inputs used tensor cores in the interleaved multihead attention operators, resulting in lower-precision calculations and a potential reduction in accuracy.

## Checklist ##
### Essentials ###
- [ ] Changes are complete (i.e. I finished coding on this PR)
- [ ] All changes have test coverage
- [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

### Changes ###
- [ ] Set the interleaved multihead attention GEMMs to not use tensor cores by default, and only use them when the input data type is fp16 (see the sketch under Comments below)
- [ ] No longer check tensor input shapes for divisibility by 8

## Comments ##
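A minimal sketch of the idea behind the change, not the actual MXNet code: request a tensor-core GEMM algorithm from cuBLAS only when the input data type is fp16, so fp32 inputs take the default full-precision path. The helper name and structure below are hypothetical.

```cpp
// Hedged sketch (assumed helper, not the MXNet implementation): pick the
// cuBLAS GEMM algorithm based on the input data type.
#include <cublas_v2.h>
#include <library_types.h>

inline cublasGemmAlgo_t SelectGemmAlgo(cudaDataType_t dtype) {
  // Only request tensor-core GEMMs for half-precision inputs; fp32 (and any
  // other dtype) falls back to the default full-precision algorithm.
  return (dtype == CUDA_R_16F) ? CUBLAS_GEMM_DEFAULT_TENSOR_OP
                               : CUBLAS_GEMM_DEFAULT;
}
```

The divisibility-by-8 shape check was an alignment requirement tied to tensor-core GEMMs, which is why it can be relaxed together with this change.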
