icemelon9 commented on issue #4324: [Relay] Use opaque for where op
URL: https://github.com/apache/incubator-tvm/pull/4324#issuecomment-566345060

@tqchen @kevinthesun After further investigation, I found that when the sequence length is 256, the fused batch_matmul + ones_like + where is significantly slower (~50%) than the non-fused version (code shown below). I checked the schedule and found that the culprit is likely the memory allocation inside the fused op. This also explains why there is not much of a performance penalty when the sequence length is smaller. Here's the example I used:
```
fn (%x: Tensor[(12, 256, 64), float32], %y: Tensor[(12, 256, 64), float32], %z: Tensor[(12, 256, 256), float32]) {
  %0 = nn.batch_matmul(%x, %y);
  %1 = ones_like(%0);
  where(%z, %0, %1)
}
```
A better long-term solution would be to expose the workspace allocation to memory planning as well. But for now, should we just use opaque for where?
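For reference, here is a minimal sketch of what "use opaque for where" could look like from the Python side, assuming the pattern-override mechanism in `tvm.relay.op.op` (the actual change in this PR is in the C++ op registration, so this is illustrative, not the PR itself):
```
# Sketch: re-register the fusion pattern of relay.where as opaque.
from tvm.relay.op import op as _op

# Marking `where` as OPAQUE stops FuseOps from merging it into the
# preceding batch_matmul, so the intermediate tensor (%0 above) is
# handled by the graph-level memory planner instead of being allocated
# as a workspace inside the fused op. level=15 overrides the default
# registration (level=10).
_op.register_pattern("where", _op.OpPattern.OPAQUE, level=15)
```
With this in place, the example above would compile to three separate ops, trading the fusion opportunity for graph-planned buffers.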
