jwfromm opened a new pull request, #12586:
URL: https://github.com/apache/tvm/pull/12586

   One change made in https://github.com/apache/tvm/pull/5252 (which added Hexagon support to the runtime) was increasing the default byte alignment from 64 to 128. This can cause problems when interacting with dlpack. For example, tests/python/contrib/test_dlpack.py has a high chance of failing when run locally because torch returns tensors with 64-byte rather than 128-byte alignment. I'm not sure why it doesn't fail in CI; perhaps the consistency of that environment means torch always returns an appropriately aligned tensor.
   
   Changing the default alignment back to 64 bytes allows interoperability with both torch and newer versions of numpy that support dlpack. I've slightly modified the torch test to run multiple times to make sure its behavior is consistent.
   
   See the previous discussion in #12564. I chatted with @vinx13, and it seems that a default 64-byte alignment should be fine for CUDA, so this change won't break anything. I'm reopening this pull request in a new PR because I did a rebase and GitHub won't let me reopen the previous one. I think this change is still a net positive while we work out a long-term, target-based solution.
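
   To make the interop problem concrete, here is a minimal NumPy sketch (the helpers `is_aligned` and `aligned_array` are hypothetical illustrations, not part of TVM or dlpack): a buffer that satisfies 64-byte alignment need not satisfy 128-byte alignment, which is why a 128-byte requirement can reject tensors that torch or numpy hand over via dlpack.

```python
import numpy as np

def is_aligned(arr, alignment):
    """Check whether the array's data pointer meets the given byte alignment."""
    ptr = arr.__array_interface__["data"][0]
    return ptr % alignment == 0

def aligned_array(n, dtype, alignment):
    """Allocate an n-element array whose data pointer has the requested alignment."""
    itemsize = np.dtype(dtype).itemsize
    # Over-allocate, then slice to the first properly aligned offset.
    buf = np.zeros(n * itemsize + alignment, dtype=np.uint8)
    offset = (-buf.__array_interface__["data"][0]) % alignment
    return buf[offset:offset + n * itemsize].view(dtype)

# A 64-byte-aligned buffer always passes a 64-byte check, but may or may
# not pass a 128-byte check depending on where the allocator placed it.
a = aligned_array(1024, "float32", 64)
print(is_aligned(a, 64), is_aligned(a, 128))
```

   A consumer that insists on 128-byte alignment would intermittently reject buffers like `a` above, which is consistent with the flaky local failures described here.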


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
