alexwong edited a comment on issue #4497: [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-584390133
 
 
   > @alexwong It seems you have problems with alexnet, vgg and mobilenet v2 on 
cuda. In my refactored version, I have no problem with these three. Have a look 
and try my script below. You can parse the module in two ways and compare the 
difference.
   > 
https://github.com/masahi/torchscript-to-tvm/blob/master/torchvision_test.py#L51-L60
   > 
   > I guess the issue is in dtype or optional arguments handling in your op 
conversions. I've prepared [a 
branch](https://github.com/masahi/tvm/tree/torch-refactor) for the refactoring 
PR based on your current implementation, and I can reproduce errors on alexnet, 
vgg and mobilenet v2.
   > 
   > The difference between this branch and the implementation at 
`torchscript-to-tvm` is mostly on op conversion map, that's why I think 
problems are there.
   
   I compared the produced Relay graphs for mobilenet, vgg, and alexnet and they look the same, so I'm not sure it's a parsing issue. VGG and AlexNet have had accuracy issues, but I think the mobilenet failure is a memory problem.
   
   ```
   CUDAError: Check failed: ret == 0 (-1 vs. 0) : cuModuleLoadData(&(module_[device_id]), data_.c_str()) failed with error: CUDA_ERROR_INVALID_PTX
   ```
   
   I'm reverting to what was passing previously and will re-apply the recent changes later tonight. For the memory issues, I'm not sure what else I can try at this point; the tests are already pretty aggressive about cleaning everything up after each model.
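   For reference, the per-model cleanup amounts to something like the sketch below (the function name is hypothetical; the `torch` import is guarded so the snippet stays self-contained, since the real cleanup obviously assumes PyTorch is installed):

   ```python
   import gc

   def cleanup_after_model():
       """Best-effort teardown between model tests: drop Python refs,
       force a GC pass, and release cached CUDA allocations."""
       gc.collect()
       try:
           import torch  # only available when PyTorch is installed
           if torch.cuda.is_available():
               # Returns cached blocks to the driver so the next model
               # starts from a clean allocator state.
               torch.cuda.empty_cache()
       except ImportError:
           pass
   ```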

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
