t-vi commented on pull request #5779:
URL: https://github.com/apache/incubator-tvm/pull/5779#issuecomment-643457334


   > If you want to use names chosen by Torch, how are you going to figure out 
the correct names to give to TVM at deploy time? The names are the ones attached 
to the graph after this line 
https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/frontend/pytorch.py#L2504,
 rather than to the graph you supply to the frontend. You also need to remember 
whatever names Torch chooses until deploy time, since TVM doesn't export input 
names but they are needed to set the inputs correctly.
   
   Thank you for insisting on using stable names. The user-supplied(!) names 
are the part before the last (here, the only) `.`, and they are stable. This is, 
for example, what PyTorch itself uses when you print `script_module.code`, or to 
tell you the name of an argument when an input is missing.
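   To make the naming scheme concrete, here is a minimal sketch (the helper name is hypothetical, not from the TVM or PyTorch codebase) of stripping Torch's disambiguation counter to recover the user-supplied name:

```python
def strip_debug_suffix(debug_name: str) -> str:
    """Return the user-supplied part of a Torch debug name.

    Torch disambiguates repeated names by appending '.<counter>',
    e.g. 'x' becomes 'x.1'; the part before the last '.' is the
    stable, user-chosen name. (Hypothetical helper, for illustration.)
    """
    base, dot, suffix = debug_name.rpartition(".")
    # Only strip when the suffix really is a numeric counter,
    # so attribute-style names are left untouched.
    if dot and suffix.isdigit():
        return base
    return debug_name
```

   With this, `x.1` maps back to `x`, while a name without a counter passes through unchanged.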
   
   The function ultimately doing this in PyTorch is `DebugNameBase`:
   
https://github.com/pytorch/pytorch/blob/a9aa6367c2b1647f1d2772678f9971740c598c7a/torch/csrc/jit/ir/ir.cpp#L735
   
   > Passing dtypes is not something we (not only pytorch, but other frontends 
too) thought about, since we always assume float32 inputs. We can discuss how 
to integrate them. But most of the time inputs are fp32, so I don't want to 
introduce breaking API changes to allow a dtype option.
   
   I have to strongly disagree that most inputs are fp32, starting with anything 
NLP.
   Again, I think it is a misunderstanding that any of this involves breaking 
API changes; the suggestion is to make things more optional. I do see that 
splitting off the disambiguation counter is a good idea. But then we should just 
take what the user supplied in the model definition.
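   To show that an optional dtype need not break anything, a backward-compatible input spec could accept either a bare shape or a `(shape, dtype)` pair. This is a hypothetical sketch, not the actual TVM frontend code:

```python
def normalize_input_info(name, spec, default_dtype="float32"):
    """Accept either a shape like (1, 3, 224, 224) or a
    (shape, dtype) pair like ((1, 128), "int64").

    Old callers that pass only a shape keep working unchanged;
    new callers can opt in to a dtype. (Hypothetical helper,
    not TVM's actual API.)
    """
    if (
        isinstance(spec, tuple)
        and len(spec) == 2
        and isinstance(spec[1], str)
    ):
        shape, dtype = spec
    else:
        shape, dtype = spec, default_dtype
    return name, tuple(shape), dtype
```

   An existing call site passing `("x", (1, 3, 224, 224))` is untouched, while an NLP model can pass `("input_ids", ((1, 128), "int64"))`.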
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
