t-vi edited a comment on pull request #7231:
URL: https://github.com/apache/tvm/pull/7231#issuecomment-756764112


   I don't want to ruin the party, but does `unsqueeze_` work as-is?
   We would want to update future uses of the input to refer to the output (the same goes for any of the "simple" in-place ops).
   I can see that there is clearly interest in building support for in-place ops, but I think we would need to put in substantial effort if we want this to be a happy story.
   
   ```python
   @torch.jit.script
   def foo(x):
     y = x.unsqueeze_(0)
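     # x itself is modified in place, so later uses of x (the return below) must see the unsqueezed shape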
     return x
   ```
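   
   To make the rewriting idea concrete, here is a minimal sketch (not the actual frontend code; `Node` and `convert` are made-up names for illustration) of rebinding the input of a "simple" in-place op to its output, so that later uses of the input pick up the mutated value:
   
   ```python
   from dataclasses import dataclass
   from typing import Dict, List
   
   # Toy IR node, only for this sketch -- not a TorchScript or TVM class.
   @dataclass
   class Node:
       op_name: str
       inputs: List[str]
       output: str
   
   def convert(nodes: List[Node], env: Dict[str, str]) -> Dict[str, str]:
       for node in nodes:
           # Stand-in for real per-op conversion.
           expr = f"{node.op_name}({', '.join(env[i] for i in node.inputs)})"
           env[node.output] = expr
           if node.op_name.endswith("_"):   # crude in-place detection, enough for the sketch
               env[node.inputs[0]] = expr   # future uses of the input now refer to the output
       return env
   
   # For foo above: y = x.unsqueeze_(0); return x
   env = convert([Node("aten::unsqueeze_", ["x", "0"], "y")], {"x": "x", "0": "0"})
   print(env["x"])  # the returned x should now map to the unsqueeze_ output, not the original input
   ```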
   
   As far as I know, we only support "strided" tensors (i.e. a blob of memory plus sizes and strides defines the tensor), so tracking views across these should be doable. One could (but doesn't have to) look at the signature annotations in ATen's native_functions.yaml to see which of the ops we have would need to be handled. One of the tricky ones would be reshape, which is sometimes just a view and at other times (when viewing is impossible, e.g. `a[:, :-1].reshape(-1)`) a copy.
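   
   To illustrate the reshape point, here is a small check (plain PyTorch, nothing TVM-specific) of when reshape shares storage and when it copies:
   
   ```python
   import torch
   
   a = torch.arange(12.).reshape(3, 4)
   
   # Contiguous input: reshape can return a view, so storage is shared.
   v = a.reshape(-1)
   print(v.data_ptr() == a.data_ptr())  # True
   
   # Non-contiguous slice: no single stride pattern fits, so reshape has to copy.
   c = a[:, :-1].reshape(-1)
   print(c.data_ptr() == a.data_ptr())  # False
   ```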
   
   The other option could be to do this on the PyTorch side, improving the pass 
that @masahi highlighted.

