t-vi commented on pull request #6472:
URL: https://github.com/apache/incubator-tvm/pull/6472#issuecomment-693211825
Yeah, any op ending in `_` is inplace, and there are many.
Without tracking memory locations or some construct to represent
`copy_`, we likely cannot faithfully represent all possible uses of inplace ops.
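To make the aliasing problem concrete, here is a minimal plain-Python sketch. Lists stand in for tensor buffers, and `relu_inplace`/`relu_pure` are made-up stand-ins for PyTorch's `relu_`/`torch.relu`; none of this is TVM or PyTorch API.

```python
# Sketch of why aliasing breaks a naive inplace -> out-of-place rewrite.
# Python lists model tensor buffers shared between names.

def relu_inplace(t):
    # models relu_: mutates the buffer in place
    for i, v in enumerate(t):
        t[i] = max(v, 0.0)
    return t

def relu_pure(t):
    # models torch.relu: allocates a fresh buffer
    return [max(v, 0.0) for v in t]

x = [-1.0, 2.0]
alias = x            # a second name for the same buffer
relu_inplace(x)
print(alias)         # [0.0, 2.0] -- the alias observes the mutation

y = [-1.0, 2.0]
alias2 = y
y = relu_pure(y)     # rebinding y: alias2 still points at the old buffer
print(alias2)        # [-1.0, 2.0] -- the rewrite silently changed semantics
```

Without knowing whether any alias of the mutated buffer is read later, the two programs cannot be distinguished syntactically, which is exactly why memory tracking (or a `copy_`-like construct) is needed.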
So, lacking a pure representation, we have to balance two risks:
- If we accept inplace ops without further analysis, we will take in
idioms we cannot support. (The `clamp_` example below shows one such
idiom.)
- If we reject ops like `relu_`, we might exclude many networks that
would otherwise run fine (an inplace `relu_` after an op whose output
isn't needed for the backward pass is quite popular).
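The "just assume out-of-place" rewrite mentioned below can be sketched on a toy trace. The `(name, input, output)` triple representation and all names here are illustrative assumptions, not the TVM frontend's or PyTorch's actual IR:

```python
# Toy analysis-free rewrite: every op ending in "_" is rewritten to its
# out-of-place form by giving the result a fresh SSA-style name and
# renaming later reads. This is correct for the popular relu_-after-conv
# pattern, but silently wrong when the mutated buffer has hidden aliases
# (e.g. a view like x[0]) -- which is exactly the risk discussed above.

def rewrite_inplace(ops):
    out_ops, rename = [], {}
    fresh = 0
    for name, inp, out in ops:
        inp = rename.get(inp, inp)            # see through earlier renames
        if name.endswith("_"):
            fresh += 1
            new = f"{inp}_{fresh}"            # fresh name for the result
            rename[out] = new                 # later reads see the new value
            out_ops.append((name.rstrip("_"), inp, new))
        else:
            out_ops.append((name, inp, out))
    return out_ops

trace = [("conv", "x", "a"), ("relu_", "a", "a"), ("linear", "a", "y")]
result = rewrite_inplace(trace)
print(result)
# [('conv', 'x', 'a'), ('relu', 'a', 'a_1'), ('linear', 'a_1', 'y')]
```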
This also appears to be the rationale for the ad-hoc inplace removal in
the PyTorch ONNX export, and I'm sure the PyTorch-ONNX people had a good
think about whether there are better options (and they have more alias
analysis available to them). So whatever they let through by just assuming
inplace ops can be made out-of-place might be reasonable to treat the same
way here. Maybe the list they use is a good guide.
I would imagine that someone somewhere does things like
```
def pn_relu(x):
    x[0].clamp_(min=0)  # keep the positive half
    x[1].clamp_(max=0)  # keep the negative half
```
(so that, for normally distributed inputs, the nonzero parts stay normally
distributed, as a trick against exploding/vanishing activations), but I
don't think it's terribly popular, so I would not worry about it.
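A plain-Python model (lists for tensor buffers, `clamp_inplace`/`clamp_pure` as made-up stand-ins for `clamp_`/`clamp`) shows why an analysis-free out-of-place rewrite would silently break `pn_relu`: the clamps act through views into `x`, and a pure `clamp` returns a fresh buffer that is immediately discarded.

```python
# Why pn_relu breaks under a naive out-of-place rewrite: the mutation
# flows through x[0] / x[1], which alias rows of x.

def clamp_inplace(t, lo=None, hi=None):
    for i, v in enumerate(t):
        if lo is not None:
            v = max(v, lo)
        if hi is not None:
            v = min(v, hi)
        t[i] = v

def clamp_pure(t, lo=None, hi=None):
    out = list(t)
    clamp_inplace(out, lo, hi)
    return out

x = [[-1.0, 3.0], [-2.0, 4.0]]
clamp_inplace(x[0], lo=0.0)   # pn_relu's x[0].clamp_(min=0)
clamp_inplace(x[1], hi=0.0)   # pn_relu's x[1].clamp_(max=0)
print(x)                      # [[0.0, 3.0], [-2.0, 0.0]]

y = [[-1.0, 3.0], [-2.0, 4.0]]
clamp_pure(y[0], lo=0.0)      # result discarded: y is untouched
clamp_pure(y[1], hi=0.0)
print(y)                      # [[-1.0, 3.0], [-2.0, 4.0]] -- clamps vanished
```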
For maskrcnn-benchmark in particular, I did a bit of tracing almost two
years ago when I experimented with my PyTorch POC port to Android (though
the tracing PR was never merged), and it should be easy to remove its
inplace use. But it seems more useful to focus on the TorchVision
implementations: maskrcnn-benchmark is really dormant, and my impression
is that it wasn't fixed much itself; instead the lessons learned there
have flowed into TorchVision, where people have worked on
JIT/ONNX/quantization properties.