Hi everyone, and thank you, Ralf, for carrying the flag in my absence. =D
Sebastian, the *primary* motivation behind refusing to call `detach()`
implicitly in PyTorch is given in the original post of the PyTorch issue:
> People not very familiar with `requires_grad` and cpu/gpu Tensors might go
> back and forth with numpy. For example doing pytorch -> numpy -> pytorch and
> backward on the last Tensor. This will backward without issue but not all the
> way to the first part of the code and won’t raise any error.
The PyTorch team is concerned that they will be overwhelmed with help
requests if `np.array()` silently succeeds on a tensor that requires
gradients. I definitely get that.
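For the record, here is a minimal sketch of that failure mode, using
today's explicit `detach()` to stand in for the silent conversion that
`np.array()` would otherwise perform:

```python
import torch

x = torch.ones(3, requires_grad=True)
y = x * 2                        # y is on the autograd graph

# Today np.array(y) raises and tells you to call y.detach().numpy();
# if it succeeded silently, the round trip below would sever the graph
# without any error anywhere:
arr = y.detach().numpy()         # stand-in for a silent np.array(y)
w = torch.ones(3, requires_grad=True)
z = (torch.as_tensor(arr) * w).sum()

z.backward()                     # "will backward without issue"...
print(w.grad)                    # tensor([2., 2., 2.])
print(x.grad)                    # None: gradients never reached x
```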
Avoiding an implicit `.cpu()` call is more straightforwardly about avoiding
an expensive device-to-host copy.
> while others do not choose to teach about it. There seems very little
> or even no "promise" attached to either `force=True` or `force=False`.
NumPy can set a precedent through policy. The *only* reason client libraries
would implement `__array__` is to play well with NumPy, so if NumPy documents
that `force=True` should *always* succeed, we can expect client libraries to
follow suit. At least the PyTorch devs have indicated that they would be open
to this.
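To make that concrete: the exact spelling NumPy would use to forward the
keyword is still up in the air, but the contract a client library would
implement might look roughly like this (the class and the extra `force`
parameter on `__array__` are hypothetical):

```python
import numpy as np

class DeviceArray:
    """Hypothetical client-library array living on an accelerator."""

    def __init__(self, data):
        self._data = np.asarray(data)  # stand-in for device memory

    def __array__(self, dtype=None, force=False):
        # The proposed policy: force=True must *always* succeed, however
        # expensive the conversion; force=False may refuse.
        if not force:
            raise TypeError(
                "implicit conversion is expensive; pass force=True"
            )
        arr = self._data               # the expensive device-to-host copy
        return arr.astype(dtype) if dtype is not None else arr
```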
> E.g. Napari wants to use it, but do the array-providers want Napari to use it?
As Ralf pointed out, the PyTorch devs have already agreed to it.
From the napari perspective, we'd be ok with leaving the decision on warnings
to client libraries. We may or may not suppress them depending on user
requests. ;) But the point is to have a way of saying "give me a NumPy array
DAMMIT" without having to know about all the possible array libraries. Which
are numerous and getting numerouser.
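In code, the helper napari needs would collapse to a single call under the
proposal (the `force=` keyword is the proposed API, not in NumPy yet):

```python
import numpy as np

def to_numpy(obj):
    """Give me a NumPy array, dammit.

    obj's own library decides how to honour force=True: copy from the
    GPU, densify, compute the task graph, drop metadata, ...
    """
    return np.asarray(obj, force=True)  # proposed keyword, hypothetical
```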
Ralf, you said you don't want warnings, even for sparse arrays? That was an
area of concern for you in the PyTorch issue discussion.
> And if the conversion still gives warnings for some array-objects, have we
> actually gained much?
Yes.
Hameer,
> I would advocate for a `force=` kwarg but personally I don't think it's
> explicit enough, but probably as explicit as can be given NumPy's API.
Yeah, I agree that `force` is kind of vague, which is why I was looking for
things like `allow_copy`. But it is hard to be general enough here: sparse
requires an expensive densification, CuPy requires a copy from GPU to CPU,
Dask requires arbitrary computation, and converting xarray loses its
coordinate metadata (see the sketch after this paragraph). I'm inclined to
agree with Ralf that `force=` is the only generic-enough term, but I'm happy
to entertain other options!
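For reference, here is what each of those conversions spells like today,
with the cost that `force=True` would be opting into; the calls below are
the libraries' current public APIs, as far as I know:

```python
import sparse                 # pydata/sparse
import cupy
import dask.array as da
import xarray as xr

s = sparse.random((1000, 1000))
s.todense()        # sparse: materialises every zero (memory blow-up)

c = cupy.arange(10)
cupy.asnumpy(c)    # cupy: device-to-host copy across the bus

d = da.ones((1000,), chunks=100)
d.compute()        # dask: runs arbitrary deferred computation

x = xr.DataArray([1.0, 2.0, 3.0], dims="t")
x.values           # xarray: plain ndarray, dims/coords metadata is lost
```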
Juan.