I recently ran into an application where I had to compute many inner products 
quickly (roughly 50k inner products in less than a second). I wanted a vector of 
inner products over the 50k vectors, i.e. `[x1.T @ A @ x1, …, xn.T @ A @ xn]` 
with `A.shape == (1000, 1000)`.

My first instinct was to look for a NumPy function to quickly compute this, 
such as `np.inner`. However, `np.inner` has some other behavior — for 2-D 
inputs it contracts the last axes and returns all pairwise inner products — and 
I couldn’t get `np.tensordot`/`np.einsum` to work for me.
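To make the mismatch concrete, here is a small sketch of what `np.inner` does with two 2-D inputs (the array `X` here is just illustrative data, not from the original application):

```python
import numpy as np

X = np.arange(6.0).reshape(3, 2)   # three 2-vectors, one per row

# np.inner contracts the last axis of each argument, so for two 2-D inputs
# it returns ALL pairwise inner products -- a (3, 3) matrix here -- rather
# than the per-row quantity x_i.T @ x_i.
print(np.inner(X, X).shape)         # (3, 3)

# The matched (per-row) inner products are what the application needed;
# einsum can express that directly by repeating the row index:
print(np.einsum('ij,ij->i', X, X))  # [ 1. 13. 41.]
```

So the desired result is the diagonal of `np.inner(X, X)`, but computing the full pairwise matrix for 50k vectors just to take its diagonal would be far too expensive.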

Then a labmate pointed out that I can just do some slick matrix multiplication 
to compute the same quantity: `(X.T * (A @ X.T)).sum(axis=0)` (the inner 
parentheses matter, since `*` and `@` have equal precedence in Python). I 
opened [a PR] with this, and proposed that we define a new function called 
`inner_prods` for this.
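A quick sketch checking the trick against a naive loop and an equivalent `einsum` spelling (sizes are shrunk for illustration; the original problem used roughly n = 50,000 and d = 1,000):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 10
X = rng.standard_normal((n, d))   # one vector per row
A = rng.standard_normal((d, d))

# Naive loop: r[i] = x_i.T @ A @ x_i
naive = np.array([x @ A @ x for x in X])

# The matrix-multiplication trick. Note the parentheses: since * and @
# share precedence and associate left-to-right, X.T * A @ X.T would parse
# as (X.T * A) @ X.T, which is not the intended computation.
trick = (X.T * (A @ X.T)).sum(axis=0)

# The same contraction written as an einsum.
via_einsum = np.einsum('ij,jk,ik->i', X, A, X)

assert np.allclose(naive, trick)
assert np.allclose(naive, via_einsum)
```

Both one-liners avoid materializing the full `(n, n)` pairwise matrix, which is what makes them fast enough for 50k vectors.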

However, in the PR, @shoyer pointed out:

> The main challenge is to figure out how to transition the behavior of all 
> these operations, while preserving backwards compatibility. Quite likely, we 
> need to pick new names for these functions, though we should try to pick 
> something that doesn't suggest that they are second class alternatives.

Do we choose new function names? Do we add a keyword argument that changes what 
`np.inner` returns?

[a PR]: https://github.com/numpy/numpy/pull/7690



_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion