I agree that this seems more like a scipy feature than a numpy feature.
Users with structured matrices often use a sparse matrix format, though the
API for using them in solvers could use some work. (I have a
work-in-progress PR along those lines here:
https://github.com/scipy/scipy/pull/6331)
> Decoupled or not, sparse still needs to be dealt with. What is the plan?
>
My view would be:
- keep current sparse matrices as-is (with improvements, like
__numpy_ufunc__ and the various performance improvements that regularly get
done)
- once one of the sparse *array* implementations progresses
I agree with Ralf; coupling these changes to sparse is a bad idea.
I think that scipy.sparse will be an important consideration during the
deprecation process, though, perhaps as an indicator of how painful the
transition might be for third-party code.
I'm +1 for splitting matrices out into a separate package.
On Fri, Jan 6, 2017 at 6:19 PM, Ralf Gommers wrote:
> This sounds like a reasonable idea. Timeline could be something like:
>
> 1. Now: create new package, deprecate np.matrix in docs.
> 2. In say 1.5 years: start issuing visible deprecation warnings in numpy
> 3. After
I'm also in the non-subclass array-like camp, and I'd love to just write
vindex and oindex methods, then have:
    def __getitem__(self, idx):
        return np.dispatch_getitem(self, idx)
Where "dispatch_getitem" does some basic argument checking and calls either
vindex or oindex as appropriate.
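No such dispatcher exists in NumPy today, but a minimal sketch of the idea might
look like the following. The dispatch rule (more than one array index means
vectorized semantics, otherwise outer semantics) and the `ToyArray` class are
illustrative assumptions on my part, not NumPy's actual behavior:

```python
import numpy as np

def dispatch_getitem(arr, idx):
    # Hypothetical helper (np.dispatch_getitem does not exist): normalize the
    # index to a tuple, then route it. Rule used here, for illustration only:
    # more than one array index -> vectorized (fancy) indexing via vindex;
    # otherwise -> outer/orthogonal indexing via oindex.
    if not isinstance(idx, tuple):
        idx = (idx,)
    arrays = tuple(np.asarray(i) for i in idx)
    if sum(a.ndim > 0 for a in arrays) > 1:
        return arr.vindex(arrays)
    return arr.oindex(arrays)

class ToyArray:
    # Toy array-like that only understands 1-D integer-array indices.
    def __init__(self, data):
        self.data = np.asarray(data)

    def oindex(self, idx):
        # Outer indexing: each index array selects along its own axis.
        return self.data[np.ix_(*idx)]

    def vindex(self, idx):
        # Vectorized indexing: index arrays are broadcast together.
        return self.data[idx]

    def __getitem__(self, idx):
        return dispatch_getitem(self, idx)

data = np.arange(16).reshape(4, 4)
t = ToyArray(data)
rows, cols = np.array([0, 2]), np.array([1, 3])
print(t[rows, cols])  # vindex: pointwise, elements (0, 1) and (2, 3)
print(t[rows])        # oindex: whole rows 0 and 2
```

The appeal of this pattern is that an array-like only has to implement the two
unambiguous methods; all the argument-checking lives in one shared helper.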
Maybe
A simple workaround gets the speed back:
In [11]: %timeit (X.T * A.dot(X.T)).sum(axis=0)
1 loop, best of 3: 612 ms per loop
In [12]: %timeit np.einsum('ij,ji->j', A.dot(X.T), X)
1 loop, best of 3: 414 ms per loop
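For reference, both timed expressions compute the diagonal of X·A·Xᵀ. A small
check of the equivalence (the shapes here are assumptions, since the
benchmark's X and A are not shown; the three-operand einsum is my
reconstruction of the original expression):

```python
import numpy as np

# Assumed shapes: A square, X.T conformable with A.dot(X.T).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
X = rng.standard_normal((30, 50))

workaround = (X.T * A.dot(X.T)).sum(axis=0)        # elementwise product, column sums
einsum_ver = np.einsum('ij,ji->j', A.dot(X.T), X)  # same contraction in one call
three_arg  = np.einsum('ij,jk,ki->i', X, A, X.T)   # the three-operand form

# All three compute diag(X @ A @ X.T).
expected = np.diag(X @ A @ X.T)
assert np.allclose(workaround, expected)
assert np.allclose(einsum_ver, expected)
assert np.allclose(three_arg, expected)
```

The speedup of the workaround comes from doing the matrix product with BLAS
first and leaving only an elementwise reduction for einsum (or `.sum`).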
If working as advertised, the code in gh-5488 will convert the
three-argument
Another +1 for Josef's interpretation from me. Consistency with np.sum
seems like the best option.
On Sat, Mar 26, 2016 at 11:12 PM, Juan Nunez-Iglesias wrote:
> Thanks for clarifying, Jaime, and fwiw I agree with Josef: I would expect
> np.bincount to behave like np.sum
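For what it's worth, one concrete reading of that consistency argument (the
empty-input case is my example, not necessarily the exact case from the
thread): summing nothing returns the additive identity, and counting nothing
returns zero counts.

```python
import numpy as np

empty = np.array([], dtype=np.intp)

# np.sum of an empty array returns the additive identity...
assert np.sum(empty) == 0

# ...and np.bincount of an empty input likewise returns zero counts:
# an empty array, or all zeros when minlength is given.
print(np.bincount(empty))               # []
print(np.bincount(empty, minlength=4))  # [0 0 0 0]
```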
I'll echo Marten's sentiments. I've found __numpy_ufunc__ as it exists in
the master branch to be quite useful in my experiments with sparse arrays (
https://github.com/perimosocordiae/sparray), and I think it'll be a net
benefit to scipy.sparse as well (despite the unpleasantness with __mul__).
I believe this line is the reason:
https://github.com/numpy/numpy/blob/c0e48cfbbdef9cca954b0c4edd0052e1ec8a30aa/numpy/core/src/multiarray/item_selection.c#L2110
On Thu, Dec 17, 2015 at 11:52 AM, Raghav R V wrote:
> I was just playing with `count_nonzero` and found it to be
The short answer is: "kind of".
These two GitHub issues explain what's going on in more depth:
https://github.com/scipy/scipy/issues/3995
https://github.com/scipy/scipy/issues/4239
As for the warning only showing once, that's Python's default behavior for
warnings: by default, each distinct warning is shown only once per code location.
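A quick demonstration of that once-per-location default, and of the
`"always"` filter that opts out of it:

```python
import warnings

def noisy():
    warnings.warn("deprecated", DeprecationWarning)

# "default" action: a given warning is shown once per code location.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("default")
    noisy()
    noisy()
assert len(caught) == 1  # second identical warning was suppressed

# "always" action: every occurrence is shown.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    noisy()
    noisy()
assert len(caught) == 2
```

`warnings.simplefilter("always")` (or `python -W always`) is the usual way to
see every occurrence while debugging.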