Re: [Numpy-discussion] Fwd: Backslash operator A\b and np/sp.linalg.solve

2017-01-10 Thread CJ Carey
I agree that this seems more like a scipy feature than a numpy feature. Users with structured matrices often use a sparse matrix format, though the API for using them in solvers could use some work. (I have a work-in-progress PR along those lines here: https://github.com/scipy/scipy/pull/6331)
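Not from the thread, but for context: a minimal sketch of the sparse solve path the post refers to, using scipy.sparse.linalg.spsolve on a small system (the matrix and right-hand side are toy values of my own):

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import spsolve

# Small tridiagonal system, built dense and then converted for brevity.
A = csr_matrix(np.diag([2.0] * 5) + np.diag([-1.0] * 4, 1) + np.diag([-1.0] * 4, -1))
b = np.ones(5)

x = spsolve(A, b)            # the sparse analogue of np.linalg.solve
assert np.allclose(A @ x, b)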

Re: [Numpy-discussion] Deprecating matrices.

2017-01-07 Thread CJ Carey
> Decoupled or not, sparse still needs to be dealt with. What is the plan?
> My view would be:
> - keep current sparse matrices as is (with improvements, like __numpy_func__ and the various performance improvements that regularly get done)
> - once one of the sparse *array* implementations progresses

Re: [Numpy-discussion] Deprecating matrices.

2017-01-07 Thread CJ Carey
I agree with Ralf; coupling these changes to sparse is a bad idea. I think that scipy.sparse will be an important consideration during the deprecation process, though, perhaps as an indicator of how painful the transition might be for third party code. I'm +1 for splitting matrices out into a

Re: [Numpy-discussion] Deprecating matrices.

2017-01-06 Thread CJ Carey
On Fri, Jan 6, 2017 at 6:19 PM, Ralf Gommers wrote:
> This sounds like a reasonable idea. Timeline could be something like:
>
> 1. Now: create new package, deprecate np.matrix in docs.
> 2. In say 1.5 years: start issuing visible deprecation warnings in numpy
> 3. After

Re: [Numpy-discussion] New Indexing Methods Revival #N (subclasses!)

2016-09-06 Thread CJ Carey
I'm also in the non-subclass array-like camp, and I'd love to just write vindex and oindex methods, then have:

def __getitem__(self, idx):
    return np.dispatch_getitem(self, idx)

Where "dispatch_getitem" does some basic argument checking and calls either vindex or oindex as appropriate. Maybe
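To make the idea concrete, here is a minimal sketch of that dispatch pattern. The dispatch_getitem helper is hypothetical (the post proposes np.dispatch_getitem; nothing like it exists in NumPy), and the vindex/oindex bodies are placeholders:

import numpy as np

def dispatch_getitem(obj, idx):
    # Hypothetical stand-in for the proposed np.dispatch_getitem: send
    # array/fancy indices to vindex, everything else to oindex.
    if not isinstance(idx, tuple):
        idx = (idx,)
    if any(isinstance(i, (list, np.ndarray)) for i in idx):
        return obj.vindex(idx)
    return obj.oindex(idx)

class MyArrayLike:
    # Toy non-subclass array-like: wraps an ndarray and forwards indexing
    # through the dispatcher, as the post suggests.
    def __init__(self, data):
        self.data = np.asarray(data)

    def vindex(self, idx):
        return self.data[idx]   # placeholder for real vectorized indexing

    def oindex(self, idx):
        return self.data[idx]   # placeholder for real outer/orthogonal indexing

    def __getitem__(self, idx):
        return dispatch_getitem(self, idx)

print(MyArrayLike(np.arange(10))[np.array([1, 3, 5])])   # [1 3 5]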

Re: [Numpy-discussion] ENH: compute many inner products quickly

2016-06-05 Thread CJ Carey
A simple workaround gets the speed back:

In [11]: %timeit (X.T * A.dot(X.T)).sum(axis=0)
1 loop, best of 3: 612 ms per loop

In [12]: %timeit np.einsum('ij,ji->j', A.dot(X.T), X)
1 loop, best of 3: 414 ms per loop

If working as advertised, the code in gh-5488 will convert the three-argument
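For readers outside the thread: both expressions compute the per-row quadratic form x_i @ A @ x_i. A small self-contained check (array sizes here are my own, much smaller than in the benchmark above):

import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 50
X = rng.standard_normal((n, d))
A = rng.standard_normal((d, d))

# Both one-liners compute x_i @ A @ x_i for every row x_i of X.
q1 = (X.T * A.dot(X.T)).sum(axis=0)
q2 = np.einsum('ij,ji->j', A.dot(X.T), X)
q3 = np.array([x @ A @ x for x in X])   # naive reference loop

assert np.allclose(q1, q2) and np.allclose(q1, q3)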

Re: [Numpy-discussion] Make np.bincount output same dtype as weights

2016-03-28 Thread CJ Carey
Another +1 for Josef's interpretation from me. Consistency with np.sum seems like the best option.

On Sat, Mar 26, 2016 at 11:12 PM, Juan Nunez-Iglesias wrote:
> Thanks for clarifying, Jaime, and fwiw I agree with Josef: I would expect
> np.bincount to behave like np.sum
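The inconsistency under discussion, as a small example (behavior of the NumPy releases of that era: np.bincount casts weights to double, while np.sum preserves the input dtype):

import numpy as np

x = np.array([0, 1, 1, 2])
w = np.array([0.5, 1.0, 1.5, 2.0], dtype=np.float32)

# np.sum preserves the float32 dtype of the weights...
print(w.sum().dtype)                    # float32

# ...while np.bincount upcasts the weighted counts to float64,
# which is the inconsistency discussed in the thread.
print(np.bincount(x, weights=w).dtype)  # float64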

Re: [Numpy-discussion] 1.10.3 release tomorrow, 1.11.x branch this month.

2016-01-05 Thread CJ Carey
I'll echo Marten's sentiments. I've found __numpy_ufunc__ as it exists in the master branch to be quite useful in my experiments with sparse arrays (https://github.com/perimosocordiae/sparray), and I think it'll be a net benefit to scipy.sparse as well (despite the unpleasantness with __mul__).
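For context, the ufunc override hook discussed here eventually shipped in NumPy 1.13 under the name __array_ufunc__ (NEP 13). A minimal sketch, not taken from sparray or scipy.sparse, of what such an override looks like:

import numpy as np

class Wrapped:
    # Toy container that intercepts ufunc calls via __array_ufunc__.
    def __init__(self, data):
        self.data = np.asarray(data)

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        # Unwrap any Wrapped inputs, apply the ufunc, and re-wrap the result.
        arrays = [x.data if isinstance(x, Wrapped) else x for x in inputs]
        return Wrapped(getattr(ufunc, method)(*arrays, **kwargs))

print(np.add(Wrapped([1, 2]), 3).data)   # [4 5]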

Re: [Numpy-discussion] A minor clarification on why count_nonzero is faster for boolean arrays

2015-12-17 Thread CJ Carey
I believe this line is the reason:
https://github.com/numpy/numpy/blob/c0e48cfbbdef9cca954b0c4edd0052e1ec8a30aa/numpy/core/src/multiarray/item_selection.c#L2110

On Thu, Dec 17, 2015 at 11:52 AM, Raghav R V wrote:
> I was just playing with `count_nonzero` and found it to be
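A rough way to reproduce the observation from that thread (sizes and iteration counts are my own; the linked line is the boolean fast path):

import numpy as np
from timeit import timeit

a_int = np.random.randint(0, 2, size=10_000_000)
a_bool = a_int.astype(bool)

# The boolean version typically runs faster thanks to the dedicated fast path.
print(timeit(lambda: np.count_nonzero(a_int), number=20))
print(timeit(lambda: np.count_nonzero(a_bool), number=20))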

Re: [Numpy-discussion] asarray(sparse) -> object

2015-11-20 Thread CJ Carey
The short answer is: "kind of". These two GitHub issues explain what's going on in more depth:

https://github.com/scipy/scipy/issues/3995
https://github.com/scipy/scipy/issues/4239

As for the warning only showing once, that's Python's default behavior for warnings:
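A short illustration of both points, assuming the behavior described in those issues (np.asarray wraps a sparse matrix in a 0-d object array rather than densifying it):

import warnings
import numpy as np
from scipy.sparse import csr_matrix

S = csr_matrix(np.eye(3))

# np.asarray does not densify a sparse matrix; it wraps it in a 0-d object array.
wrapped = np.asarray(S)
print(wrapped.dtype, wrapped.shape)   # object ()

# To get a real ndarray, convert explicitly.
dense = S.toarray()
print(dense.shape)                    # (3, 3)

# Python's default warning filter reports a given warning once per location;
# switch to "always" to see every occurrence.
warnings.simplefilter("always")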