> Also, can’t get __array_wrap__ to work. The arguments it receives after
> __iadd__ are all
> post-operation. Decided not to do it this way this time, so as not to hardcode
> such functionality
> into the class, but if there is a way to robustly achieve this it would be
> good to know.
It is
> One more thing to mention on this topic.
>
> From a certain size dot product becomes faster than sum (due to
> parallelisation I guess?).
>
> E.g.
> def dotsum(arr):
>     a = arr.reshape(1000, 100)
>     return a.dot(np.ones(100)).sum()
>
> a = np.ones(100000)
>
> In [45]: %timeit
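A self-contained version of the comparison above (the helper names `plain_sum` and `dotsum` are just for illustration; the actual timings are machine- and BLAS-dependent):

```python
import timeit

import numpy as np

a = np.ones(100_000)

def plain_sum(arr):
    # straightforward reduction
    return arr.sum()

def dotsum(arr):
    # reshape into 1000 rows of 100, let BLAS compute the row dot
    # products against a ones vector, then reduce the 1000 partials
    return arr.reshape(1000, 100).dot(np.ones(100)).sum()

t_sum = timeit.timeit(lambda: plain_sum(a), number=1000)
t_dot = timeit.timeit(lambda: dotsum(a), number=1000)
print(f"sum: {t_sum:.4f}s  dotsum: {t_dot:.4f}s")
```

Both compute the same value; whether the dot-based version wins depends on array size and the threading of the BLAS in use.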
> From my experience, calling methods is generally faster than
> functions. I figure it is due to having less overhead figuring out the
> input. Maybe it is not significant for large data, but it does make a
> difference even when working with medium-sized arrays - say float arrays
> of size 5000.
>
>
> What were your conclusions after experimenting with chained ufuncs?
>
> If the speed is comparable to numexpr, wouldn’t it be `nicer` to have a
> non-string input format?
>
> It would feel a bit less like a black-box.
I haven't gotten further than that yet; it is just some toying around I've
been
Hi Oyibo,
> I'm proposing the introduction of a `pipe` method for NumPy arrays to enhance
> their usability and expressiveness.
I think it is an interesting idea, but agree with Robert that it is
unlikely to fly on its own. Part of the logic of even frowning on
methods like .mean() and .sum()
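For reference, the proposed method is essentially a one-liner; a free-function sketch of its semantics, modelled on `pandas.DataFrame.pipe` (the name `pipe` here is the proposal's, the helper itself is hypothetical):

```python
import numpy as np

def pipe(arr, func, *args, **kwargs):
    # Free-function stand-in for the proposed ndarray.pipe method:
    # simply calls func(arr, *args, **kwargs).
    return func(arr, *args, **kwargs)

a = np.arange(6.0)
# chaining style, pipe(pipe(a, np.sqrt), np.sum),
# instead of the inside-out np.sum(np.sqrt(a))
result = pipe(pipe(a, np.sqrt), np.sum)
```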
> For my own work, I required the intersect1d function to work on multiple
> arrays while returning the indices (using `return_indices=True`).
> Consequently I changed the function in numpy and now I am seeking
> feedback from the community.
>
> This is the corresponding PR:
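A minimal sketch of the multi-array case; the index-tracking part, which is the substance of the PR, is deliberately left out, and the helper name `intersect_many` is hypothetical:

```python
import functools

import numpy as np

def intersect_many(*arrays):
    # Intersection of any number of 1-D arrays via a reduce over
    # np.intersect1d. Unlike the PR under discussion, this does not
    # return indices back into each input array.
    return functools.reduce(np.intersect1d, arrays)

common = intersect_many(np.array([1, 2, 3, 4]),
                        np.array([2, 3, 5]),
                        np.array([0, 2, 3]))
```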
> I tend to agree with not using the complex conjugate for vecmat, but would
> prefer having
> separate functions for that that make it explicit in the name. I also note
> that mathematicians
> use sesquilinear forms, which have the vector conjugate on the other side, so
> there are
> different
Hi Alan,
The problem with .dot is not that it is not possible, but more that it
is not obvious exactly what will happen given the overloading of
multiple use cases; indeed, this is why `np.matmul` was created. For
the stacks of vectors case in particular, it is surprising that the
vector
> Could you please offer some code or math notation to help communicate this?
> I am forced to guess at the need.
>
> The words "matrix" and "vector" are ambiguous.
> After all, matrices (of given shape) are a type of vector (i.e., can be added
> and scaled.)
> So if by "matrix" you mean "2d
> FWIW, +1 for matvec & vecmat to complement matmat (erm, matmul). Having a
> binop where one argument is a matrix and the other is a
> stack/batch of vectors is indeed awkward otherwise, and a dedicated function
> to clearly distinguish "two matrices" from "a matrix and a
> batch of vectors"
> Why do these belong in NumPy? What is the broad field of application of these
> functions? And,
> does a more general concept underpin them?
Multiplication of a matrix with a vector is about as common as matrix
with matrix or vector with vector, and not currently easy to do for
stacks of
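For concreteness, here is the current workaround for a stack of matrix-vector products, which is exactly the awkwardness the proposed gufuncs would remove:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3, 3))  # a stack of five 3x3 matrices
x = rng.standard_normal((5, 3))     # a matching stack of 3-vectors

# With plain matmul, a trailing axis must be added and stripped by hand:
y = (A @ x[..., np.newaxis])[..., 0]

# The equivalent einsum spelling of the same stacked matrix-vector product:
y2 = np.einsum('...ij,...j->...i', A, x)
```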
> I can understand the desire to generalise the idea of matrix
> multiplication for when the arrays are not both 2-D but taking the
> complex conjugate makes absolutely no sense in the context of matrix
> multiplication.
>
> You note above that "vecmat is defined as x†A" but my interpretation
>
>> I also note that for complex numbers, `vecmat` is defined as `x†A`,
>> i.e., the complex conjugate of the vector is taken. This seems to be the
>> standard and is what we used for `vecdot` too (`x†x`). However, it is
>> *not* what `matmul` does for vector-matrix or indeed vector-vector
>>
> For dot product I can convince myself this is a math definition thing and
> accept the
> conjugation. But for "vecmat" why the complex conjugate of the vector? Are we
> assuming that
> 1D things are always columns? I am also a bit lost on the difference of dot,
> vdot and vecdot.
>
> Also if
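The difference between the three can be seen directly (a small illustration; `np.vecdot` itself only exists in NumPy >= 2.0, so its conjugating definition is spelled out with einsum here):

```python
import numpy as np

x = np.array([1 + 2j, 3 - 1j])
y = np.array([2 - 1j, 1 + 1j])

d = np.dot(x, y)    # sum_i x_i * y_i        -- no conjugation
v = np.vdot(x, y)   # sum_i conj(x_i) * y_i  -- conjugates the first argument
# np.vecdot follows vdot's conjugating convention but broadcasts over
# stacks of vectors; written out with einsum:
v2 = np.einsum('...i,...i->...', x.conj(), y)
```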
Hi All,
I have a PR [1] that adds `np.matvec` and `np.vecmat` gufuncs for
matrix-vector and vector-matrix calculations, to add to plain
matrix-matrix multiplication with `np.matmul` and the inner vector
product with `np.vecdot`. They call BLAS where possible for speed.
I'd like to hear whether
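In einsum terms, what the two proposed gufuncs compute is roughly the following (a sketch of the definitions, not of the PR's BLAS implementation; note the conjugation on the vecmat side):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

mv = np.einsum('...ij,...j->...i', A, x)         # matvec:  A x
vm = np.einsum('...i,...ij->...j', y.conj(), A)  # vecmat:  y† A (conjugated)
```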
Hi All,
Thanks for the comments on complex sign - it seems there is good support
for it.
On copysign, currently it is not supported for complex values at all. I
think given the responses so far, it looks like we should just keep it
like that; although my extension was fairly logical, I cannot
Hi Sebastian,
> That looks nice, I don't have a clear feeling on the order of items, if
> we think of it in terms of `(start, stop)` there was also the idea
> voiced to simply add another name in which case you would allow start
> and stop to be separate arrays.
Yes, one could add another
Hi Martin,
I agree it is a long-standing issue, and I was reminded of it by your
comment. I have a draft PR at https://github.com/numpy/numpy/pull/25476
that does not change the old behaviour, but allows you to pass in a
start-stop array which behaves more sensibly (exact API TBD).
Please have
Hi Ralf,
I realize you feel strongly that this whole thread is rehashing history,
but I think it is worth pointing out that many seem to consider that the
criterion for allowing backward incompatible changes, i.e., that "existing
code is buggy or is consistently confusing many users", is actually
> The main motivation for the @ PEP was actually to be able to get rid of
> objects like np.matrix and scipy.sparse matrices that redefine the meaning
> of the * operator. Quote: "This PEP proposes the minimum effective change
> to Python syntax that will allow us to drain this swamp [meaning
Hi Ralf,
On Tue, Jun 25, 2019 at 6:31 PM Ralf Gommers wrote:
>
>
> On Tue, Jun 25, 2019 at 11:02 PM Marten van Kerkwijk <
> m.h.vankerkw...@gmail.com> wrote:
>
>>
>> For the names, my suggestion of lower-casing the M in the initial one,
>> i.e., `.mT`
Hi Juan,
On Tue, Jun 25, 2019 at 9:35 AM Juan Nunez-Iglesias
wrote:
> On Mon, 24 Jun 2019, at 11:25 PM, Marten van Kerkwijk wrote:
>
> Just to be sure: for a 1-d array, you'd both consider `.T` giving a shape
> of `(n, 1)` the right behaviour? I.e., it should still change from what
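The two behaviours under discussion, side by side (current NumPy on top, the proposed column-producing variant written out explicitly below):

```python
import numpy as np

v = np.ones(3)
current = v.T.shape                # .T is a no-op on 1-d arrays: (3,)
proposed = v[:, np.newaxis].shape  # an explicit column vector: (3, 1)
```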
Hi All,
The examples with different notation brought back memory of another
solution: define
`m.ᵀ` and `m.ᴴ`. This is possible, since Python 3 allows any unicode for
names, nicely readable, but admittedly a bit annoying to enter (in emacs,
set-input-method to TeX and then ^T, ^H).
More seriously,
On Mon, Jun 24, 2019 at 7:21 PM Stephan Hoyer wrote:
> On Mon, Jun 24, 2019 at 3:56 PM Allan Haldane
> wrote:
>
>> I'm not at all set on that behavior and we can do something else. For
>> now, I chose this way since it seemed to best match the "IGNORE" mask
>> behavior.
>>
>> The behavior you
Hi Allan,
> The alternative solution in my model would be to replace `np.dot` with a
> > masked-specific implementation of what `np.dot` is supposed to stand for
> > (in your simple example, `np.add.reduce(np.multiply(m, m))` - more
> > generally, add relevant `outer` and `axes`). This would be
Hi Eric,
The easiest definitely is for the mask to just propagate, which means that
even if just one point is masked, all points in the fft will be masked.
On the direct point I made, I think it is correct that since one can think
of the Fourier transform of a sine/cosine fit, then there is a solution
Hi Allan,
Thanks for bringing up the noclobber explicitly (and Stephan for asking for
clarification; I was similarly confused).
It does clarify the difference in mental picture. In mine, the operation
would indeed be guaranteed to be done on the underlying data, without copy
and without
Hi Stephan,
Yes, the complex conjugate dtype would make things a lot faster, but I
don't quite see why we would wait for that with introducing the `.H`
property.
I do agree that `.H` is the correct name, giving most immediate clarity
(i.e., people who know what conjugate transpose is, will
Hi Eric,
On your other points:
I remain unconvinced that Mask classes should behave differently on
> different ufuncs. I don’t think np.minimum(ignore_na, b) is any different
> to np.add(ignore_na, b) - either both should produce b, or both should
> produce ignore_na. I would lean towards
Hi Stephan,
Eric perhaps explained my concept better than I could!
I do agree that, as written, your example would be clearer, but Allan's
code and the current MaskedArray code do not bear that much resemblance to
it, and mine even less, as they deal with operators as whole groups.
For mine, it
I had not looked at any implementation (only remembered the nice idea of
"importing from the future"), and looking at the links Eric shared, it
seems that the only way this would work is, effectively, pre-compilation
doing a `.replace('.T', '._T_from_the_future')`, where you'd be
hoping that there
Hi Stephan,
In slightly changed order:
Let me try to make the API issue more concrete. Suppose we have a
> MaskedArray with values [1, 2, NA]. How do I get:
> 1. The sum ignoring masked values, i.e., 3.
> 2. The sum that is tainted by masked values, i.e., NA.
>
> Here's how this works with
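For comparison, with the existing `np.ma` the two answers come from two different spellings (a sketch using a float NaN as the "tainted" marker, since np.ma has no NA):

```python
import numpy as np

m = np.ma.MaskedArray([1.0, 2.0, 3.0], mask=[False, False, True])

ignoring = m.sum()                # masked element skipped    -> 3.0
tainted = m.filled(np.nan).sum()  # masked element propagates -> nan
```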
Hi Tom,
I think a sensible alternative mental model for the MaskedArray class is
>> that all it does is forward any operations to the data it holds and
>> separately propagate a mask,
>>
>
> I'm generally on-board with that mental picture, and agree that the
> use-case described by Ben
> I think a sensible alternative mental model for the MaskedArray class is
>> that all it does is forward any operations to the data it holds and
>> separately propagate a mask, ORing elements together for binary operations,
>> etc., and explicitly skipping masked elements in reductions (ideally
> forces copies to
> be explicit in user code. 2. disallowing direct modification of the mask
> lowers the "API surface area" making people's MaskedArray code less
> buggy and easier to read: Exposure of nonsense values by "unmasking" is
> one less possibility to keep in mind.
`); is this on
purpose?
Anyway, it would seem easily at the point where I should comment on your
repository rather than in the mailing list!
All the best,
Marten
On Wed, Jun 19, 2019 at 5:45 PM Allan Haldane
wrote:
> On 6/18/19 2:04 PM, Marten van Kerkwijk wrote:
> >
> >
> > On Tue, Jun
On Tue, Jun 18, 2019 at 12:55 PM Allan Haldane
wrote:
> > This may be too much to ask from the initializer, but, if so, it still
> > seems most useful if it is made as easy as possible to do, say, `class
> > MaskedQuantity(Masked, Quantity): `.
>
> Currently MaskedArray does not accept
Hi Allan,
Thanks for the message and link! In astropy, we've been struggling with
masking a lot, and one of the main conclusions I have reached is that
ideally one has a more abstract `Masked` class that can take any type of
data (including `ndarray`, of course), and behaves like that data as
rward.
All the best,
Marten
p.s. And, yes, `__array_function__` is quite wonderful!
On Fri, Jun 14, 2019 at 3:46 AM Ralf Gommers wrote:
>
>
> On Fri, Jun 14, 2019 at 2:21 AM Marten van Kerkwijk <
> m.h.vankerkw...@gmail.com> wrote:
>
>> Hi Ralf,
>>
>>
Hi Ralf,
Thanks both for the reply and sharing the link. I recognize much (from both
sides!).
>
> More importantly, I think we should not even consider *discussing*
> removing` __array_function__` from np.isposinf (or any similar one off
> situation) before there's a new bigger picture design.
Hi Ralf,
>> I guess the one immediate question is whether `np.sum` and the like
>> should be overridden by `__array_function__` at all, given that what should
>> be the future recommended override already works.
>>
>
> I'm not sure I understand the rationale for this. Design consistency
>
On Thu, Jun 13, 2019 at 12:46 PM Stephan Hoyer wrote:
>
>
>> But how about `np.sum` itself? Right now, it is overridden by
>> __array_function__ but classes without __array_function__ support can also
>> override it through the method lookup and through __array_ufunc__.
>>
>> Would/should there
Hi Ralf, others,
>> Anyway, I guess this is still a good example to consider for how we
>> should go about getting to a new implementation, ideally with just a
>> single-way to override?
>>
>> Indeed, how do we actually envisage deprecating the use of
>> `__array_function__` for a given part of
Stephan Hoyer wrote:
>
>> On Wed, Jun 12, 2019 at 5:55 PM Marten van Kerkwijk <
>> m.h.vankerkw...@gmail.com> wrote:
>>
>>> Hi Ralf,
>>>
>>> You're right, the problem is with the added keyword argument (which
>>> would appear also if we
The attrs like you sent definitely sounded like it would translate to numpy
nearly trivially. I'm very much in favour!
-- Marten
___
NumPy-Discussion mailing list
NumPy-Discussion@python.org
https://mail.python.org/mailman/listinfo/numpy-discussion
, 2019 at 4:32 PM Ralf Gommers wrote:
>
>
> On Wed, Jun 12, 2019 at 12:02 AM Stefan van der Walt
> wrote:
>
>> On Tue, 11 Jun 2019 15:10:16 -0400, Marten van Kerkwijk wrote:
>> > In a way, I brought it up mostly as a concrete example of an internal
>> >
Overall, in favour of splitting the large files, but I don't like that the
notes stop being under version control (e.g., a follow-up PR slightly
changes things, how does the note gets edited/reverted?).
Has there been any discussion of having, e.g., a directory
`docs/1.17.0-notes/`, and everyone
Hi Sebastian,
Thanks for the overview! In the value-based casting, what perhaps surprises
me most is that it is done within a kind; it would seem an improvement to
check whether a given integer scalar is exactly representable in a given
float (your example of 1024 in `float16`). If we switch to
> In a way, I brought it up mostly as a concrete example of an internal
> implementation which we cannot change to an objectively cleaner one because
> other packages rely on an out-of-date numpy API.
>
> Should have added: rely on an out-of-date numpy API where we have multiple
ways for packages
Hi All,
In https://github.com/numpy/numpy/pull/12801, Tyler has been trying to use
the new `where` argument for reductions to implement `nansum`, etc., using
simplifications that boil down to `np.sum(..., where=~isnan(...))`.
A problem that occurs is that `np.sum` will use a `.sum` method if
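The simplification in question, in runnable form:

```python
import numpy as np

a = np.array([1.0, np.nan, 2.0])

# nansum expressed through the `where` argument of the reduction:
via_where = np.sum(a, where=~np.isnan(a))
```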
On Fri, Jun 7, 2019 at 1:19 AM Ralf Gommers wrote:
>
>
> On Fri, Jun 7, 2019 at 1:37 AM Nathaniel Smith wrote:
>
>>
>> My intuition is that what users actually want is for *native Python
>> types* to be treated as having 'underspecified' dtypes, e.g. int is
>> happy to coerce to
Hi Sebastian,
Tricky! It seems a balance between unexpected memory blow-up and unexpected
wrapping (the latter mostly for integers).
Some comments specifically on your message first, then some more general
related ones.
1. I'm very much against letting `a + b` do anything else than `np.add(a,
Hi Stefan,
On Mon, Jun 3, 2019 at 4:26 PM Stefan van der Walt
wrote:
> Hi Marten,
>
> On Sat, 01 Jun 2019 12:11:38 -0400, Marten van Kerkwijk wrote:
> > Third, we could actually implement the logical groupings identified in
> the
> > code base (and describing them!).
On Sun, Jun 2, 2019 at 2:21 PM Eric Wieser
wrote:
> Some of your categories here sound like they might be suitable for ABCs
> that provide mixin methods, which is something I think Hameer suggested in
> the past. Perhaps it's worth re-exploring that avenue.
>
> Eric
>
>
Indeed, and of course for
> Our API is huge. A simple count:
> main namespace: 600
> fft: 30
> linalg: 30
> random: 60
> ndarray: 70
> lib: 20
> lib.npyio: 35
> etc. (many more ill-thought out but not clearly private submodules)
>
>
I would perhaps start with ndarray itself. Quite a lot seems superfluous
Shapes:
- need:
> > In this respect, I think an excellent place to start might be
>> > something you are planning already anyway: update the user
>> > documentation
>> >
>>
>> I would include tests as well. Rather than hammer out a full standard
>> based on extensive discussions and negotiations, I
Hi Ralf,
Despite sharing Nathaniel's doubts about the ease of defining the numpy API
and the likelihood of people actually sticking to a limited subset of what
numpy exposes, I quite like the actual things you propose to do!
But my liking it is for reasons that are different from your stated
Hi Sebastian, Stéfan,
Thanks for the very good summaries!
An additional item worth mentioning is that by using
`__skip_array_function__` everywhere inside, one minimizes the performance
penalty of checking for `__array_function__`. It would obviously be worth
trying to do that, but ideally in a
I agree that we should not have two functions
I also am rather unsure whether a ufunc is a good idea. Earlier, while
discussing other possible additions, like `erf`, the conclusion seemed to
be that in numpy we should just cover whatever is in the C standard. This
suggests `sinc` should not be a
On a more general note, if we change to a ufunc, it will get us stuck with
sinc being the normalized version, where the units of the input have to be
in the half-cycles preferred by signal-processing people rather than the
radians preferred by mathematicians.
In this respect, note that there is
> Otherwise, there should
>>> be no change except additional features of ufuncs and the move to a C
>>> implementation.
>>>
>>
> I see this is one of the functions that uses asanyarray, so what about
> impact on subclass behavior?
>
So, subclasses are passed on, as they are in ufuncs. In general,
> If we want to keep an "off" switch we might want to add some sort of API
> for exposing whether NumPy is using __array_function__ or not. Maybe
> numpy.__experimental_array_function_enabled__ = True, so you can just test
> `hasattr(numpy, '__experimental_array_function_enabled__')`? This is
>
>> At scikit-image we place a very strong emphasis on code simplicity and
>> readability, so I also share Marten's concerns about code getting too
>> complex. My impression reading the NEP was "whoa, this is hard, I'm glad
>> smarter people than me are working on this, I'm sure it'l
Hi All,
For 1.17, there has been a big effort, especially by Stephan, to make
__array_function__ sufficiently usable that it can be exposed. I think this
is great, and still like the idea very much, but its impact on the numpy
code base has gotten so big in the most recent PR (gh-13585) that I
On Sun, Apr 28, 2019 at 9:20 PM Stephan Hoyer wrote:
> On Sun, Apr 28, 2019 at 8:42 AM Marten van Kerkwijk <
> m.h.vankerkw...@gmail.com> wrote:
>
>> In summary, I think the guarantees should be as follows:
>> 1.If you call np.function and
>> - do not define _
28, 2019 at 1:38 PM Marten van Kerkwijk
> wrote:
> >
> > Hi Nathaniel,
> >
> > I'm a bit confused why `np.concatenate([1, 2], [3, 4])` would be a
> problem. In the current model, all (numpy) functions fall back to
> `ndarray.__array_function__`, which does know what to
Hi Nathaniel,
I'm a bit confused why `np.concatenate([1, 2], [3, 4])` would be a problem.
In the current model, all (numpy) functions fall back to
`ndarray.__array_function__`, which does know what to do with anything that
doesn't have `__array_function__`: it just coerces it to array. Am I
Hi Ralf,
Thanks for the comments and summary slides. I think you're
over-interpreting my wish to break people's code! I certainly believe - and
think we all agree - that we remain as committed as ever to ensure that
```
np.function(inputs)
```
continues to work just as before. My main comment is
Hi All,
I agree with Ralf that there are two discussions going on, but also with
Hameer that they are related, in that part of the very purpose of
__array_function__ was to gain freedom to experiment with implementations.
And in particular the freedom to *assume* that inputs are arrays so that we
On Thu, Apr 25, 2019 at 6:04 PM Stephan Hoyer wrote:
> On Thu, Apr 25, 2019 at 12:46 PM Marten van Kerkwijk <
> m.h.vankerkw...@gmail.com> wrote:
>
>
> It would be nice, though, if we could end up with also option 4 being
>> available, if only because code t
It seems we are adding to the wishlist! I see four so far:
1. Exposed in API, can be overridden with __array_ufunc__
2. One that converts everything to ndarray (or subclass); essentially the
current implementation;
3. One that does asduckarray
4. One that assumes all arguments are arrays.
Maybe
Hi All,
Reading the discussion again, I've gotten somewhat unsure that it is
helpful to formalize a way to call an implementation that we can and
hopefully will change. Why not just leave it at __wrapped__? I think the
name is no worse and it is more obvious that one relies on something
private.
Very much second Joe's recommendations - especially trying NASA - which has
an amazing track record of open data also in astronomy (and a history of
open source analysis tools, as well as the "Astrophysics Data System").
-- Marten
Hi Ralf,
I'm sorry to hear the proposal did not pass the first round, but, having
looked at it briefly (about as much time as I would have spent had I been
on the panel), I have to admit I am not surprised: it is nice but nice is
not enough for a competition like this.
Compared to what will have
I somewhat share Nathaniel's worry that by providing
`__numpy_implementation__` we essentially get stuck with the
implementations we have currently, rather than having the hoped-for freedom
to remove all the `np.asarray` coercion. In that respect, an advantage of
using `_wrapped` is that it is
It may be relevant at this point to mention that the padding bytes do *not*
get copied - so you get a blob with possibly quite a lot of uninitialized
data. If anything, that seems a recipe for unexpected results. Are there
non-contrived examples where you would *want* this uninitialized blob?
Certainly have done `np.random.normal(size=2*n).view('c16')` very often. Makes
sense to just allow it to be generated directly. -- Marten
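That reinterpretation trick, written out with the modern Generator API (an assumption; the snippet above uses the legacy np.random.normal spelling):

```python
import numpy as np

n = 4
rng = np.random.default_rng(42)
# 2n float64 samples reinterpreted, without copying, as n complex128 values
z = rng.standard_normal(2 * n).view(np.complex128)
```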
On Sat, Mar 30, 2019 at 6:24 PM Hameer Abbasi
wrote:
> On Friday, Mar 29, 2019 at 6:03 PM, Hameer Abbasi <
> einstein.edi...@gmail.com> wrote:
>
> On Friday, Mar
Hi Frédéric,
The problem with any environment type variable is that when you disable the
dispatch functionality, all other classes that rely on being able to
override a numpy function stop working as well, i.e., the behaviour of
everything from dask to astropy's Quantity would depend on that
Fantastic!
-- Marten
On Wed, Feb 27, 2019 at 1:19 AM Stefan van der Walt
wrote:
> Hi everyone,
>
> The team at BIDS would like to take on an intern from Outreachy
> (https://www.outreachy.org), as part of our effort to grow the NumPy
> developer community.
>
> The internship is similar to a
Since numpy generally does not expose parts as modules, I think a separate
namespace for the exceptions makes sense. I prefer `np.exceptions` over
`np.errors`.
It might still make sense for that namespace to import from the different
parts, i.e., also have `np.core.exceptions`,
There is a long-standing request to require an explicit opt-in for
dtype=object: https://github.com/numpy/numpy/issues/5353
-- Marten
Hi Juan,
I also use `broadcast_to` a lot, to save memory, but definitely have been
in a situation where in another piece of code the array is assumed to be
normal and writable (typically, that other piece was also written by me; so
much for awareness...). Fortunately, `broadcast_to` already
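The safeguard mentioned here can be seen directly: the result of `broadcast_to` is a strided view that is marked read-only, so accidental writes raise instead of silently aliasing repeated elements.

```python
import numpy as np

b = np.broadcast_to(np.arange(3), (4, 3))  # a strided view, no copy
writable = b.flags.writeable               # False: writes would raise
```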
I see the logic in having the linear space be last, but one non-negligible
advantage of the default being the first axis is that whatever is produced
broadcasts properly against start and stop.
-- Marten
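The broadcasting advantage of samples-first can be seen in a short example:

```python
import numpy as np

start = np.zeros(3)
stop = np.array([1.0, 2.0, 3.0])

# With the samples along the first axis (the default axis=0), the output
# has shape (5, 3) and broadcasts directly against start and stop:
out = np.linspace(start, stop, num=5, axis=0)
delta = out - start  # works without any reshaping
```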
On Wed, Nov 14, 2018 at 1:21 PM Lars Grüter wrote:
>
> This reminds me of a function [1] I wrote which I think has a lot of
> similarities to what Stephan describes here. It is currently part of a
> PR to rewrite numpy.pad [2].
>
> If we start to write the equivalent internally, then perhaps we
Code being better than words: see https://github.com/numpy/numpy/pull/12388
for an implementation. The change in the code proper is very small, though
it is worrying that it causes two rather unrelated tests to fail (even if
arguably both tests were wrong).
Note that this does not give
Just to add: nothing conceptually is strange for start and end to be
arrays. Indeed, the code would work for arrays as is if it didn't check the
`step == 0` case to handle denormals (arguably an even less common case
than array-like inputs...), and made a trivial change to get the new axis
to be
Hi Eric,
Thanks very much for the detailed response; it is good to be reminded that
`MaskedArray` is used in a package that, indeed, (nearly?) all of us use!
But I do think that those of us who have been trying to change MaskedArray,
are generally good at making sure the tests continue to pass,
On Sat, Nov 10, 2018 at 5:39 PM Stephan Hoyer wrote:
> On Sat, Nov 10, 2018 at 2:22 PM Hameer Abbasi
> wrote:
>
>> To summarize, I think these are our options:
>>
>> 1. Change the behavior of np.anyarray() to check for an __anyarray__()
>> protocol. Change np.matrix.__anyarray__() to return a
> More broadly, it is only necessary to reject an argument type at the
> __array_function__ level if it defines __array_function__ itself, because
> that’s the only case where it would make a difference to return
> NotImplemented rather than trying (and failing) to call the overridden
> function
Hi Hameer,
I do not think we should change `asanyarray` itself to special-case matrix;
rather, we could start converting `asarray` to `asanyarray` and solve the
problems that produces for matrices in `matrix` itself (e.g., by overriding
the relevant function with `__array_function__`).
I think
Nov 4, 2018 at 10:04 PM Charles R Harris <
> charlesr.har...@gmail.com> wrote:
>
>>
>>
>> On Sun, Nov 4, 2018 at 6:16 PM Stephan Hoyer wrote:
>>
>>> On Sun, Nov 4, 2018 at 10:32 AM Marten van Kerkwijk <
>>> m.h.vankerkw...@gmail.com> wr
Hi Mark,
Having an `out` might make sense. With present numpy, if you are really
dealing with a file or file-like object, you might consider using
`np.memmap` to access the data more directly. If it is something that looks
more like a buffer, `np.frombuffer` may be useful (that doesn't copy data,
Hi Stephan,
Another part of your reply worth considering, though slightly off topic for
the question here, of what to pass on in `types`:
On Sun, Nov 4, 2018 at 7:51 PM Stephan Hoyer wrote:
> On Sun, Nov 4, 2018 at 8:03 AM Marten van Kerkwijk <
> m.h.vankerkw...@gmail.com> w
On Sun, Nov 4, 2018 at 8:57 PM Stephan Hoyer wrote:
> On Sun, Nov 4, 2018 at 8:45 AM Marten van Kerkwijk <
> m.h.vankerkw...@gmail.com> wrote:
>
>> Does the above make sense? I realize that the same would be true for
>> `__array_ufunc__`, though there the situation
More specifically:
Should we change this? It is quite trivially done, but perhaps I am missing
>> a reason for omitting the non-override types.
>>
>
> Realistically, without these other changes in NumPy, how would this
> improve code using __array_function__? From a general purpose dispatching
>
Hi Stephan,
I fear my example about thinking about `ndarray.__array_function__`
distracted from the gist of my question, which was whether for
`__array_function__` implementations *generally* it wouldn't be handier to
have all unique types rather than just those that override
Hi Chuck,
For `__array_function__`, there was some discussion in
https://github.com/numpy/numpy/issues/12225 that for 1.16 we might want to
follow after all Nathaniel's suggestion of using an environment variable or
so to opt in (since introspection breaks on python2 with our wrapped
Hi again,
Another thought about __array_function__, this time about the
implementation for ndarray. In it, we currently check whether any of the
types define a (different) __array_function__, and, if so, give up. This
seems too strict: I think that, at least in principle, subclasses should be
Hi All,
While thinking about implementations using __array_function__, I wondered
whether the "types" argument passed on is not defined too narrowly.
Currently, it only contains the types of arguments that provide
__array_ufunc__, but wouldn't it make more sense to provide the unique
types of
The substitution principle is interesting (and, being trained as an
astronomer, not a computer scientist, I had not heard of it before). I
think matrix is indeed obviously wrong here (with indexing being more
annoying, but multiplication being a good example as well).
Perhaps more interesting as