On 31 Mar 2014 19:47, Chris Barker chris.bar...@noaa.gov wrote:
On Sat, Mar 29, 2014 at 3:08 PM, Nathaniel Smith n...@pobox.com wrote:
On 29 Mar 2014 20:57, Chris Barker chris.bar...@noaa.gov wrote:
I think this is somewhat open for discussion -- yes, it's odd, but in
the spirit
On Fri, Mar 28, 2014 at 9:30 PM, Sankarshan Mudkavi
smudk...@uwaterloo.ca wrote:
Hi Nathaniel,
1- You give as an example of naive datetime handling:
np.datetime64('2005-02-25T03:00Z')
np.datetime64('2005-02-25T03:00')
This IIUC is incorrect. The Z modifier is a timezone offset, and for
On 29 Mar 2014 20:57, Chris Barker chris.bar...@noaa.gov wrote:
I think this is somewhat open for discussion -- yes, it's odd, but in the
spirit of practicality beats purity, it seems OK. We could allow any TZ
specifier for that matter -- that's kind of how naive or local timezone
(non) handling
On 28 Mar 2014 05:00, Sankarshan Mudkavi smudk...@uwaterloo.ca wrote:
Hi all,
Apologies for the delay in following up, here is an expanded version of
the proposal, which hopefully clears up most of the details. I have not
included specific implementation details for the code, such as which
On Fri, Mar 28, 2014 at 4:58 PM, Robert Kern robert.k...@gmail.com wrote:
On Fri, Mar 28, 2014 at 2:54 PM, Sturla Molden sturla.mol...@gmail.com
wrote:
Matthew Brett matthew.br...@gmail.com wrote:
I see it should be possible to build a full blas and partial lapack
library with eigen [1]
On Fri, Mar 28, 2014 at 8:01 PM, Sturla Molden sturla.mol...@gmail.com wrote:
Matthew Brett matthew.br...@gmail.com wrote:
So - is Eigen our best option for optimized blas / lapack binaries on
64 bit Windows?
Maybe not:
http://gcdart.blogspot.de/2013/06/fast-matrix-multiply-and-ml.html
On 28 Mar 2014 20:26, Robert Kern robert.k...@gmail.com wrote:
It's only a problem in that the binary will not be BSD, and we do need to
communicate that appropriately. It will contain a significant component
that is MPL2 licensed. The terms that force us to include the link to the
Eigen source
, 2014 7:34 PM, Nathaniel Smith n...@pobox.com wrote:
On 28 Mar 2014 20:26, Robert Kern robert.k...@gmail.com wrote:
It's only a problem in that the binary will not be BSD, and we do need
to communicate that appropriately. It will contain a significant component
that is MPL2 licensed. The terms
I thought OpenBLAS is usually used with reference lapack?
On 28 Mar 2014 22:16, Matthew Brett matthew.br...@gmail.com wrote:
Hi,
On Fri, Mar 28, 2014 at 1:28 PM, Sturla Molden sturla.mol...@gmail.com
wrote:
Nathaniel Smith n...@pobox.com wrote:
If the only problem with eigen turns out
On Wed, Mar 26, 2014 at 7:34 PM, Julian Taylor
jtaylor.deb...@googlemail.com wrote:
as for using openblas by default in binary builds, no.
pthread openblas build is now fork safe which is great but it is still
not reliable enough for a default.
E.g. the current latest release 0.2.8 still has
On Sat, Mar 22, 2014 at 6:13 PM, Nathaniel Smith n...@pobox.com wrote:
After 88 emails we don't have a conclusion in the other thread (see
[1] for background). But we have to come to some conclusion or another
if we want @ to exist :-). So I'll summarize where the discussion
stands and let's
On Mon, Mar 24, 2014 at 11:58 PM, Charles R Harris
charlesr.har...@gmail.com wrote:
On Mon, Mar 24, 2014 at 5:56 PM, Nathaniel Smith n...@pobox.com wrote:
On Sat, Mar 22, 2014 at 6:13 PM, Nathaniel Smith n...@pobox.com wrote:
After 88 emails we don't have a conclusion in the other thread (see
Hi all,
After 88 emails we don't have a conclusion in the other thread (see
[1] for background). But we have to come to some conclusion or another
if we want @ to exist :-). So I'll summarize where the discussion
stands and let's see if we can find some way to resolve this.
The fundamental
On Sat, Mar 22, 2014 at 7:59 PM, Robert Kern robert.k...@gmail.com wrote:
On Sat, Mar 22, 2014 at 6:13 PM, Nathaniel Smith n...@pobox.com wrote:
Hi all,
After 88 emails we don't have a conclusion in the other thread (see
[1] for background). But we have to come to some conclusion or another
On Thu, Mar 20, 2014 at 11:27 PM, Chris Barker chris.bar...@noaa.gov wrote:
* I think there are more or less three options:
1) a) don't have any timezone handling at all -- all datetime64s are UTC.
Always
b) don't have any timezone handling at all -- all datetime64s are
naive
On 20 Mar 2014 02:07, Sankarshan Mudkavi smudk...@uwaterloo.ca wrote:
I've written a rather rudimentary NEP (lacking in technical details
which I will hopefully add after some further discussion and receiving
clarification/help on this thread).
Please let me know how to proceed and what you
On Thu, Mar 20, 2014 at 9:07 AM, Robert Kern robert.k...@gmail.com wrote:
I think the operator-overload-as-DSL use cases actually argue somewhat
for right-associativity. There is no lack of left-associative
operators for these use cases to choose from since they usually don't
have numeric or
On Wed, Mar 19, 2014 at 7:45 PM, Nathaniel Smith n...@pobox.com wrote:
Okay, I wrote a little script [1] to scan Python source files look for
things like 'dot(a, dot(b, c))' or 'dot(dot(a, b), c)', or the ndarray.dot
method equivalents. So what we get out is:
- a count of how many 'dot' calls
On Thu, Mar 20, 2014 at 1:36 PM, Dag Sverre Seljebotn
d.s.seljeb...@astro.uio.no wrote:
On 03/20/2014 02:26 PM, Dag Sverre Seljebotn wrote:
Order-of-matrix-multiplication is literally my textbook example of a
dynamic programming problem with complexity O(n^2) where n is number of
terms (as in,
On Tue, Mar 18, 2014 at 9:14 AM, Robert Kern robert.k...@gmail.com wrote:
On Tue, Mar 18, 2014 at 12:54 AM, Nathaniel Smith n...@pobox.com wrote:
On Sat, Mar 15, 2014 at 6:28 PM, Nathaniel Smith n...@pobox.com wrote:
Mathematica: instead of having an associativity, a @ b @ c gets
converted
On Sat, Mar 15, 2014 at 3:41 AM, Nathaniel Smith n...@pobox.com wrote:
I think we need to
know something about how often the Mat @ Mat @ vec type cases arise in
practice. How often do non-scalar * and np.dot show up in the same
expression? How often does it look like a * np.dot(b, c), and how
On Tue, Mar 18, 2014 at 9:50 AM, Eelco Hoogendoorn
hoogendoorn.ee...@gmail.com wrote:
To elaborate a little on such a more general and explicit method of
specifying linear operations (perhaps 'expressions with named axes' is a
good nomer to cover this topic).
[...]
This is a good topic to
On Tue, Mar 18, 2014 at 3:22 PM, Christophe Bal projet...@gmail.com wrote:
About weak-left. You need to define a priority for @, the matrix product,
relative to *, the elementwise product, because (A*B)@C != A*(B@C)
This doesn't follow. (a / b) * c != a / (b * c), but / and * in
Python have the same
On Tue, Mar 18, 2014 at 5:26 PM, Jay Bourque jay.bour...@continuum.io wrote:
I was just about to submit some pull requests for fixes to the
_gufuncs_linalg module and discovered that it no longer exists. It looks
like it was removed in this commit. Is there any reason why it was removed
On 18 Mar 2014 17:32, Christophe Bal projet...@gmail.com wrote:
This is a different situation because / is indeed a hidden
multiplication: a/b = a*inv(b). The same is true for + and -:
a-b = a+opp(b). What I'm saying is that these operations * and / are indeed
of the very same kind.
This is
On Sat, Mar 15, 2014 at 4:32 AM, Nathaniel Smith n...@pobox.com wrote:
For this discussion let's assume @ can be taken for granted, and that
we can freely choose to either add @@ or not add @@ to the language.
The question is: which do we think makes Python a better language (for
us
On Sat, Mar 15, 2014 at 7:01 PM, Alexander Belopolsky ndar...@mac.com wrote:
On Sat, Mar 15, 2014 at 2:25 PM, Alexander Belopolsky ndar...@mac.com
wrote:
On Fri, Mar 14, 2014 at 11:41 PM, Nathaniel Smith n...@pobox.com wrote:
Here's the main blocker for adding a matrix multiply operator
On Mon, Mar 17, 2014 at 4:09 PM, Alexander Belopolsky ndar...@mac.com wrote:
On Mon, Mar 17, 2014 at 11:48 AM, Nathaniel Smith n...@pobox.com wrote:
One more question that I think should be answered by the PEP and may
influence the associativity decision is what happens if in an A @ B @ C
On Mon, Mar 17, 2014 at 9:38 PM, Christophe Bal projet...@gmail.com wrote:
Here is the translation. ;-)
Hello,
and what about something like that ?
a @ b @ c -> (a @ b) @ c
a * b @ c -> (a * b) @ c
a @ b * c -> a @ (b * c)
Easy to remember: the *-product has priority over the
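The precedence question being argued over can be checked concretely with the rule that was eventually adopted in PEP 465: @ sits at the same precedence level as * and is left-associative ("same-left"). A minimal sketch:

```python
import numpy as np

a = np.eye(2)
b = 2 * np.eye(2)
c = np.array([[1.0, 2.0], [3.0, 4.0]])

# With same-left precedence, explicit left grouping and the bare
# chained expression agree:
left = (a @ b) @ c
chained = a @ b @ c
print(np.allclose(left, chained))  # True
```

Note this is the opposite of the "*-product has priority" proposal above: in the adopted rule, a * b @ c parses as (a * b) @ c.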
On Mon, Mar 17, 2014 at 11:16 PM, Bago mrb...@gmail.com wrote:
Speaking of `@@`, would the relative precedence of @ vs * be the same as @@
vs **?
This is one of the concerns that made Guido leery of @@ (but only one
of them). Since we seem to be dropping @@:
On Mon, Mar 17, 2014 at 10:33 PM, Christophe Bal projet...@gmail.com wrote:
I think that weak-left is a little strange, just think a little of the
operators used by mathematicians that always follow a hierarchy.
Not sure what you mean -- I don't think most mathematicians think that
scalar and
On Tue, Mar 18, 2014 at 12:16 AM, Christophe Bal projet...@gmail.com wrote:
I think that weak-left is a little strange, just think
a little of the operators used by mathematicians that
always follow a hierarchy.
Not sure what you mean -- I don't think most mathematicians
think that scalar
On Sat, Mar 15, 2014 at 6:28 PM, Nathaniel Smith n...@pobox.com wrote:
Mathematica: instead of having an associativity, a @ b @ c gets
converted into mdot([a, b, c])
So, I've been thinking about this (thanks to @rfateman for pointing it
out), and wondering if Mathematica's approach is worth
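The Mathematica-style mdot([a, b, c]) idea later landed in NumPy as np.linalg.multi_dot, which runs the chain-ordering dynamic program and then multiplies. A sketch (array shapes are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 100))
B = rng.standard_normal((100, 5))
C = rng.standard_normal((5, 50))

# multi_dot chooses the cheapest parenthesization of the chain
# before multiplying; the result matches any fixed grouping:
fast = np.linalg.multi_dot([A, B, C])
naive = A.dot(B).dot(C)
print(np.allclose(fast, naive))  # True
```

Here the optimizer would pick (A @ B) @ C, since collapsing the 100-wide inner dimension first is far cheaper.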
On Mon, Mar 17, 2014 at 8:37 PM, Russell E. Owen ro...@uw.edu wrote:
After seeing all the traffic on this thread, I am in favor of
same-left because it is easiest to remember:
- It introduces no new rules.
- It is unambiguous. If we pick option 2 or 3 we have no strong reason
to favor one
On Sun, Mar 16, 2014 at 2:39 PM, Eelco Hoogendoorn
hoogendoorn.ee...@gmail.com wrote:
Note that I am not opposed to extra operators in python, and only mildly
opposed to a matrix multiplication operator in numpy; but let me lay out the
case against, for your consideration.
First of all, the
On Sun, Mar 16, 2014 at 4:33 PM, Eelco Hoogendoorn
hoogendoorn.ee...@gmail.com wrote:
Different people work on different code and have different experiences
here -- yours may or may not be typical. Pauli did some quick checks
on scikit-learn, nipy, and scipy, and found that in their test suites,
On Sun, Mar 16, 2014 at 4:37 PM, Colin J. Williams
cjwilliam...@gmail.com wrote:
I would like to see the case made for @. Yes, I know that Guido has
accepted the idea, but he has changed his mind before.
I'm not sure how to usefully respond to this, since, I already wrote a
~20 page document
On Sat, Mar 15, 2014 at 3:41 AM, Nathaniel Smith n...@pobox.com wrote:
Hi all,
Here's the main blocker for adding a matrix multiply operator '@' to Python:
we need to decide what we think its precedence and associativity should be.
Another data point that might be useful:
Matlab: same-left
Hi Chris,
On Sat, Mar 15, 2014 at 4:15 AM, Chris Laumann chris.laum...@gmail.com wrote:
Hi all,
Let me preface my two cents by saying that I think the best part of @ being
accepted is the potential for deprecating the matrix class — the syntactic
beauty of infix for matrix multiply is a nice
On Sat, Mar 15, 2014 at 6:33 PM, Joe Kington joferking...@gmail.com wrote:
On Sat, Mar 15, 2014 at 1:28 PM, Nathaniel Smith n...@pobox.com wrote:
On Sat, Mar 15, 2014 at 3:41 AM, Nathaniel Smith n...@pobox.com wrote:
Hi all,
Here's the main blocker for adding a matrix multiply operator
On 15 Mar 2014 19:02, Charles R Harris charlesr.har...@gmail.com wrote:
Just to throw something new into the mix
u@v@w = u@(v@w) -- u@v is a dyadic matrix
u@v -- is a scalar
It would be nice if u@v@None, or some such, would evaluate as a dyad. Or
else we will still need the concept of row
On Sat, Mar 15, 2014 at 1:13 PM, Alan G Isaac alan.is...@gmail.com wrote:
On 3/15/2014 12:32 AM, Nathaniel Smith wrote:
I know you were worried
about losing the .I attribute on matrices if switching to ndarrays for
teaching -- given that ndarray will probably not get a .I attribute,
how
Well, that was fast. Guido says he'll accept the addition of '@' as an
infix operator for matrix multiplication, once some details are ironed
out:
https://mail.python.org/pipermail/python-ideas/2014-March/027109.html
http://legacy.python.org/dev/peps/pep-0465/
Specifically, we need to figure
Hi all,
Here's the main blocker for adding a matrix multiply operator '@' to
Python: we need to decide what we think its precedence and associativity
should be. I'll explain what that means so we're on the same page, and what
the choices are, and then we can all argue about it. But even better
On Sat, Mar 15, 2014 at 3:18 AM, Chris Laumann chris.laum...@gmail.com wrote:
That’s great.
Does this mean that, in the not-so-distant future, the matrix class will go
the way of the dodos? I have had more subtle-to-fix bugs sneak into code b/c
something returns a matrix instead of an
Hi all,
Here's the second thread for discussion about Guido's concerns about
PEP 465. The issue here is that PEP 465 as currently written proposes
two new operators, @ for matrix multiplication and @@ for matrix power
(analogous to * and **):
http://legacy.python.org/dev/peps/pep-0465/
The
Hi all,
The proposal to add an infix operator to Python for matrix
multiplication is nearly ready for its debut on python-ideas; so if
you want to look it over first, just want to check out where it's
gone, then now's a good time:
https://github.com/numpy/numpy/pull/4351
The basic idea here is
On Thu, Mar 13, 2014 at 1:03 AM, Alan G Isaac alan.is...@gmail.com wrote:
On 3/12/2014 6:04 PM, Nathaniel Smith wrote:
https://github.com/numpy/numpy/pull/4351
The Semantics section still begins with 0d, then 2d, then 1d, then nd.
Given the context of the proposal, the order should
On 11 Mar 2014 13:28, Paul Brossier p...@piem.org wrote:
If I understand correctly, the current version is the one installed on
the user system. So using NPY_API_VERSION would mean this code should
work with any version of numpy. I guess this is what I want (I would
even expect this to be the
On 11 Mar 2014 14:25, Paul Brossier p...@piem.org wrote:
On 11/03/2014 10:49, Nathaniel Smith wrote:
On 11 Mar 2014 13:28, Paul Brossier p...@piem.org
mailto:p...@piem.org wrote:
If I understand correctly, the current version is the one installed on
the user system. So using
On Thu, Mar 6, 2014 at 5:17 AM, Sturla Molden sturla.mol...@gmail.com wrote:
Nathaniel Smith n...@pobox.com wrote:
3. Using Cython in the numpy core
The numpy core contains tons of complicated C code implementing
elaborate operations like indexing, casting, ufunc dispatch, etc. It
would
On Thu, Mar 6, 2014 at 9:11 AM, David Cournapeau courn...@gmail.com wrote:
On Wed, Mar 5, 2014 at 9:11 PM, Nathaniel Smith n...@pobox.com wrote:
So this project would have the following goals, depending on how
practical this turns out to be: (1) produce a hacky proof-of-concept
system
On Wed, Mar 5, 2014 at 4:45 PM, Sebastian Berg
sebast...@sipsolutions.net wrote:
Hi all,
in Pull Request https://github.com/numpy/numpy/pull/3864 Noel Dawe
suggested adding new parameters to our `cov` and `corrcoef` functions to
implement weights, which already exists for `average` (the PR
On Mon, Mar 3, 2014 at 7:20 PM, Julian Taylor
jtaylor.deb...@googlemail.com wrote:
hi,
as the numpy gsoc topic page is a little short on options I was thinking
about adding two topics for interested students. But as I have no
experience with gsoc or mentoring and the ideas are not very
On Sat, Feb 22, 2014 at 7:09 PM, Pauli Virtanen p...@iki.fi wrote:
23.02.2014 00:03, Nathaniel Smith wrote:
Currently numpy's 'dot' acts a bit weird for ndim > 2 or ndim < 1. In
practice this doesn't usually matter much, because these are very
rarely used. But, I would like to nail down
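The weirdness for higher-dimensional inputs can be seen by comparing np.dot with the gufunc-style matmul that was later added (a sketch; shapes are illustrative):

```python
import numpy as np

a = np.ones((3, 2, 4))
b = np.ones((3, 4, 5))

# np.dot on >2d inputs combines ALL leading axes of both operands,
# an outer-product-like rule that rarely matches expectations:
print(np.dot(a, b).shape)     # (3, 2, 3, 5)

# The gufunc-style matmul instead broadcasts the leading axes as a
# stack of 2d matrix products:
print(np.matmul(a, b).shape)  # (3, 2, 5)
```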
On Sat, Feb 22, 2014 at 3:55 PM, Sturla Molden sturla.mol...@gmail.com wrote:
On 20/02/14 17:57, Jurgen Van Gael wrote:
Hi All,
I run Mac OS X 10.9.1 and was trying to get OpenBLAS working for numpy.
I've downloaded the OpenBLAS source and compiled it (thanks to Olivier
Grisel).
How?
$
[Apologies for wide distribution -- please direct followups to either
the github PR linked below, or else numpy-discussion@scipy.org]
After the numpy-discussion thread about np.matrix a week or so back, I
got curious and read the old PEPs that attempted to add better
matrix/elementwise operators
Hi all,
Currently numpy's 'dot' acts a bit weird for ndim > 2 or ndim < 1. In
practice this doesn't usually matter much, because these are very
rarely used. But, I would like to nail down the behaviour so we can
say something precise in the matrix multiplication PEP. So here's one
proposal.
# CURRENT:
On Sat, Feb 22, 2014 at 5:17 PM, Matthew Brett matthew.br...@gmail.com wrote:
The discussion might become confusing in the conflation of:
* backward incompatible changes to dot
* coherent behavior to propose in a PEP
Right, I definitely am asking about how we think the ideal dot
operator
If you send a patch that deprecates dot's current behaviour for ndim > 2,
we'll probably merge it. (We'd like it to function like you suggest, for
consistency with other gufuncs. But to get there we have to deprecate the
current behaviour first.)
While I'm wishing for things I'll also mention that dot([a, b, c]) still seems
like it might be a good idea...? I have mixed feelings about it -- one
less item cluttering up the namespace, but it is weird and magical to
have two totally different calling conventions for the same function.
-n
On Thu, Feb 20, 2014 at 4:02 PM, Nathaniel Smith n...@pobox.com
Hey all,
Just a heads up: thanks to the tireless work of Olivier Grisel, the
OpenBLAS development branch is now fork-safe when built with its default
threading support. (It is still not thread-safe when built using OMP for
threading and gcc, but this is not the default.)
Gory details:
Perhaps integer power should raise an error on negative powers? That way
people will at least be directed to use arr ** -1.0 instead of silently
getting nonsense from arr ** -1.
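This suggestion was in fact what later NumPy versions adopted (an assumption worth checking against your version): integer arrays raised to negative integer powers raise ValueError, while the float spelling gives the intended reciprocals. A sketch:

```python
import numpy as np

arr = np.array([1, 2, 4])  # integer dtype

# The float spelling gives the intended reciprocals:
print(arr ** -1.0)  # [1.   0.5  0.25]

# Recent NumPy raises on the integer spelling rather than silently
# returning nonsense:
try:
    arr ** -1
    raised = False
except ValueError:
    raised = True
print(raised)
```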
On 18 Feb 2014 06:57, Robert Kern robert.k...@gmail.com wrote:
On Tue, Feb 18, 2014 at 11:44 AM, Sturla Molden
On 18 Feb 2014 07:07, Robert Kern robert.k...@gmail.com wrote:
On Tue, Feb 18, 2014 at 12:00 PM, Nathaniel Smith n...@pobox.com wrote:
Perhaps integer power should raise an error on negative powers? That way
people will at least be directed to use arr ** -1.0 instead of silently
getting
On 18 Feb 2014 10:21, Julian Taylor jtaylor.deb...@googlemail.com wrote:
On Mon, Feb 17, 2014 at 9:42 PM, Nathaniel Smith n...@pobox.com wrote:
On 17 Feb 2014 15:17, Sturla Molden sturla.mol...@gmail.com wrote:
Julian Taylor jtaylor.deb...@googlemail.com wrote:
When an array
On 18 Feb 2014 11:05, Charles R Harris charlesr.har...@gmail.com wrote:
Hi All,
There is an old ticket, #1499, that suggest adding a segment_axis
function.
def segment_axis(a, length, overlap=0, axis=None, end='cut', endvalue=0):
Generate a new array that chops the given array along the
On 18 Feb 2014 12:04, Charles R Harris charlesr.har...@gmail.com wrote:
Where does 'shingle' come from. I can see the analogy but haven't seen
that as a technical term.
It just seems like a good name :-).
-n
___
NumPy-Discussion mailing list
So to be clear - what's being suggested is that code like this will be
deprecated in 1.9, and then in some future release break:
slices = []
for i in ...:
slices.append(make_slice(...))
subarray = arr[slices]
Instead, you will have to do:
subarray = arr[tuple(slices)]
And the reason is
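The change above, sketched with explicit slice objects standing in for the hypothetical make_slice(...) calls:

```python
import numpy as np

arr = np.arange(24).reshape(2, 3, 4)

# Index expressions built up dynamically in a list...
slices = [slice(0, 1), slice(1, 3)]

# ...must be converted to a tuple before indexing; a bare list is
# reserved for fancy (integer-array) indexing:
subarray = arr[tuple(slices)]
print(subarray.shape)  # (1, 2, 4)
```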
On 17 Feb 2014 15:17, Sturla Molden sturla.mol...@gmail.com wrote:
Julian Taylor jtaylor.deb...@googlemail.com wrote:
When an array is created it tries to get its memory from the cache and
when its deallocated it returns it to the cache.
Good idea, however there is already a C function
On Mon, Feb 17, 2014 at 3:55 PM, Stefan Seefeld ste...@seefeld.name wrote:
On 02/17/2014 03:42 PM, Nathaniel Smith wrote:
Another optimization we should consider that might help a lot in the
same situations where this would help: for code called from the
cpython eval loop, it's afaict possible
On Sun, Feb 9, 2014 at 4:59 PM, alex argri...@ncsu.edu wrote:
Hello list,
I wrote this mini-nep for numpy but I've been advised it is more
appropriate for discussion on the list.
The ``numpy.matrix`` API provides a low barrier to using Python
for linear algebra, just as the pre-3 Python
On Mon, Feb 10, 2014 at 11:16 AM, Alexander Belopolsky ndar...@mac.com wrote:
On Sun, Feb 9, 2014 at 4:59 PM, alex argri...@ncsu.edu wrote:
On the other hand, it really needs to be deprecated.
While numpy.matrix may have its problems, a NEP should list a better
rationale than the above to
On Mon, Feb 10, 2014 at 12:02 PM, Matthieu Brucher
matthieu.bruc...@gmail.com wrote:
Yes, but these will be scipy.sparse matrices, nothing to do with numpy
(dense) matrices.
Unfortunately when scipy.sparse matrices interact with dense ndarrays
(e.g., sparse matrix * dense vector), then you
On Fri, Jan 31, 2014 at 3:14 PM, Chris Laumann chris.laum...@gmail.com wrote:
Current scipy superpack for osx so probably pretty close to master.
What does numpy.__version__ say?
-n
On Fri, Jan 31, 2014 at 4:29 PM, Benjamin Root ben.r...@ou.edu wrote:
Just to chime in here about the SciPy Superpack... this distribution tracks
the master branch of many projects, and then puts out releases, on the
assumption that master contains pristine code, I guess. I have gone down
On Wed, Jan 29, 2014 at 7:39 PM, Joseph McGlinchy jmcglin...@esri.com wrote:
Upon further investigation, I do believe it is within the scipy code where
there is a leak. I commented out my call to processBinaryImage(), which is
all scipy code calls, and my memory usage remains flat with
There is no reliable way to predict how much memory an arbitrary numpy
operation will need, no. However, in most cases the main memory cost will
be simply the need to store the input and output arrays; for large arrays,
all other allocations should be negligible.
The most effective way to avoid
On 24 Jan 2014 15:57, Chris Barker - NOAA Federal chris.bar...@noaa.gov
wrote:
c = a + b: 3N
c = a + 2*b: 4N
Does python garbage collect mid-expression? I.e. :
C = (a + 2*b) + b
4 or 5 N?
It should be collected as soon as the reference gets dropped, so 4N. (This
is the advantage of a
Yes.
On 24 Jan 2014 17:19, Dinesh Vadhia dineshbvad...@hotmail.com wrote:
So, with the example case, the approximate memory cost for an in-place
operation would be:
A *= B : 2N
But, if the original A or B is to remain unchanged then it will be:
C = A * B : 3N ?
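The 2N vs 3N accounting above can be made concrete (a sketch; N is arbitrary):

```python
import numpy as np

N = 1_000_000
A = np.ones(N)
B = np.ones(N)

# In-place: A *= B writes into A's existing buffer, so only the two
# operands (about 2N elements) are alive.
A *= B

# Out-of-place: C = A * B allocates a fresh result array, so about
# 3N elements are alive at once.
C = A * B
print((A.nbytes + B.nbytes + C.nbytes) // A.itemsize)  # 3000000
```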
On Fri, Jan 24, 2014 at 10:29 PM, Chris Barker chris.bar...@noaa.gov wrote:
On Fri, Jan 24, 2014 at 8:25 AM, Nathaniel Smith n...@pobox.com wrote:
If your arrays are big enough that you're worried that making a stray copy
will ENOMEM, then you *shouldn't* have to worry about fragmentation
On 25 Jan 2014 00:05, Sebastian Berg sebast...@sipsolutions.net wrote:
Hi all,
in https://github.com/numpy/numpy/pull/3514 I proposed some changes to
the comparison operators. This includes:
1. Comparison with None will broadcast in the future, so that `arr ==
None` will actually compare
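The proposed broadcasting behaviour (which later NumPy versions adopted; check against your version) looks like this:

```python
import numpy as np

arr = np.array([1, 2, 3])

# `arr == None` broadcasts like any other comparison instead of
# returning a single False for the whole array:
print(arr == None)  # [False False False]
```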
Hey all,
We have a PR languishing that fixes np.irr to handle negative rate-of-returns:
https://github.com/numpy/numpy/pull/4210
I don't even know what IRR stands for, and it seems rather confusing
from the discussion there. Anyone who knows something about the issues
is invited to speak up...
On 21 Jan 2014 11:13, Oscar Benjamin oscar.j.benja...@gmail.com wrote:
If the Numpy array would manage the buffers itself then that per string
memory
overhead would be eliminated in exchange for an 8 byte pointer and at
least 1
byte to represent the length of the string (assuming you can
On 21 Jan 2014 17:28, David Goldsmith d.l.goldsm...@gmail.com wrote:
Am I the only one who feels that this (very important--I'm being sincere,
not sarcastic) thread has matured and specialized enough to warrant its
own home on the Wiki?
Sounds plausible, perhaps you could write up such a
On Mon, Jan 20, 2014 at 10:28 PM, Charles R Harris
charlesr.har...@gmail.com wrote:
On Mon, Jan 20, 2014 at 2:27 PM, Oscar Benjamin oscar.j.benja...@gmail.com
wrote:
On Jan 20, 2014 8:35 PM, Charles R Harris charlesr.har...@gmail.com
wrote:
I think we may want something like PEP 393.
On Fri, Jan 10, 2014 at 9:18 AM, Julian Taylor
jtaylor.deb...@googlemail.com wrote:
On Fri, Jan 10, 2014 at 3:48 AM, Nathaniel Smith n...@pobox.com wrote:
Also, none of the Py* interfaces implement calloc(), which is annoying
because it messes up our new optimization of using calloc
On Thu, Jan 9, 2014 at 3:30 PM, Julian Taylor
jtaylor.deb...@googlemail.com wrote:
On Thu, Jan 9, 2014 at 3:50 PM, Frédéric Bastien no...@nouiz.org wrote:
How hard would it be to provide the choise to the user? We could
provide 2 functions like: fma_fast() fma_prec() (for precision)? Or
this
On Thu, Jan 9, 2014 at 11:21 PM, Charles R Harris
charlesr.har...@gmail.com wrote:
Apropos Julian's changes to use the PyObject_* allocation suite for some
parts of numpy, I posted the following
I think numpy memory management is due a cleanup. Currently we have
PyDataMem_*
PyDimMem_*
On Wed, Jan 8, 2014 at 12:13 PM, Julian Taylor
jtaylor.deb...@googlemail.com wrote:
On 18.07.2013 15:36, Nathaniel Smith wrote:
On Wed, Jul 17, 2013 at 5:57 PM, Frédéric Bastien no...@nouiz.org wrote:
On the usefulness of doing only 1 memory allocation, on our old gpu ndarray,
we where doing 2
On 1 Jan 2014 13:57, Bart Baker bart...@gmail.com wrote:
Hello,
I'm having issues with performing operations on an array in C and
passing it back to Python. The array values seem to become uninitialized
upon being passed back to Python. My first attempt involved initializing
the array in C as
On Thu, Dec 5, 2013 at 7:33 PM, josef.p...@gmail.com wrote:
On Thu, Dec 5, 2013 at 5:37 PM, Sebastian Berg
sebast...@sipsolutions.net wrote:
Hey,
there was a discussion that for numpy booleans math operators +,-,* (and
the unary -), while defined, are not very helpful. I have set up a quick
On Fri, Dec 6, 2013 at 11:55 AM, Alexander Belopolsky ndar...@mac.com wrote:
On Fri, Dec 6, 2013 at 1:46 PM, Alan G Isaac alan.is...@gmail.com wrote:
On 12/6/2013 1:35 PM, josef.p...@gmail.com wrote:
unary versus binary minus
Oh right; I consider binary `-` broken for
Boolean arrays.
, Nathaniel Smith n...@pobox.com wrote:
On Fri, Dec 6, 2013 at 11:55 AM, Alexander Belopolsky ndar...@mac.com
wrote:
On Fri, Dec 6, 2013 at 1:46 PM, Alan G Isaac alan.is...@gmail.com
wrote:
On 12/6/2013 1:35 PM, josef.p...@gmail.com wrote:
unary versus binary minus
Oh right; I
I think that would be great. Technically what you'd want is a gufunc.
-n
On Mon, Dec 2, 2013 at 9:44 AM, Daniele Nicolodi dani...@grinta.net wrote:
Hello,
there would be interest in adding a floating point accurate summation
function like Python's math.fsum() in the form of an ufunc to
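What math.fsum buys over an ordinary float sum, in one line (a sketch of the motivation, not of any proposed ufunc API):

```python
import math
import numpy as np

xs = np.array([1e16, 1.0, -1e16])

# A plain float sum loses the 1.0 against the huge cancelling
# terms; fsum tracks exact partial sums and recovers it.
print(np.sum(xs))     # 0.0
print(math.fsum(xs))  # 1.0
```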
On Mon, Dec 2, 2013 at 11:35 AM, Neal Becker ndbeck...@gmail.com wrote:
I don't think that behavior is acceptable.
That's... too bad? I'm not sure what your objection actually is.
It's an intentional change (though disabled by default in 1.8), and a
necessary step to rationalizing our
On Mon, Dec 2, 2013 at 3:15 PM, Jim Bosch tallji...@gmail.com wrote:
If your arrays are contiguous, you don't really need the strides (use the
itemsize instead). How is ndarray broken by this?
ndarray is broken by this change because it expects the stride to be a
multiple of the itemsize (I
On Tue, Nov 26, 2013 at 11:54 AM, Peter Rennert p.renn...@cs.ucl.ac.uk wrote:
Hi,
As the title says, I am looking for a way to set, in Python, the base of
an ndarray to an object.
Use case is porting qimage2ndarray to PySide where I want to do
something like:
In [1]: from PySide import
On Tue, Nov 26, 2013 at 1:37 PM, Peter Rennert p.renn...@cs.ucl.ac.uk wrote:
I probably did something wrong, but it does not work how I tried it. I
am not sure if you meant it like this, but I tried to subclass from
ndarray first, but then I do not have access to __array_interface__. Is
this
On Tue, Nov 26, 2013 at 2:55 PM, Peter Rennert p.renn...@cs.ucl.ac.uk wrote:
Btw, I just wanted to file a bug at PySide, but it might be alright at
their end, because I can do this:
from PySide import QtGui
image = QtGui.QImage('/home/peter/code/pyTools/sandbox/images/faceDemo.jpg')
a =
On Sun, Nov 24, 2013 at 5:32 PM, Yaroslav Halchenko
li...@onerussian.com wrote:
On Tue, 15 Oct 2013, Nathaniel Smith wrote:
What do you have to lose?
btw -- fresh results are here http://yarikoptic.github.io/numpy-vbench/ .
I have tuned benchmarking so it now reflects the best performance