Thanks Eric.
Also relevant: https://github.com/numba/numba/issues/909
Looks like Numba has found a way to avoid this edge case.
On Monday, April 4, 2016, Eric Firing <efir...@hawaii.edu> wrote:
> On 2016/04/04 9:23 AM, T J wrote:
>
>> I'm on NumPy 1.10.4 (mkl).
>>
I'm on NumPy 1.10.4 (mkl).
>>> np.uint(3) // 2 # 1.0
>>> 3 // 2 # 1
Is this behavior expected? It's certainly not desired from my perspective.
If this is not a bug, could someone explain the rationale to me.
Thanks.
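A minimal sketch of the promotion at issue (the exact mixed-operand result depends on NumPy version, since scalar promotion rules changed later under NEP 50):

```python
import numpy as np

# The surprising case from this thread: on NumPy 1.10, np.uint(3) // 2
# promoted to float64 (giving 1.0), because no integer dtype can safely
# hold both uint64 and an arbitrary signed Python int. Later NumPy
# versions changed scalar promotion, so this printed result may differ.
mixed = np.uint64(3) // 2
print(repr(mixed))

# Workaround sketch: keep both operands unsigned so the result stays integral.
same = np.uint64(3) // np.uint64(2)
assert same == 1 and same.dtype == np.uint64
```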
___
NumPy-Discussion mailing
It does, but it is not portable. That's why I was hoping NumPy might think
about supporting more rounding algorithms.
On Thu, Oct 2, 2014 at 10:00 PM, John Zwinck jzwi...@gmail.com wrote:
On 3 Oct 2014 07:09, T J tjhn...@gmail.com wrote:
Any bites on this?
On Wed, Sep 24, 2014 at 12:23
Hi, I'm using NumPy 1.8.2:
In [1]: np.array(0) / np.array(0)
Out[1]: 0
In [2]: np.array(0) / np.array(0.0)
Out[2]: nan
In [3]: np.array(0.0) / np.array(0)
Out[3]: nan
In [4]: np.array(0.0) / np.array(0.0)
Out[4]: nan
In [5]: 0/0
Any bites on this?
On Wed, Sep 24, 2014 at 12:23 PM, T J tjhn...@gmail.com wrote:
Is there a ufunc for rounding away from zero? Or do I need to do
x2 = sign(x) * ceil(abs(x))
whenever I want to round away from zero? Maybe the following is better?
x_ceil = ceil(x)
x_floor
Is there a ufunc for rounding away from zero? Or do I need to do
x2 = sign(x) * ceil(abs(x))
whenever I want to round away from zero? Maybe the following is better?
x_ceil = ceil(x)
x_floor = floor(x)
x2 = where(x >= 0, x_ceil, x_floor)
Python's round function goes away from
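The options above can be folded into one vectorized helper; a sketch using copysign, which is equivalent to sign(x) * ceil(abs(x)) but also preserves the sign of -0.0:

```python
import numpy as np

# Round away from zero: the magnitude rounds up via ceil(|x|), then
# copysign transfers x's original sign back onto the result.
def round_away(x):
    x = np.asarray(x, dtype=float)
    return np.copysign(np.ceil(np.abs(x)), x)

out = round_away([1.2, -1.2, 0.5, -0.5, 2.0])
assert np.array_equal(out, [2.0, -2.0, 1.0, -1.0, 2.0])
```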
What is the status of:
https://github.com/numpy/numpy/blob/master/doc/neps/missing-data.rst
and of missing data in Numpy, more generally?
Is np.ma.array still the state-of-the-art way to handle missing data? Or
has something better and more comprehensive been put together?
Prior to 1.7, I had working compatibility code such as the following:
if has_good_functions:
    # http://projects.scipy.org/numpy/ticket/1096
    from numpy import logaddexp, logaddexp2
else:
    logaddexp = vectorize(_logaddexp, otypes=[numpy.float64])
    logaddexp2 = vectorize(_logaddexp2, otypes=[numpy.float64])
On Tue, Mar 12, 2013 at 9:59 AM, Bradley M. Froehle
brad.froe...@gmail.comwrote:
T J:
You may want to look into `numpy.frompyfunc` (
http://docs.scipy.org/doc/numpy/reference/generated/numpy.frompyfunc.html
).
Yeah that's better, but it doesn't respect the output type of the function
On Fri, Oct 12, 2012 at 1:04 PM, Sturla Molden stu...@molden.no wrote:
I'm still rather sure GIS functionality belongs in scipy.spatial instead
of numpy.
From the link:
FocalMax
Finds the highest value for each cell location on an input grid within
a specified neighborhood and sends it to
On Sat, Jun 30, 2012 at 1:26 PM, josef.p...@gmail.com wrote:
just some statistics
http://stackoverflow.com/questions/tagged/numpy
769 followers, 2,850 questions tagged
a guess: average response time for regular usage question far less than an
hour
On Sat, Jun 30, 2012 at 1:50 PM, srean srean.l...@gmail.com wrote:
Anecdotal data-point:
I have been happy with SO in general. It works for certain types of
queries very well. OTOH if the answer to the question is known only to
a few and he/she does not happen to be online at time the
On Thu, Jun 28, 2012 at 3:23 PM, Fernando Perez fperez@gmail.comwrote:
On Thu, Jun 28, 2012 at 3:06 PM, srean srean.l...@gmail.com wrote:
What I like about having two lists is that on one hand it does not
prevent me or you from participating in both, on the other hand it
allows those
On Wed, May 23, 2012 at 4:16 PM, Kathleen M Tacina
kathleen.m.tac...@nasa.gov wrote:
On Wed, 2012-05-23 at 17:31 -0500, Nathaniel Smith wrote:
On Wed, May 23, 2012 at 10:53 PM, Travis Oliphant tra...@continuum.io
wrote: To be clear, I'm not opposed to the change, and it looks like we
On Fri, May 11, 2012 at 1:12 PM, Mark Wiebe mwwi...@gmail.com wrote:
On Fri, May 11, 2012 at 2:18 PM, Pauli Virtanen p...@iki.fi wrote:
On 11.05.2012 17:54, Frédéric Bastien wrote:
In Theano we use a view, but that is not relevant, as it is the
compiler that tells what is in-place. So this
On Wed, Feb 15, 2012 at 12:45 PM, Alan G Isaac alan.is...@gmail.com wrote:
for the core developers. The right way to produce a
governance structure is to make concrete proposals and
show how these proposals are in the interest of the
*developers* (as well as of the users).
At this point,
On Fri, Nov 4, 2011 at 11:59 AM, Pauli Virtanen p...@iki.fi wrote:
I have a feeling that if you don't start by mathematically defining the
scalar operations first, and only after that generalize them to arrays,
some conceptual problems may follow.
Yes. I was going to mention this point as
On Fri, Nov 4, 2011 at 1:03 PM, Gary Strangman
str...@nmr.mgh.harvard.eduwrote:
To push this forward a bit, can I propose that IGNORE behave as: PnC
x = np.array([1, 2, 3])
y = np.array([10, 20, 30])
ignore(x[2])
x
[1, IGNORED(2), 3]
x + 2
[3, IGNORED(4), 5]
x + y
[11,
On Fri, Nov 4, 2011 at 2:41 PM, Pauli Virtanen p...@iki.fi wrote:
On 04.11.2011 20:49, T J wrote:
[clip]
To push this forward a bit, can I propose that IGNORE behave as: PnC
The *n* classes can be a bit confusing in Python:
### PnC
x = np.array([1, 2, 3])
y = np.array([4, 5, 6
On Fri, Nov 4, 2011 at 2:29 PM, Nathaniel Smith n...@pobox.com wrote:
On Fri, Nov 4, 2011 at 1:22 PM, T J tjhn...@gmail.com wrote:
I agree that it would be ideal if the default were to skip IGNORED
values,
but that behavior seems inconsistent with its propagation properties
(such
as when
On Fri, Nov 4, 2011 at 3:38 PM, Nathaniel Smith n...@pobox.com wrote:
On Fri, Nov 4, 2011 at 3:08 PM, T J tjhn...@gmail.com wrote:
On Fri, Nov 4, 2011 at 2:29 PM, Nathaniel Smith n...@pobox.com wrote:
Continuing my theme of looking for consensus first... there are
obviously a ton of ugly
, as an example
from T J shows:
a = 1
a += IGNORE(3)
# -> a := a + IGNORE(3)
# -> a := IGNORE(4)
# -> a == IGNORE(1)
which is different from
a = 1 + IGNORE(3)
# -> a == IGNORE(4)
Damn, it seemed so good. Probably anything except destructive assignment
leads to problems like this with propagating
On Fri, Nov 4, 2011 at 6:31 PM, Pauli Virtanen p...@iki.fi wrote:
On 05.11.2011 00:14, T J wrote:
[clip]
a = 1
a += 2
a += IGNORE
b = 1 + 2 + IGNORE
I think having a == b is essential. If they can be different, that will
only lead to confusion. On this point
On Fri, Nov 4, 2011 at 8:03 PM, Nathaniel Smith n...@pobox.com wrote:
On Fri, Nov 4, 2011 at 7:43 PM, T J tjhn...@gmail.com wrote:
On Fri, Nov 4, 2011 at 6:31 PM, Pauli Virtanen p...@iki.fi wrote:
An acid test for proposed rules: given two arrays `a` and `b`,
a = [1, 2, IGNORED(3
I recently put together a Cython example which uses the neighborhood
iterator. It was trickier than I thought it would be, so I thought to
share it with the community. The function takes a 1-dimensional array
and returns a 2-dimensional array of neighborhoods in the original
area. This is
While reading the documentation for the neighborhood iterator, it
seems that it can only handle rectangular neighborhoods. Have I
understood this correctly? If it is possible to do non-rectangular
regions, could someone post an example/sketch of how to do this?
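For the 1-D case in the Cython example above, later NumPy versions offer a pure-Python equivalent; a sketch (sliding_window_view needs NumPy >= 1.20, and unlike the neighborhood iterator it does not pad at the array edges):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Each row of `windows` is one neighborhood of the input, built as a
# zero-copy strided view rather than via the C-level neighborhood iterator.
x = np.arange(6)
windows = sliding_window_view(x, window_shape=3)
assert windows.shape == (4, 3)
assert np.array_equal(windows[0], [0, 1, 2])
```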
On Mon, Apr 25, 2011 at 9:57 AM, Gael Varoquaux
gael.varoqu...@normalesup.org wrote:
We thought that we could simply have a PRNG per object, as in:
def __init__(self, prng=None):
if prng is None:
prng = np.random.RandomState()
self.prng = prng
I don't like this
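The same per-object pattern, restated as a sketch with the modern Generator API (NumPy >= 1.17); default_rng accepts None, a seed, or an existing Generator, which makes the injection idiom shorter. The class name here is hypothetical:

```python
import numpy as np

class Estimator:
    """Hypothetical class illustrating one PRNG per object."""
    def __init__(self, rng=None):
        # None -> fresh OS-seeded generator; an int or Generator -> reproducible.
        self.rng = np.random.default_rng(rng)

a = Estimator(rng=42)
b = Estimator(rng=42)
assert a.rng.random() == b.rng.random()  # same seed, same stream
```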
On Wed, Jan 26, 2011 at 5:02 PM, Joshua Holbrook
josh.holbr...@gmail.com wrote:
Ah, sorry for misunderstanding. That would actually be very difficult,
as the iterator required a fair bit of fixes and adjustments to the core.
The new_iterator branch should be 1.5 ABI compatible, if that helps.
Hi,
I tried upgrading today and had trouble building numpy (after rm -rf
build). My full build log is here:
http://www.filedump.net/dumped/build1274454454.txt
If someone can point me in the right direction, I'd appreciate it very
much. To excerpts from the log file:
Running from numpy
On Fri, May 21, 2010 at 8:51 AM, Pauli Virtanen p...@iki.fi wrote:
Fri, 21 May 2010 08:09:55 -0700, T J wrote:
I tried upgrading today and had trouble building numpy (after rm -rf
build). My full build log is here:
http://www.filedump.net/dumped/build1274454454.txt
Your SVN checkout
On Mon, May 10, 2010 at 8:37 PM, josef.p...@gmail.com wrote:
I went googling and found a new interpretation
numpy.random.pareto is actually the Lomax distribution also known as Pareto 2,
Pareto (II) or Pareto Second Kind distribution
Great!
So, from this it looks like numpy.random does
On Sun, May 9, 2010 at 4:49 AM, josef.p...@gmail.com wrote:
I think this is the same point, I was trying to make last year.
Instead of renormalizing, my conclusion was the following,
(copied from the mailinglist August last year)
my conclusion:
-
What
The docstring for np.pareto says:
This is a simplified version of the Generalized Pareto distribution
(available in SciPy), with the scale set to one and the location set to
zero. Most authors default the location to one.
and also:
The probability density for the Pareto
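The Lomax/Pareto relationship discussed in this thread can be checked numerically; a sketch, with shape and scale chosen for illustration (shifting numpy's Lomax samples by 1 and scaling recovers the classical Pareto):

```python
import numpy as np

rng = np.random.default_rng(0)
a, x_m = 3.0, 1.0

# numpy.random samples Lomax (Pareto II), supported on [0, inf);
# the classical Pareto(x_m, a) is (lomax + 1) * x_m, supported on [x_m, inf).
classical = (rng.pareto(a, size=100_000) + 1) * x_m

assert classical.min() >= x_m
# Classical Pareto mean is a*x_m/(a-1) = 1.5 for a=3, x_m=1; check loosely.
assert abs(classical.mean() - a * x_m / (a - 1)) < 0.05
```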
Hi,
Is there a way to sort the columns in an array? I need to sort it so
that I can easily go through and keep only the unique columns.
ndarray.sort(axis=1) doesn't do what I want as it destroys the
relative ordering between the various columns. For example, I would
like:
[[2,1,3],
[3,5,1],
On Thu, May 6, 2010 at 10:36 AM, josef.p...@gmail.com wrote:
there is a thread last august on unique rows which might be useful,
and a thread in Dec 2008 for sorting rows
something like
np.unique1d(c.view([('',c.dtype)]*c.shape[1])).view(c.dtype).reshape(-1,c.shape[1])
maybe it's
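Since NumPy 1.13 the view trick above is no longer needed: np.unique grew an axis argument. A sketch with a duplicated column:

```python
import numpy as np

a = np.array([[2, 1, 3, 1],
              [3, 5, 1, 5]])

# axis=1 treats whole columns as the units to deduplicate (sorting them
# lexicographically), so ordering within each column is preserved.
cols = np.unique(a, axis=1)
assert cols.shape == (2, 3)  # the repeated column [1, 5] collapses to one
```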
On Mon, Apr 26, 2010 at 10:03 AM, Charles R Harris
charlesr.har...@gmail.com wrote:
On Mon, Apr 26, 2010 at 10:55 AM, Charles R Harris
charlesr.har...@gmail.com wrote:
Hi All,
We need to make a decision for ticket #1123 regarding what nansum should
return when all values are nan. At some
On Mon, Apr 5, 2010 at 11:28 AM, Robert Kern robert.k...@gmail.com wrote:
On Mon, Apr 5, 2010 at 13:26, Erik Tollerud erik.tolle...@gmail.com wrote:
Hmm, unfortunate. So the best approach then is probably just to tell
people to install numpy first, then my package?
Yup.
And really, this
Hi,
I'm getting some strange behavior with logaddexp2.reduce:
from itertools import permutations
import numpy as np
x = np.array([-53.584962500721154, -1.5849625007211563, -0.5849625007211563])
for p in permutations([0,1,2]):
print p, np.logaddexp2.reduce(x[list(p)])
Essentially, the result
On Wed, Mar 31, 2010 at 10:30 AM, T J tjhn...@gmail.com wrote:
Hi,
I'm getting some strange behavior with logaddexp2.reduce:
from itertools import permutations
import numpy as np
x = np.array([-53.584962500721154, -1.5849625007211563, -0.5849625007211563])
for p in permutations([0,1,2
On Wed, Mar 31, 2010 at 1:21 PM, Charles R Harris
charlesr.har...@gmail.com wrote:
Looks like roundoff error.
So this is expected behavior?
In [1]: np.logaddexp2(-1.5849625007211563, -53.584962500721154)
Out[1]: -1.5849625007211561
In [2]: np.logaddexp2(-0.5849625007211563,
On Wed, Mar 31, 2010 at 3:36 PM, Charles R Harris
charlesr.har...@gmail.com wrote:
So this is expected behavior?
In [1]: np.logaddexp2(-1.5849625007211563, -53.584962500721154)
Out[1]: -1.5849625007211561
In [2]: np.logaddexp2(-0.5849625007211563, -53.584962500721154)
Out[2]: nan
I don't
On Wed, Mar 31, 2010 at 3:38 PM, David Warde-Farley d...@cs.toronto.edu wrote:
Unfortunately there's no good way of getting around order-of-
operations-related rounding error using the reduce() machinery, that I
know of.
That seems reasonable, but receiving a nan, in this case, does not.
Are
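One way around the order sensitivity of chaining logaddexp2 is the usual max-shift (log-sum-exp) trick; a sketch of the idea, not of the reduce machinery itself:

```python
from itertools import permutations

import numpy as np

def logsumexp2(x):
    # Shift by the max before exponentiating: exp2(x - m) stays in [0, 1],
    # so nothing overflows and the result barely depends on summation order.
    x = np.asarray(x, dtype=float)
    m = x.max()
    return m + np.log2(np.exp2(x - m).sum())

x = np.array([-53.584962500721154, -1.5849625007211563, -0.5849625007211563])
results = [logsumexp2(x[list(p)]) for p in permutations(range(3))]
assert np.isfinite(results).all()
assert np.allclose(results, results[0])
```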
On Wed, Mar 31, 2010 at 7:06 PM, Charles R Harris
charlesr.har...@gmail.com wrote:
That is a 32 bit kernel, right?
Correct.
Regarding the config.h, which config.h? I have a numpyconfig.h.
Which compilation options should I obtain and how? When I run
setup.py, I see:
C compiler: gcc
When passing in a list of longs and asking that the dtype be a float
(yes, losing precision), the error message is uninformative whenever
the long is larger than the largest float.
x =
Hi,
Suppose I have an array of shape: (n, k, k). In this case, I have n
k-by-k matrices. My goal is to compute the product of a (potentially
large) user-specified selection (with replacement) of these matrices.
For example,
x = [0,1,2,1,3,3,2,1,3,2,1,5,3,2,3,5,2,5,3,2,1,3,5,6]
says that I
Is there a better way to achieve the following, perhaps without the
python for loop?
x.shape
(1,3)
y.shape
(1,3)
z = empty(len(x))
for i in range(1):
...z[i] = dot(x[i], y[i])
...
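The loop is a row-wise dot product, which einsum expresses without the Python loop or an x*y temporary; a sketch (shapes assumed (n, 3) for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.random((10, 3))
y = rng.random((10, 3))

# Loop version from the thread.
z_loop = np.empty(len(x))
for i in range(len(x)):
    z_loop[i] = np.dot(x[i], y[i])

# Vectorized: contract over j for each row i.
z_einsum = np.einsum('ij,ij->i', x, y)
assert np.allclose(z_loop, z_einsum)
```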
On Mon, Sep 7, 2009 at 3:27 PM, T J tjhn...@gmail.com wrote:
On Mon, Sep 7, 2009 at 7:09 AM, Hans-Andreas Engel eng...@deshaw.com wrote:
If you wish to avoid the extra memory allocation implied by `x*y'
and get a ~4x speed-up, you can use a generalized ufunc
(numpy >= 1.3, stolen from the
On Mon, Sep 7, 2009 at 3:43 PM, T J tjhn...@gmail.com wrote:
Or perhaps I am just being dense.
Yes. I just tried to reinvent standard matrix multiplication.
On Mon, Sep 7, 2009 at 7:09 AM, Hans-Andreas Engel eng...@deshaw.com wrote:
If you wish to avoid the extra memory allocation implied by `x*y'
and get a ~4x speed-up, you can use a generalized ufunc
(numpy >= 1.3, stolen from the testcases):
z = numpy.core.umath_tests.inner1d(x, y)
This is
I have an array, and I need to index it like so:
z[...,x,:]
How can I write code which will index z, as above, when x is not known
ahead of time. For that matter, the particular dimension I am querying
is not known either. In case this is still confusing, I am looking
for the NumPy way to
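The usual answer is to build the index tuple at runtime; a sketch (for a plain integer index, np.take(z, x, axis=axis) does the same job):

```python
import numpy as np

def take_at(z, x, axis):
    # slice(None) is what ':' means; place the runtime index at `axis`.
    idx = [slice(None)] * z.ndim
    idx[axis] = x
    return z[tuple(idx)]

z = np.arange(24).reshape(2, 3, 4)
assert np.array_equal(take_at(z, 1, axis=1), z[:, 1, :])
assert np.array_equal(take_at(z, 2, axis=-2), z[..., 2, :])
```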
On Fri, Aug 7, 2009 at 11:54 AM, T J tjhn...@gmail.com wrote:
The reduce function of ufunc of a vectorized function doesn't seem to
respect the dtype.
def a(x,y): return x+y
b = vectorize(a)
c = array([1,2])
b(c, c) # use once to populate b.ufunc
d = b.ufunc.reduce(c)
c.dtype, type(d)
On Sat, Aug 8, 2009 at 8:54 PM, Neil Martinsen-Burrell n...@wartburg.edu wrote:
The ellipsis is a built-in python constant called Ellipsis. The colon
is a slice object, again a python built-in, called with None as an
argument. So, z[...,2,:] == z[Ellipsis,2,slice(None)].
Very helpful!
z = array([1,2,3,4])
z[[1]]
array([2])
z[(1,)]
2
I'm just curious: What is the motivation for this differing behavior?
Is it a necessary consequence of, for example, the following:
z[z < 3]
array([1,2])
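The difference comes from how the index object is interpreted, not from the values; a sketch:

```python
import numpy as np

z = np.array([1, 2, 3, 4])

# A tuple is unpacked into one index per dimension, so z[(1,)] is just z[1]:
assert z[(1,)] == 2

# A list is a fancy (integer-array) index along axis 0: it selects elements
# and always returns an array, which is why the two results differ.
assert np.array_equal(z[[1]], np.array([2]))
```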
I was wondering why vectorize doesn't make the ufunc available at the
topmost level
def a(x,y): return x + y
b = vectorize(a)
b.reduce
Instead, the ufunc is stored at b.ufunc.
Also, b.ufunc.reduce() doesn't seem to exist until I *use* the
vectorized function at least once. Can this be
The reduce function of ufunc of a vectorized function doesn't seem to
respect the dtype.
def a(x,y): return x+y
b = vectorize(a)
c = array([1,2])
b(c, c) # use once to populate b.ufunc
d = b.ufunc.reduce(c)
c.dtype, type(d)
(dtype('int32'), <type 'int'>)
c = array([[1,2,3],[4,5,6]])
Hi, the documentation for dot says that a value error is raised if:
If the last dimension of a is not the same size as the
second-to-last dimension of b.
(http://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.htm)
This doesn't appear to be the case:
a = array([[1,2],[3,4]])
b =
Oh. b.shape = (2,). So I suppose the second to last dimension is, in
fact, the last dimension...and 2 == 2.
nvm
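The 1-D resolution worked out above can be made concrete; a sketch showing both the accepted case and a genuine shape mismatch raising ValueError:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([10, 20])

# For 1-D b, its only dimension counts as the "second-to-last", so dot
# contracts a's last axis (length 2) against it.
assert np.array_equal(np.dot(a, b), [50, 110])

# A real mismatch does raise, as documented.
try:
    np.dot(a, np.array([10, 20, 30]))
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError")
```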
On Fri, Aug 7, 2009 at 2:19 PM, T J tjhn...@gmail.com wrote:
Hi, the documentation for dot says that a value error is raised if:
If the last dimension of a is not the same size
Hi,
Is there a good way to perform dot on an arbitrary list of arrays
which avoids using a loop? Here is what I'd like to avoid:
# m1, m2, m3 are arrays
out = np.eye(m1.shape[0])
prod = [m1, m2, m3, m1, m2, m3, m3, m2]
for m in prod:
... out = np.dot(out, m)
...
I was hoping for something
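The loop is a left fold over np.dot, so functools.reduce removes the explicit loop; a sketch with small illustrative matrices (np.linalg.multi_dot also exists and additionally optimizes the parenthesization for mixed shapes):

```python
from functools import reduce

import numpy as np

m1 = np.array([[1.0, 0.0], [0.0, 2.0]])
m2 = np.array([[0.0, 1.0], [1.0, 0.0]])
prod = [m1, m2, m1]

# Start from the identity and fold np.dot across the list.
out = reduce(np.dot, prod, np.eye(m1.shape[0]))
assert np.allclose(out, m1 @ m2 @ m1)
```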
On Tue, Jan 20, 2009 at 6:57 PM, Neal Becker ndbeck...@gmail.com wrote:
It seems the big chunks of time are used in data conversion between numpy
and my own vectors classes. Mine are wrappers around boost::ublas. The
conversion must be falling back on a very inefficient method since there is
On Mon, Nov 10, 2008 at 4:05 PM, Charles R Harris
[EMAIL PROTECTED] wrote:
I added log2 and exp2. I still need to do the complex versions. I think
logaddexp2 should go in also to complement these.
Same here, especially since logaddexp is present. Or was the idea
that both logexpadd and
On Thu, Nov 6, 2008 at 3:01 PM, T J [EMAIL PROTECTED] wrote:
On Thu, Nov 6, 2008 at 2:36 PM, Charles R Harris
[EMAIL PROTECTED] wrote:
I could add exp2, log2, and logaddexp2 pretty easily. Almost too easily, I
don't want to clutter up numpy with a lot of functions. However
On Fri, Nov 7, 2008 at 2:16 AM, David Cournapeau
[EMAIL PROTECTED] wrote:
And you have no site.cfg at all ?
Wow. I was too focused on the current directory and didn't realize I
had an old site.cfg in ~/.
Two points:
1) Others (myself included) might catch such silliness sooner if the
On Fri, Nov 7, 2008 at 1:58 AM, T J [EMAIL PROTECTED] wrote:
That the fortran wrappers were compiled using g77 is also apparent via
what is printed out during setup when ATLAS is detected:
gcc -pthread _configtest.o -L/usr/lib/atlas -llapack -lblas -o _configtest
ATLAS version 3.6.0 built
On Fri, Nov 7, 2008 at 1:48 AM, David Cournapeau
[EMAIL PROTECTED] wrote:
It works for me on Intrepid (64 bits). Did you install
libatlas3gf-base-dev ? (the names changed in intrepid).
I fear I am overlooking something obvious.
$ sudo aptitude search libatlas
p libatlas-3dnow-dev
On Fri, Nov 7, 2008 at 1:26 AM, David Cournapeau
[EMAIL PROTECTED] wrote:
David Cournapeau wrote:
Ok, I took a brief look at this: I forgot that Ubuntu and Debian added
an aditional library suffix to libraries depending on gfortran ABI. I
added support for this in numpy.distutils - which was
On Thu, Nov 6, 2008 at 1:48 PM, Charles R Harris
[EMAIL PROTECTED] wrote:
What is your particular interest in these other bases and why would
they be better than working in base e and converting at the end?
The interest is in information theory, where quantities are
(standardly) represented in
On Thu, Nov 6, 2008 at 2:36 PM, Charles R Harris
[EMAIL PROTECTED] wrote:
I could add exp2, log2, and logaddexp2 pretty easily. Almost too easily, I
don't want to clutter up numpy with a lot of functions. However, if there is
a community for these functions I will put them in.
I worry about
On Wed, Nov 5, 2008 at 12:00 PM, Charles R Harris
[EMAIL PROTECTED] wrote:
Hmm I wonder if the base function should be renamed logaddexp, then
logsumexp would apply to the reduce method. Thoughts?
As David mentioned, logsumexp is probably the traditional name, but as
the earlier link
On Tue, Nov 4, 2008 at 9:37 PM, Anne Archibald
[EMAIL PROTECTED] wrote:
2008/11/5 Charles R Harris [EMAIL PROTECTED]:
Hi All,
I'm thinking of adding some new ufuncs. Some possibilities are
expadd(a,b) = exp(a) + exp(b) -- For numbers stored as logs:
Surely this should be
Numpy doesn't seem to be finding my atlas install. Have I done
something wrong or misunderstood?
$ cd /usr/lib
$ ls libatlas*
libatlas.a libatlas.so libatlas.so.3gf libatlas.so.3gf.0
$ ls libf77*
libf77blas.a libf77blas.so libf77blas.so.3gf libf77blas.so.3gf.0
$ ls libcblas*
libcblas.a
On Mon, Nov 3, 2008 at 10:46 AM, T J [EMAIL PROTECTED] wrote:
Since these are all in the standard locations, I am building without a
site.cfg. Here is the beginning info:
Apparently, this is not enough. Only if I also set the ATLAS
environment variable am I able to get this working
Sorry, wrong list.
On Sun, Nov 2, 2008 at 11:34 AM, T J [EMAIL PROTECTED] wrote:
Hi,
I'm having trouble installing PyUblas 0.93.1 (same problems from the
current git repository). I'm in ubuntu 8.04 with standard boost
packages (1.34.1, I believe). Do you have any suggestions?
Thanks
On Mon, Oct 20, 2008 at 2:20 AM, A. G. wrote:
one well attached to 2 or more units). Is there any simple way in
numpy (scipy?) in which I can get the number of possible combinations
of wells attached to the different 3 units, without repetitions? For
example, I could have all 60 wells attached
On Tue, Oct 14, 2008 at 1:02 AM, Sebastian Haase [EMAIL PROTECTED] wrote:
b) I don't want to use Python / numpy API code in the C functions I'm
wrapping - so I limit myself to input arrays! Since array memory
does not distinguish between input or output (assuming there is no
copying needed
Hi,
I'm new to using SWIG and my reading of numpy_swig.pdf tells me that
the following typemap does not exist:
(int* ARGOUT_ARRAY2, int DIM1, int DIM2)
What is the recommended way to output a 2D array? It seems like I should use:
(int* ARGOUT_ARRAY1, int DIM1)
and then provide a python
Hi,
I'm getting a couple of test failures with Python 2.6, Numpy 1.2.0, Nose 0.10.4:
nose version 0.10.4
Hi,
For precision reasons, I almost always need to work with arrays whose
elements are log values. My thought was that it would be really neat
to have a 'logarray' class implemented in C or as a subclass of the
standard array class. Here is a sample of how I'd like to work with
these objects:
On Thu, May 8, 2008 at 12:26 AM, T J [EMAIL PROTECTED] wrote:
x = array([-2,-2,-3], base=2)
y = array([-1,-2,-inf], base=2)
z = x + y
z
array([-0.415037499279, -1.0, -3])
z = x * y
z
array([-3, -4, -inf])
z[:2].sum()
-2.41503749928
Whoops
s/array/logarray
On 5/8/08, Anne Archibald [EMAIL PROTECTED] wrote:
Is logarray really the way to handle it, though? it seems like you
could probably get away with providing a logsum ufunc that did the
right thing. I mean, what operations does one want to do on logarrays?
add -> logsum
subtract -> ?
multiply
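Anne's mapping sketched concretely: in the log domain, "+" is logaddexp and "*" is ordinary addition of logs, so plain ufuncs already cover the operations a logarray class would wrap (base-2 variants use logaddexp2):

```python
import numpy as np

log_x = np.log(np.array([0.25, 0.25, 0.125]))
with np.errstate(divide='ignore'):          # log(0) -> -inf, deliberately
    log_y = np.log(np.array([0.5, 0.25, 0.0]))

log_sum = np.logaddexp(log_x, log_y)   # log(x + y), computed in log space
log_prod = log_x + log_y               # log(x * y)

assert np.allclose(np.exp(log_sum), [0.75, 0.5, 0.125])
assert np.allclose(np.exp(log_prod), [0.125, 0.0625, 0.0])
```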