Francesc Altet wrote:
Hey, numexpr seems to be back, wow! :-D
On Tuesday 13 June 2006 18:56, Tim Hochberg wrote:
I've finally got around to looking at numexpr again. Specifically, I'm
looking at Francesc Altet's numexpr-0.2, with the idea of harmonizing
the two versions. Let me go
Francesc Altet wrote:
On Tuesday 13 June 2006 20:46, Tim Hochberg wrote:
Uh, I'm afraid the answer is yes. In PyTables, int64, while being a bit bizarre for
some users (especially on 32-bit platforms), is a type with the same rights
as the others, and we would like to give support
Ivan Vilata i Balaguer wrote:
Tim Hochberg wrote:
Francesc Altet wrote:
[...]
Uh, I'm afraid the answer is yes. In PyTables, int64, while being a bit bizarre for
some users (especially on 32-bit platforms), is a type with the same rights
as the others, and we would like to give
JJ wrote:
Hello. I am a matlab user learning the syntax of
numpy. I'd like to check that I am not missing some
easy steps on column selection and concatenation. The
example task is to determine if two columns selected
out of an array are of full rank (rank 2). Let's say
we have an array d that
I don't have anything constructive to add at the moment, so I'll just
throw out an unelucidated opinion:
+1 for longish names.
-1 for two sets of names.
-tim
___
Numpy-discussion mailing list
Numpy-discussion@lists.sourceforge.net
Sebastian Beca wrote:
Hi,
I'm working with NumPy/SciPy on some algorithms and I've run into some
important speed differences w.r.t. Matlab 7. I've narrowed the main speed
problem down to the operation of finding the euclidean distance
between two matrices that share one dimension rank (dist in
Christopher Barker wrote:
Bruce Southey wrote:
Please run the exact same code in Matlab that you are running in
NumPy. Many of Matlab's functions are very highly optimized, so these are
provided as binary functions. I think that you are running into this,
so you are not doing the correct
Alan G Isaac wrote:
On Sun, 18 Jun 2006, Sebastian Beca apparently wrote:
def dist():
    d = zeros([N, C], dtype=float)
    if N < C:
        for i in range(N):
            xy = A[i] - B
            d[i, :] = sqrt(sum(xy**2, axis=1))
        return d
    else:
        for j in range(C):
            xy = A - B[j]
            d[:, j] = sqrt(sum(xy**2, axis=1))
        return d
Johannes Loehnert wrote:
Hi,
## Output:
numpy.__version__: 0.9.8
y: [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
y**2: [ 0. 1. 4. 9. 16. 25. 36. 49. 64. 81.]
z: [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
z**2: [ 0.e+00 1.e+00 1.6000e+01 8.1000e+01
Regarding choice of float or int for default:
The number one priority for numpy should be to unify the three disparate
Python numeric packages. Whatever choice of defaults facilitates that is
what I support.
Personally, given no other constraints, I would probably just get rid of
the
Bill Baxter wrote:
Here's another possible now or never change:
fix rand(), eye(), ones(), zeros(), and empty() to ALL take either a
tuple argument or plain list.
Since a tuple seems to work fine as an argument I imagine you mean
something else.
I know this has been discussed before, but I
Travis Oliphant wrote:
Hmmm. One thing that bothers me is that it seems that those
arguing *against* this behavior are relatively long-time users of Python
while those arguing *for* it are from what I can tell somewhat new to
Python/NumPy. I'm not sure what this means.
Is the
Sven Schreiber wrote:
Travis Oliphant schrieb:
Bill Baxter wrote:
So in short my proposal is to:
-- make a.T a property of array that returns a.swapaxes(-2,-1),
-- make a.H a property of array that returns
a.conjugate().swapaxes(-2,-1)
and maybe
-- make a.M a property of
Tim Hochberg wrote:
Sven Schreiber wrote:
Travis Oliphant schrieb:
Bill Baxter wrote:
So in short my proposal is to:
-- make a.T a property of array that returns a.swapaxes(-2,-1),
-- make a.H a property of array that returns
a.conjugate().swapaxes(-2,-1
Bill Baxter wrote:
On 7/6/06, Tim Hochberg [EMAIL PROTECTED] wrote:
-) Being able to distinguish between row and column vectors; I guess
this is just not possible with arrays...
Why can't you distinguish between them the same way
Sven Schreiber wrote:
Tim Hochberg schrieb:
-) .I for inverse; actually, why not add that to arrays as well as
syntactic sugar?
Because it encourages people to do the wrong thing numerically speaking?
My understanding is that one almost never wants to compute
Sasha wrote:
On 7/6/06, Bill Baxter [EMAIL PROTECTED] wrote:
On 7/7/06, Sasha [EMAIL PROTECTED] wrote:
... I think it's much
more common to want to swap just two axes, and the last two seem a logical
choice since a) in the default C-ordering they're the closest together in
memory and b)
Sasha wrote:
On 7/6/06, Robert Kern [EMAIL PROTECTED] wrote:
...
I don't think that just because arrays are often used for linear algebra that
linear algebra assumptions should be built in to the core array type.
In addition, transpose is a (rank-2) array or matrix operation and
Sasha wrote:
On 7/6/06, Bill Baxter [EMAIL PROTECTED] wrote:
...
Yep, like Tim said. The usage is, say, N sets of basis vectors. Each set
of basis vectors is a matrix.
This brings up a feature that I really miss from numpy: an ability to do
array([f(x) for x in a])
Please
Bill Baxter wrote:
On 7/7/06, Tim Hochberg [EMAIL PROTECTED] wrote:
I'd caution here though that the H is another thing that's going to
encourage people to write code that's less accurate and slower than it
needs to be. Consider the simple equation
So, I put together a prototype dot function
dot(
Tim Hochberg wrote:
So, I put together a prototype dot function
dot(
Ooops! This wasn't supposed to go out yet, sorry.
More later.
-tim
So I put together a prototype for an extended dot function that takes
multiple arguments. This allows multiple dots to be computed in a single
call thusly:
dot(dot(dot(a, b), c), d) = dotn(a, b, c, d)
On Bill Baxter's suggestion, dotn attempts to do the dots in an order
that minimizes
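The order-minimizing idea is the classic matrix-chain problem. A minimal sketch of the approach (my illustration, not Tim's actual prototype) using the textbook dynamic program:

```python
import numpy as np

def dotn(*arrays):
    # Sketch of a multi-argument dot: choose the association order that
    # minimizes scalar multiplications via the matrix-chain dynamic program.
    # All inputs are assumed to be 2-d with compatible shapes.
    n = len(arrays)
    dims = [a.shape[0] for a in arrays] + [arrays[-1].shape[1]]
    cost = [[0] * n for _ in range(n)]
    split = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j] = float('inf')
            for k in range(i, j):
                c = (cost[i][k] + cost[k + 1][j]
                     + dims[i] * dims[k + 1] * dims[j + 1])
                if c < cost[i][j]:
                    cost[i][j] = c
                    split[i][j] = k

    def build(i, j):
        # Recursively multiply the subchain i..j at the recorded split point.
        if i == j:
            return arrays[i]
        k = split[i][j]
        return np.dot(build(i, k), build(k + 1, j))

    return build(0, n - 1)
```

For badly mismatched shapes (e.g. a long thin chain), the chosen order can save a large constant factor over naive left-to-right evaluation.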
Nils Wagner wrote:
Hi all,
how can I increase the number of digits in the output of str(.) ?
You can't as far as I know. For floats, you can use '%.nf'. For example:
'%.13f' % 493.4802200544680
-tim
>>> lam**2
493.48022005446808
>>> str(lam**2)
'493.480220054'
Travis Oliphant wrote:
Tom Denniston wrote:
The following works on a float array but not an object array. It
gives a very strange error message.
(Pdb) numpy.log(numpy.array([19155613843.7], dtype=object))
*** AttributeError: 'float' object has no attribute 'log'
This is
Norbert Nemec wrote:
unique1d is based on ediff1d, so it really calculates many differences
and compares those to 0.0
This is inefficient, even though this is hidden by the general
inefficiency of Python (It might be the reason for the two milliseconds,
though)
What is more: subtraction
Tim Hochberg wrote:
Norbert Nemec wrote:
unique1d is based on ediff1d, so it really calculates many differences
and compares those to 0.0
This is inefficient, even though this is hidden by the general
inefficiency of Python (It might be the reason for the two milliseconds,
though
Nick Fotopoulos wrote:
On Jul 13, 2006, at 10:17 PM, Tim Hochberg wrote:
Nick Fotopoulos wrote:
Dear all,
I often make use of numpy.vectorize to make programs read more
like the physics equations I write on paper. numpy.vectorize is
basically a wrapper for numpy.frompyfunc
Eric Emsellem wrote:
thanks for the tips. (indeed your add.reduce is correct: I just wrote
this down too quickly, in the script I have a sum included).
And yes you are right for the memory issue, so I may just keep the loop
in and try to make it work on a fast PC...(or use parallel processes)
[EMAIL PROTECTED] wrote:
Hi users,
I have some problems in NumPy with indexing speed for array-to-tensor
matrix transport.
With 20 cycles it's 9 sec! (without the sort(eigvals()) calls)
Many thanks
f.
The big problem you have here is that you are operating on your matrices
The recent message by Ferenc.Pintye (how does one pronounce that BTW)
reminded me of something I've been meaning to discuss: I think we can do
a better job dealing with stacked matrices. By stacked matrices I mean 3
(or more) dimensional arrays where the last two dimensions are
considered to
Fernando Perez wrote:
On 7/18/06, Tim Hochberg [EMAIL PROTECTED] wrote:
Eric Emsellem wrote:
thanks for the tips. (indeed your add.reduce is correct: I just
wrote
this down too quickly, in the script I have a sum included).
And yes you are right for the memory issue, so I may just keep
Martin Spacek wrote:
Hello,
I'm a bit ignorant of optimization in numpy.
I have a movie with 65535 32x32 frames stored in a 3D array of uint8
with shape (65535, 32, 32). I load it from an open file f like this:
import numpy as np
data = np.fromfile(f, np.uint8, count=65535*32*32)
Martin Spacek wrote:
Tim Hochberg wrote:
Here's an approach (mean_accumulate) that avoids making any copies of
the data. It runs almost 4x as fast as your approach (called baseline
here) on my box. Perhaps this will be useful:
--snip--
def mean_accumulate(data, indices
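The definition is clipped above; presumably it accumulated in place along these lines (a reconstruction of the idea with a simplified signature — the `indices` argument from the original is omitted here):

```python
import numpy as np

def mean_accumulate(data):
    # Average over the first axis without copying frames: accumulate each
    # frame into a single float64 buffer in place, then divide once.
    acc = np.zeros(data.shape[1:], dtype=np.float64)
    for frame in data:
        np.add(acc, frame, acc)  # in-place add avoids temporaries
    return acc / len(data)
```

The point is that each uint8 frame is added into the one preallocated float64 buffer, rather than building intermediate arrays for every frame.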
Tom Denniston wrote:
I was thinking about this in the context of Guido's comments at SciPy
2006 that much of the language is moving away from lists toward
iterators. He gave the keys of a dict as an example.
Numpy treats iterators, generators, etc as 0x0 PyObjects rather than
lazy
Keith Goodman wrote:
I have a very long list that contains many repeated elements. The
elements of the list can be either all numbers, or all strings, or all
dates [datetime.date].
I want to convert the list into a matrix where each unique element of
the list is assigned a consecutive
Tim Hochberg wrote:
Keith Goodman wrote:
I have a very long list that contains many repeated elements. The
elements of the list can be either all numbers, or all strings, or all
dates [datetime.date].
I want to convert the list into a matrix where each unique element of
the list
-0.5 from me if what we're talking about here is having mutating methods
return self rather than None. Chaining stuff is pretty, but having
methods that mutate self and return self looks like a source of elusive
bugs to me.
-tim
Rudolph van der Merwe wrote:
This definitely gets my vote as
Charles R Harris wrote:
Hi,
On 8/29/06, Tim Hochberg [EMAIL PROTECTED] wrote:
-0.5 from me if what we're talking about here is having mutating
methods
return self rather than None. Chaining stuff is pretty, but having
methods that mutate self
David M. Cooke wrote:
On Tue, 29 Aug 2006 14:03:39 -0700
Tim Hochberg [EMAIL PROTECTED] wrote:
Of these, clip, conjugate and round support an 'out' argument like that
supported by ufuncs; byteswap has a boolean argument telling it
whether to perform operations in place; and sort
Torgil Svensson wrote:
return uL,asmatrix(fromiter((idx[x] for x in L),dtype=int))
Is it possible for fromiter to take an optional shape (or count)
argument in addition to the dtype argument?
Yes. fromiter(iterable, dtype, count) works.
If both are given, it could
preallocate memory
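A quick illustration of the three-argument form: an endless iterator works precisely because the count argument both bounds the read and lets the output be preallocated.

```python
import numpy as np
from itertools import count

# itertools.count() never ends; count=10 stops the read after ten items
# and tells fromiter how much memory to preallocate up front.
squares = np.fromiter((i * i for i in count()), dtype=float, count=10)
```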
Christopher Barker wrote:
Fernando Perez wrote:
In [8]: N.array(3).shape
Out[8]: ()
In [11]: N.array([]).shape
Out[11]: (0,)
I guess my only remaining question is: what is the difference between
outputs #8 and #11 above? Is an empty shape tuple == array scalar,
with this?
d2_dt = numpy.dtype("4f8")
d2_iter = itertools.cycle([(1.0, numpy.nan, -1e10, 14.0)])
numpy.fromiter(d2_iter, d2_dt, 10)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: a float is required
numpy.__version__
'1.0b4'
//Torgil
On 8/30/06, Tim Hochberg
Keith Goodman wrote:
In what order would you like argsort to sort the values -inf, nan, inf?
Ideally, -inf should sort first, inf should sort last and nan should
raise an exception if present.
-tim
In numpy 1.0b1 nan is always left where it began:
EXAMPLE 1
x
matrix([[
A. M. Archibald wrote:
On 19/09/06, Tim Hochberg [EMAIL PROTECTED] wrote:
Keith Goodman wrote:
In what order would you like argsort to sort the values -inf, nan, inf?
Ideally, -inf should sort first, inf should sort last and nan should
raise an exception if present.
-tim
Charles R Harris wrote:
On 9/19/06, Tim Hochberg [EMAIL PROTECTED] wrote:
A. M. Archibald wrote:
On 19/09/06, Tim Hochberg [EMAIL PROTECTED] wrote:
Keith Goodman wrote:
In what order would you like
A. M. Archibald wrote:
On 19/09/06, Charles R Harris [EMAIL PROTECTED] wrote:
If this sort of thing can cause unexpected errors I wonder if it would be
worth it to have a global debugging flag that essentially causes isnan to
be called before any function applications.
That
A. M. Archibald wrote:
On 19/09/06, Tim Hochberg [EMAIL PROTECTED] wrote:
I'm not sure where the breakpoint is, but I was seeing failures for all
three sort types with N as high as 1. I suspect that they're all
broken in the presence of NaNs. I further suspect you'd need some
Travis Oliphant wrote:
Tim Hochberg wrote:
A. M. Archibald wrote:
On 19/09/06, Tim Hochberg [EMAIL PROTECTED] wrote:
I'm not sure where the breakpoint is, but I was seeing failures for all
three sort types with N as high as 1. I suspect that they're all
Charles R Harris wrote:
On 9/19/06, A. M. Archibald [EMAIL PROTECTED] wrote:
On 19/09/06, Charles R Harris [EMAIL PROTECTED] wrote:
For floats we could use something like:
lessthan(a,b) := a < b
Robert Kern wrote:
Sebastian Haase wrote:
Robert Kern wrote:
Sebastian Haase wrote:
I know that having too much knowledge of the details often makes one
forget what the newcomers will do and expect.
Please be more careful with such accusations. Repeated
Robert Kern wrote:
David M. Cooke wrote:
On Wed, Sep 20, 2006 at 03:01:18AM -0500, Robert Kern wrote:
Let me offer a third path: the algorithms used for .mean() and .var() are
substandard. There are much better incremental algorithms that entirely
avoid
the need to accumulate
[Sorry, this version should have less munged formatting since I clipped
the comments. Oh, and the Kahan sum algorithm was grabbed from
wikipedia, not mathworld]
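For reference, the compensated (Kahan) summation loop under discussion looks like this in plain Python; this is a sketch of the algorithm itself, not the C implementation being proposed:

```python
def kahan_sum(values):
    # Kahan compensated summation: `c` carries the low-order bits lost
    # when each value is added, and feeds them back into the next step.
    total = 0.0
    c = 0.0
    for x in values:
        y = x - c
        t = total + y
        c = (t - total) - y
        total = t
    return total
```

Summing many values of similar magnitude this way keeps the rounding error bounded by a few ulps instead of growing with the number of terms.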
Tim Hochberg wrote:
Robert Kern wrote:
David M. Cooke wrote:
On Wed, Sep 20, 2006 at 03:01:18AM -0500, Robert Kern
Robert Kern wrote:
David M. Cooke wrote:
On Wed, Sep 20, 2006 at 03:01:18AM -0500, Robert Kern wrote:
Let me offer a third path: the algorithms used for .mean() and .var() are
substandard. There are much better incremental algorithms that entirely
avoid
the need to accumulate
David M. Cooke wrote:
On Thu, 21 Sep 2006 11:34:42 -0700
Tim Hochberg [EMAIL PROTECTED] wrote:
Tim Hochberg wrote:
Robert Kern wrote:
David M. Cooke wrote:
On Wed, Sep 20, 2006 at 03:01:18AM -0500, Robert Kern wrote:
Let
Sebastian Haase wrote:
On Thursday 21 September 2006 15:28, Tim Hochberg wrote:
David M. Cooke wrote:
On Thu, 21 Sep 2006 11:34:42 -0700
Tim Hochberg [EMAIL PROTECTED] wrote:
Tim Hochberg wrote:
Robert Kern wrote:
David M. Cooke wrote
troubles with Numpy-1.0rc1 and I didn't find any help in the provided
setup.py. So, can someone tell me how to do it?
I don't use VisualStudio2003 on Windows to compile NumPy (I use mingw).
Tim Hochberg once used a microsoft compiler to compile a previous
version of NumPy
Stefan van der Walt wrote:
Hi all,
Currently, the power function returns '0' for negative powers of
integers:
In [1]: N.power(3,-2)
Out[1]: 0
(or, more confusingly)
In [1]: N.power(a,b)
Out[1]: 0
which is almost certainly not the answer you want. Two possible
solutions may be to
Francesc Altet wrote:
Hello,
Is the next a bug or a feature?
In [102]: f4 = numpy.ndarray(buffer="a\x00b"*4, dtype="f4", shape=3)
In [103]: f4
Out[103]: array([ 2.60561966e+20, 8.94319890e-39, 5.92050103e+20],
dtype=float32)
In [104]: f4[2] = 2
Travis Oliphant wrote:
Tim Hochberg wrote:
Francesc Altet wrote:
It's not that it's being built from ndarray, it's that the buffer
that you are passing it is read-only.
This is correct.
In fact, I'd argue that allowing
the writeable flag to be set to True
Travis Oliphant wrote:
Albert Strasheim wrote:
[1] 12.97% of function time
[2] 8.65% of function time
[3] 62.14% of function time
If statistics from elsewhere in the code would be helpful, let me
know,
and
I'll see if I can convince Quantify to cough
Francesc Altet wrote:
Hi,
I thought that numpy.isscalar was a good way of distinguishing a numpy scalar
from a python scalar, but it seems not:
numpy.isscalar(numpy.string_('3'))
True
numpy.isscalar('3')
True
Is there an easy (and fast, if possible) way to
Kenny Ortmann wrote:
Excuse my laziness for not looking this up; I googled it but could not find
a solution.
matlab has a
isreal(array)
where if the array is full of real numbers the value returned is 1.
I'm translating matlab code and ran across
if ~isreal(array)
array = abs(array)
Kenny Ortmann wrote:
There may be a better way, but:
alltrue(isreal(x))
Would work. As would:
not sometrue(x.imag)
In the above test you are already negating the test, so you could just
drop the not.
and if so is
there a way to extract the (a + ib) because the absolute
. That's to limit the number of bytecodes that
we need to support and keep the switch statement at a manageable size.
However, it doesn't look like that ever got implemented, so the answer
is probably no.
-tim
-Sebastian
On Wednesday 04 October 2006 09:32, Tim Hochberg wrote:
Ivan Vilata i
David M. Cooke wrote:
On Wed, 04 Oct 2006 10:19:08 -0700
Tim Hochberg [EMAIL PROTECTED] wrote:
Ivan Vilata i Balaguer wrote:
It seemed that discontiguous arrays worked OK in Numexpr since r1977 or
so, but I have come across some alignment or striding problems which can
be seen
Tim Hochberg wrote:
David M. Cooke wrote:
On Wed, 04 Oct 2006 10:19:08 -0700
Tim Hochberg [EMAIL PROTECTED] wrote:
Ivan Vilata i Balaguer wrote:
It seemed that discontiguous arrays worked OK in Numexpr since
r1977 or
so, but I have come across some alignment or striding problems
Bill Baxter wrote:
[There seem to have been some gmail delivery problems that prevented
my previous mail on this subject from being delivered]
I've proposed that we fix repmat to handle arbitrary dimensions before 1.0.
http://projects.scipy.org/scipy/numpy/ticket/292
I don't think this is
Travis Oliphant wrote:
Sven Schreiber wrote:
This is user adjustable. You change the error mode to raise on
'invalid' instead of passing silently, which is now the default.
-Travis
Could you please explain how this adjustment is done, or point to the
relevant
Travis Oliphant wrote:
Tim Hochberg wrote:
With python 2.5 out now, perhaps it's time to come up with a with
statement context manager. Something like:
from __future__ import with_statement
import numpy
class errstate(object):
def __init__(self, **kwargs
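The class is clipped above; presumably it continued roughly as follows (a reconstruction of the idea, not the code as posted): save the current error state on entry, apply the overrides, restore on exit.

```python
import numpy

class errstate(object):
    # Hypothetical reconstruction: apply seterr overrides on entry,
    # restore the previous settings on exit.
    def __init__(self, **kwargs):
        self.kwargs = kwargs

    def __enter__(self):
        self.old = numpy.seterr(**self.kwargs)

    def __exit__(self, *exc_info):
        numpy.seterr(**self.old)

# Demonstration: the override is visible only inside the block.
_saved = numpy.geterr()
with errstate(divide='ignore'):
    inside = numpy.geterr()['divide']
restored = numpy.geterr() == _saved
```

numpy.seterr conveniently returns the previous settings, so save-and-restore is two calls.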
Mark Bakker wrote:
My vote is for consistency in numpy.
But it is unclear what consistency is.
What is truly confusing for newbie Python users (and a source of
error even after 5 years of Python programming) is that
2/3
0
I recommend that you slap from __future__ import division into
Travis Oliphant wrote:
Travis Oliphant wrote:
Now, it would be possible to give ufuncs a dtype keyword argument that
allowed you to specify which underlying loop was to be used for the
calculation. That way you wouldn't have to convert inputs to complex
numbers before calling the
Travis Oliphant wrote:
Personally I think that the default error mode should be tightened
up.
Then people would only see these sort of things if they really care
about them. Using Python 2.5 and the errstate class I posted earlier:
# This is what I like for the
Travis Oliphant wrote:
I made some fixes to the asbuffer code which let me feel better about
exposing it in NumPy (where it is now named int_asbuffer).
This code takes a Python integer and a size and returns a buffer object
that points to that memory. A little test is performed by trying
Is anyone else seeing the multiarray tests all get skipped because of
an import error when compiling under Python 2.5? Everything else seems
to work and all the tests pass under 2.4.
-tim
Gerard Vermeulen wrote:
On Thu, 12 Oct 2006 11:04:55 -0700
Tim Hochberg [EMAIL PROTECTED] wrote:
Is anyone else seeing the multiarray tests all get skipped because of
an import error when compiling under Python 2.5? Everything else seems
to work and all the tests pass under 2.4
I just checked in a couple of changes to SVN. I was going to check in
errstate, but it looks like Travis beat me to it, so I contented myself
with adding a docstring and some tests. These tests are only run under
2.5; things seem to work fine here, but if someone on a Linux box who's
running
Albert Strasheim wrote:
Hello all
-----Original Message-----
From: [EMAIL PROTECTED] On Behalf Of Tim Hochberg
Sent: 12 October 2006 21:24
To: numpy-discussion
Subject: [Numpy-discussion] More SVN testing
I just checked in a couple of changes to SVN
Bill Baxter wrote:
On 10/12/06, Stefan van der Walt [EMAIL PROTECTED] wrote:
On Thu, Oct 12, 2006 at 08:58:21AM -0500, Greg Willden wrote:
On 10/11/06, Bill Baxter [EMAIL PROTECTED] wrote:
I tried to explain the argument at
http://www.scipy.org/NegativeSquareRoot
Greg Willden wrote:
On 10/13/06, A. M. Archibald [EMAIL PROTECTED] wrote:
At this point you might as well use a polynomial class that can
accomodate a variety of bases for the space of polynomials - X^n,
(X-a)^n, orthogonal polynomials (translated and
Charles R Harris wrote:
On 10/13/06, A. M. Archibald [EMAIL PROTECTED] wrote:
On 12/10/06, Charles R Harris [EMAIL PROTECTED] wrote:
Hi all,
I note that polyfit looks like it should work for single and
Charles R Harris wrote:
On 10/14/06, A. M. Archibald [EMAIL PROTECTED] wrote:
[SNIP]
Hmmm, I wonder if we have a dictionary of precisions indexed by dtype
somewhere?
Here's some code I stole from somewhere for computing EPS. It would be easy
enough to generate
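The snippet itself is clipped; it presumably resembled the usual halving loop (a sketch of the standard technique, not necessarily the stolen code):

```python
import numpy as np

def machine_eps(dtype):
    # Smallest eps of the given float type such that 1 + eps != 1:
    # keep halving while adding the next halving still changes 1.
    one, two, eps = dtype(1), dtype(2), dtype(1)
    while one + eps / two != one:
        eps = eps / two
    return eps
```

Building the dictionary of precisions mentioned above is then a one-liner over the float dtypes.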
Ivan Vilata i Balaguer wrote:
Looking at the ``ophelper()`` decorator in the ``expressions`` module of
Numexpr, I see the following code is used to check/replace arguments of
operators::
    for i, x in enumerate(args):
        if isConstant(x):
            args[i] = x = ConstantNode(x)
Ivan Vilata i Balaguer wrote:
Tim Hochberg wrote:
Ivan Vilata i Balaguer wrote:
    for i, x in enumerate(args):
        if isConstant(x):
            args[i] = ConstantNode(x)
        elif not isinstance(x, ExpressionNode):
            raise TypeError("unsupported
One thing that may be confusing the issue is that, as I understand it,
FORTRAN and CONTIGUOUS together represent three states which I'll call
FORTRAN_ORDER, C_ORDER and DISCONTIGUOUS. I periodically wonder if it
would be valuable to have a way to query the order directly: the result
would be
Charles R Harris wrote:
[SNIP]
I'm not talking about the keyword in the ravel call, I'm talking about
the flag in a. The question is: do we *need* a fortran flag? I am
arguing not, because the only need is for fortran-contiguous arrays
to pass to fortran functions, or translation from
My $0.02:
If histogram is going to get a makeover, particularly one that makes it
more complex than at present, it should probably be moved to SciPy.
Failing that, it should be moved to a submodule of numpy with similar
statistical tools. Preferably with consistent interfaces for all of the
Rudolph van der Merwe wrote:
I get the following error with RC3 on a RHE Linux box:
Python 2.4.3 (#4, Mar 31 2006, 12:12:43)
[GCC 3.4.5 20051201 (Red Hat 3.4.5-2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
import numpy
numpy.__version__
'1.0rc3'
Travis Oliphant wrote:
Tim Hochberg wrote:
Rudolph van der Merwe wrote:
I get the following error with RC3 on a RHE Linux box:
Python 2.4.3 (#4, Mar 31 2006, 12:12:43)
[GCC 3.4.5 20051201 (Red Hat 3.4.5-2)] on linux2
Type "help", "copyright", "credits" or "license" for more
Travis Oliphant wrote:
Tim Hochberg wrote:
Rudolph van der Merwe wrote:
I get the following error with RC3 on a RHE Linux box:
Python 2.4.3 (#4, Mar 31 2006, 12:12:43)
[GCC 3.4.5 20051201 (Red Hat 3.4.5-2)] on linux2
Type "help", "copyright", "credits" or "license" for more
Travis Oliphant wrote:
Tim Hochberg wrote:
Travis Oliphant wrote:
Tim Hochberg wrote:
Rudolph van der Merwe wrote:
I get the following error with RC3 on a RHE Linux box:
Python 2.4.3 (#4, Mar 31 2006, 12:12:43)
[GCC 3.4.5 20051201
Robert Kern wrote:
David Huard wrote:
Hi,
Is there an elegant way to reduce an array but conserve the reduced
dimension?
Currently,
a = random.random((10,10,10))
a.sum(1).shape
(10,10)
but i'd like to keep (10,1,10) so I can do a/a.sum(1) directly.
def
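The definition is clipped above; one way to write such a helper (a sketch of the idea, not the code from the reply) is to reduce and then re-insert the axis with length 1:

```python
import numpy as np

def sum_keepdim(a, axis):
    # Reduce along `axis` but keep it as a length-1 dimension, so the
    # result broadcasts against the original (e.g. for a / a.sum(1)).
    return np.expand_dims(a.sum(axis=axis), axis)

a = np.ones((10, 10, 10))
normalized = a / sum_keepdim(a, 1)  # every element becomes 1/10
```

Later NumPy versions spell this directly as a.sum(axis, keepdims=True).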
[CHOP]
OK, I've checked in changes to suppress all the warnings in the test
suite. I tried to be as targeted as possible so that any regressions
from the current state in terms of warnings should show up.
I suspect that there may be some issues with regards to masked arrays
issuing spurious
Francesc Altet wrote:
On Friday 20 October 2006 11:42, Sebastien Bardeau wrote:
[snip]
I can understand that numpy.scalars do not provide inplace operations
(like Python standard scalars, they are immutable), so I'd like to use
0-d Numpy.ndarrays. But:
d =
Brian Granger wrote:
Hi,
I am running numpy on AIX, compiling with xlc. Revision 1.0rc2 works
fine and passes all tests. But 1.0rc3 and more recent give the
following on import:
Warning: invalid value encountered in multiply
Warning: invalid value encountered in multiply
Warning: invalid
Brian Granger wrote:
Also, when I use seterr(all='ignore') the tests fail:
==
FAIL: Ticket #112
--
Traceback (most recent call last):
File
Brian Granger wrote:
When I set seterr(all='warn') I see the following:
In [1]: import numpy
/usr/common/homes/g/granger/usr/local/lib/python/numpy/lib/ufunclike.py:46:
RuntimeWarning: invalid value encountered in log
_log2 = umath.log(2)
be a pain in the neck
unless you can find some place to steal the relevant code from.
-tim
thanks
Brian
On 10/20/06, Tim Hochberg [EMAIL PROTECTED] wrote:
Brian Granger wrote:
Also, when I use seterr(all='ignore') the tests fail
Albert Strasheim wrote:
Hello all
I'm trying to generate random 32-bit integers. None of the following seem to
do the trick with NumPy 1.0.dev3383:
In [32]: N.random.randint(-2**31, 2**31-1)
ValueError: low >= high
In [43]: N.random.random_integers(-2**31, 2**31-1)
OverflowError: long int
Travis Oliphant wrote:
Robert Kern wrote:
Travis Oliphant wrote:
It looks like 1.0-x is doing the right thing.
The problem is 1.0*x for matrices is going to float64. For arrays it
returns float32 just like the 1.0-x
Why is this the right thing? Python floats