Hi,
I've started using complex192 for some calculations and came across two things
that seem to be bugs:
In [1]: sqrt(array([-1.0],dtype = complex192))
Out[1]: array([0.0+-6.1646549e-4703j], dtype=complex192)
In [2]: sqrt(array([-1.0],dtype = complex128))
Out[2]: array([ 0.+1.j])
In [3]: x
It doesn't work on Windows, either.
In [35]: numpy.sqrt(numpy.array([-1.0], dtype=numpy.complex192))
Out[35]: array([0.0+2.9996087e-305j], dtype=complex192)
In [36]: numpy.sqrt(numpy.array([-1.0], dtype=numpy.complex128))
Out[36]: array([ 0.+1.j])
In [37]: numpy.__version__
Out[37]:
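Until this is fixed, one possible workaround (a sketch, assuming a build that
actually provides complex192) is to round-trip through complex128, where sqrt
behaves correctly, at the cost of the extra long-double precision:
import numpy as np
x = np.array([-1.0], dtype=np.complex192)
# Compute in complex128, where sqrt is correct, then cast back.
# This sacrifices the extra precision of complex192, of course.
result = np.sqrt(x.astype(np.complex128)).astype(np.complex192)
print(result)  # expected: approximately 0.0+1.0j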
Hi,
I managed to reproduce your bugs on an FC6 box:
import numpy as n
n.sqrt(n.array([-1.0],dtype = n.complex192))
array([0.0+9.2747134e+492j], dtype=complex192)
n.sqrt(n.array([-1.0],dtype = n.complex128))
array([ 0.+1.j])
x=n.array([0.0+0.0j, 1.0+0.0j], dtype=n.complex192)
x
Thanks for the fast replies, now I know it's not my machine that gives me
trouble.
In the meantime I tested a couple of other functions. It seems that all of them
fail with complex192.
/Matts
In [19]: x192 = arange(0,2,0.5,dtype = complex192)*pi
In [20]: x128 = arange(0,2,0.5,dtype =
Charles R Harris wrote:
On Jan 7, 2008 8:47 AM, Ryan May [EMAIL PROTECTED] wrote:
Stuart Brorson wrote:
I realize NumPy != Matlab, but I'd wager that most users would think
that this is the natural behavior.
Well, that behavior won't
For large arrays, it makes sense to do automatic
conversions, as is also the case in functions taking output arrays,
because the typecast can be pushed down into C where it is time and
space efficient, whereas explicitly converting the array uses up
temporary space. However, I can imagine an
Hi All,
I'm thinking that one way to make the automatic type conversion a bit safer
to use would be to add a CASTABLE flag to arrays. Then we could write
something like
a[...] = typecast(b)
where typecast returns a view of b with the CASTABLE flag set so that the
assignment operator can check
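To make the idea concrete, here is a minimal pure-Python sketch; the CASTABLE
flag and the assignment hook are hypothetical, not existing numpy API:
import numpy as np

class _Castable(object):
    # Hypothetical stand-in for a view of b with the CASTABLE flag set.
    def __init__(self, arr):
        self.arr = np.asarray(arr)

def typecast(b):
    # Mark b as explicitly castable, as in: a[...] = typecast(b)
    return _Castable(b)

def checked_assign(a, b):
    # Sketch of the proposed assignment rule: downcast only when the
    # source has been explicitly marked, otherwise require a safe cast.
    if isinstance(b, _Castable):
        a[...] = b.arr.astype(a.dtype)
    elif np.can_cast(np.asarray(b).dtype, a.dtype):
        a[...] = b
    else:
        raise TypeError("implicit downcast rejected")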
Bruce Sherwood wrote:
Okay, I've implemented the scheme below that was proposed by Scott
Daniels on the VPython mailing list, and it solves my problem. It's also
much faster than using numpy directly: even with the def and if
overhead, sqrt(scalar) is over 3 times faster than the numpy
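The scheme, roughly (a sketch of the idea rather than Bruce's exact code):
try the C-speed math.sqrt first and fall back to numpy only when it does not
apply:
import math
import numpy

def sqrt(x):
    # Fast path: math.sqrt is several times cheaper for plain scalars.
    try:
        return math.sqrt(x)
    except TypeError:
        # Arrays (and anything else math.sqrt rejects) go to numpy.
        return numpy.sqrt(x)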
Charles R Harris wrote:
Hi All,
I'm thinking that one way to make the automatic type conversion a bit
safer to use would be to add a CASTABLE flag to arrays. Then we could
write something like
a[...] = typecast(b)
where typecast returns a view of b with the CASTABLE flag set so that
On Mon, 7 Jan 2008 19:42:40 +0100
Francesc Altet [EMAIL PROTECTED] wrote:
On Monday 07 January 2008, Nils Wagner wrote:
numpy.sqrt(numpy.array([-1.0],
dtype=numpy.complex192))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no
On Monday 07 January 2008, Nils Wagner wrote:
numpy.sqrt(numpy.array([-1.0], dtype=numpy.complex192))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute
'complex192'
numpy.__version__
'1.0.5.dev4673'
It seems like you
On 07/01/2008, Charles R Harris [EMAIL PROTECTED] wrote:
One place where Numpy differs from MatLab is the way memory is handled.
MatLab is always generating new arrays, so for efficiency it is worth
preallocating arrays and then filling in the parts. This is not the case in
Numpy where lists
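The contrast in styles looks roughly like this (a generic illustration, not
code from the thread):
import numpy as np

n = 1000

# MatLab-style: preallocate the result, then fill it in place.
a = np.empty(n)
for i in range(n):
    a[i] = i * 0.5

# Typical numpy/Python style: build a list, convert once at the end.
b = np.array([i * 0.5 for i in range(n)])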
On Jan 7, 2008 12:00 PM, Travis E. Oliphant [EMAIL PROTECTED] wrote:
Charles R Harris wrote:
Hi All,
I'm thinking that one way to make the automatic type conversion a bit
safer to use would be to add a CASTABLE flag to arrays. Then we could
write something like
a[...] = typecast(b)
On Jan 7, 2008 12:08 PM, Anne Archibald [EMAIL PROTECTED] wrote:
On 07/01/2008, Charles R Harris [EMAIL PROTECTED] wrote:
One place where Numpy differs from MatLab is the way memory is handled.
MatLab is always generating new arrays, so for efficiency it is worth
preallocating arrays and
Hi,
I'm forwarding a bug from PyOpenGL. The developer, Mike Fletcher, is
having troubles accessing a numpy scalar with the __array_interface__.
Is this supposed to work? Or should __array_interface__ trigger an
AttributeError on a numpy scalar? Note that I haven't done any digging
on this
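The behavior in question can be checked directly (a sketch; what happens
depends on the numpy version, which is exactly the open question):
import numpy as np

arr = np.zeros(3)
print(arr.__array_interface__)  # arrays expose the interface

scalar = np.float64(1.0)
# The disputed case: does a numpy scalar expose it, or raise AttributeError?
print(hasattr(scalar, '__array_interface__'))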
On Monday 07 January 2008 02:13:56 pm Charles R Harris wrote:
On Jan 7, 2008 12:00 PM, Travis E. Oliphant [EMAIL PROTECTED]
wrote:
Charles R Harris wrote:
Hi All,
I'm thinking that one way to make the automatic type conversion a
bit safer to use would be to add a CASTABLE flag to
One of my collaborators would like to use 16bit float arrays. According to
http://www.scipy.org/Tentative_NumPy_Tutorial, and references to float16 in
numpy.core.numerictypes, it appears that this should be possible, but the
following doesn't work:
a=arange(10, dtype='float16')
TypeError: data
float16 is not defined in my version of Numpy :(
Matthieu
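One quick way to see which float types a given build actually defines (a
generic check using the numerictypes module the original mail mentions):
import numpy as np
from numpy.core import numerictypes

# List every float-like name this build knows about.
print([name for name in dir(numerictypes) if name.startswith('float')])
print(hasattr(np, 'float16'))  # False on builds without half precision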
2008/1/7, Darren Dale [EMAIL PROTECTED]:
One of my collaborators would like to use 16bit float arrays. According to
http://www.scipy.org/Tentative_NumPy_Tutorial, and references to float16
in
numpy.core.numerictypes, it appears that
Darren Dale wrote:
One of my collaborators would like to use 16bit float arrays. According to
http://www.scipy.org/Tentative_NumPy_Tutorial, and references to float16 in
numpy.core.numerictypes, it appears that this should be possible, but the
following doesn't work:
No, it's only
Some days ago a parallel numpy being developed by Brian Granger was
mentioned here.
Does the project have a blog or website? Is there any description of the
API and its capabilities? When is the first release intended?
Regards, D
Another possible approach is to treat downcasting similarly to underflow. That
is, give it its own flag in the errstate, and people can set it to ignore,
warn or raise on downcasting as desired. One could potentially have two
flags, one for downcasting across kinds (float-int, int-bool) and one for
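For comparison, errstate already works this way for floating-point conditions,
so a downcast flag would slot in naturally; the 'downcast' key below is
hypothetical, only divide/over/under/invalid exist:
import numpy as np

# Existing machinery: per-condition control of numeric error handling.
with np.errstate(under='ignore'):
    np.exp(np.array([-1000.0]))  # underflow passes silently

# The analogous proposal (hypothetical, not real numpy API):
# with np.errstate(downcast='warn'):
#     a[...] = b  # would warn when b cannot be represented in a.dtype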
Travis E. Oliphant wrote:
Andrew Straw wrote:
Hi,
I'm forwarding a bug from PyOpenGL. The developer, Mike Fletcher, is
having troubles accessing a numpy scalar with the __array_interface__.
Is this supposed to work? Or should __array_interface__ trigger an
AttributeError on a numpy scalar?
On 07/01/2008, Timothy Hochberg [EMAIL PROTECTED] wrote:
I'm fairly dubious about assigning floats to ints as-is. First off, it looks
like a bug magnet to me, due to accidentally assigning a floating point value
to a target that one believes to be float but is in fact integer. Second,
C-style
Hello all,
In order to help make things regarding this casting issue more
explicit, let me present the following table of potential down-casts.
(Also, for the record, nobody is proposing automatic up-casting of
any kind. The proposals on the table focus on preventing some or all
implicit
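To make the hazard concrete, the down-cast under discussion currently happens
silently:
import numpy as np

a = np.zeros(2, dtype=np.int32)
a[...] = np.array([1.9, -1.9])
print(a)  # -> [ 1 -1]: the fractional part is silently discarded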
Hi,
On Jan 7, 2008 1:00 PM, Darren Dale [EMAIL PROTECTED] wrote:
One of my collaborators would like to use 16bit float arrays. According to
http://www.scipy.org/Tentative_NumPy_Tutorial, and references to float16
in
numpy.core.numerictypes, it appears that this should be possible, but the
Hi,
On Jan 7, 2008 1:16 PM, Timothy Hochberg [EMAIL PROTECTED] wrote:
Another possible approach is to treat downcasting similarly to underflow.
That is, give it its own flag in the errstate, and people can set it to
ignore, warn or raise on downcasting as desired. One could potentially have
Andrew Straw wrote:
Travis E. Oliphant wrote:
Andrew Straw wrote:
Hi,
I'm forwarding a bug from PyOpenGL. The developer, Mike Fletcher, is
having troubles accessing a numpy scalar with the __array_interface__.
Is this supposed to work? Or should __array_interface__ trigger an
Hello all,
On my (older) version of numpy (1.0.4.dev3896), I found several
oddities in the handling of assignment of long-integer values to
integer arrays:
In : numpy.array([2**31], dtype=numpy.int8)
---
ValueError
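The boundary behavior can be probed like this (a sketch; the results differ
across numpy versions, which is precisely the reported oddity):
import numpy as np

for value in (2**7 - 1, 2**7, 2**31):
    try:
        print(value, np.array([value], dtype=np.int8))
    except (ValueError, OverflowError) as exc:
        print(value, 'raised', type(exc).__name__)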
On Monday 07 January 2008 03:53:06 pm Charles R Harris wrote:
Hi,
On Jan 7, 2008 1:00 PM, Darren Dale [EMAIL PROTECTED] wrote:
One of my collaborators would like to use 16bit float arrays. According
to http://www.scipy.org/Tentative_NumPy_Tutorial, and references to
float16 in
Dmitrey,
This work is being funded by a new NASA grant that I have at Tech-X
Corporation where I work. The grant officially begins as of Jan 18th,
so not much has been done as of this point. We have however been
having some design discussions with various people.
Here is a broad overview of
On Monday 07 January 2008 03:09:33 pm Travis E. Oliphant wrote:
Darren Dale wrote:
One of my collaborators would like to use 16bit float arrays. According
to http://www.scipy.org/Tentative_NumPy_Tutorial, and references to
float16 in numpy.core.numerictypes, it appears that this should be
Andrew Straw wrote:
Hi,
I'm forwarding a bug from PyOpenGL. The developer, Mike Fletcher, is
having troubles accessing a numpy scalar with the __array_interface__.
Is this supposed to work? Or should __array_interface__ trigger an
AttributeError on a numpy scalar? Note that I haven't done
The one thing I'm very interested in for now is why the simplest
matrix operations are not yet implemented to run in parallel in numpy
(for several-CPU computers, like my AMD Athlon X2). First of all
it's related to matrix multiplication and division, either pointwise or
matrix (i.e. like
dmitrey wrote:
The one thing I'm very interested in for now is why the simplest
matrix operations are not yet implemented to run in parallel in numpy
(for several-CPU computers, like my AMD Athlon X2). First of all
it's related to matrix multiplication and division, either pointwise or
2008/1/7, dmitrey [EMAIL PROTECTED]:
The one thing I'm very interested in for now is why the simplest
matrix operations are not yet implemented to run in parallel in numpy
(for several-CPU computers, like my AMD Athlon X2). First of all
it's related to matrix multiplication and division,
On Jan 7, 2008 2:00 PM, Charles R Harris [EMAIL PROTECTED] wrote:
Hi,
On Jan 7, 2008 1:16 PM, Timothy Hochberg [EMAIL PROTECTED] wrote:
Another possible approach is to treat downcasting similar to underflow.
That is give it it's own flag in the errstate and people can set it to
Robert Kern wrote:
dmitrey wrote:
The one thing I'm very interested in for now is why the simplest
matrix operations are not yet implemented to run in parallel in numpy
(for several-CPU computers, like my AMD Athlon X2). First of all
it's related to matrix multiplication and
dmitrey wrote:
The one thing I'm very interested in for now is why the simplest
matrix operations are not yet implemented to run in parallel in numpy
(for several-CPU computers, like my AMD Athlon X2).
For what it's worth, sometimes I *want* my numpy operations to happen
only on one
Andrew Straw wrote:
dmitrey wrote:
The one thing I'm very interested in for now is why the simplest
matrix operations are not yet implemented to run in parallel in numpy
(for several-CPU computers, like my AMD Athlon X2).
For what it's worth, sometimes I *want* my numpy
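For background on why this is so: numpy itself does not thread these
operations; dense matrix multiplication is delegated to whatever BLAS numpy
was built against, so a threaded BLAS (ATLAS, MKL, ...) parallelizes dot()
with no numpy changes. A quick way to check which BLAS a build found (a
generic sketch):
import numpy as np

np.__config__.show()  # prints the BLAS/LAPACK setup numpy was built with

a = np.random.rand(500, 500)
b = np.random.rand(500, 500)
c = np.dot(a, b)  # delegated to BLAS; threaded only if the BLAS is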
On Jan 7, 2008 10:41 PM, David Cournapeau [EMAIL PROTECTED] wrote:
Hi,
for my work related to scons, I have a branch build_with_scons in
the numpy trunk, which I have initialized exactly as documented on the
numpy wiki (http://projects.scipy.org/scipy/numpy/wiki/MakingBranches).
When I
Fernando Perez wrote:
On Jan 7, 2008 10:41 PM, David Cournapeau [EMAIL PROTECTED] wrote:
Hi,
for my work related to scons, I have a branch build_with_scons in
the numpy trunk, which I have initialized exactly as documented on the
numpy wiki
On Jan 7, 2008 10:54 PM, David Cournapeau [EMAIL PROTECTED] wrote:
I understand this if doing the merge by hand with svn merge (that's what
I did previously), but I am using svnmerge, which is supposed to avoid
all this (I find the whole process extremely error-prone). More
specifically, I am
Fernando Perez wrote:
On Jan 7, 2008 10:54 PM, David Cournapeau [EMAIL PROTECTED] wrote:
I understand this if doing the merge by hand with svn merge (that's what
I did previously), but I am using svnmerge, which is supposed to avoid
all this (I find the whole process extremely
Hi David,
How did you initialize svnmerge?
Matthieu
2008/1/8, David Cournapeau [EMAIL PROTECTED]:
Fernando Perez wrote:
On Jan 7, 2008 10:41 PM, David Cournapeau [EMAIL PROTECTED]
wrote:
Hi,
for my work related on scons, I have a branch build_with_scons in
the numpy trunk,
Matthieu Brucher wrote:
Hi David,
How did you initialize svnmerge?
As said in the numpy wiki. More precisely:
- In a svn checkout of the trunk, do svn up to be up to date
- svn copy TRUNK MY_BRANCH
- use svnmerge init MY_BRANCH
- svn ci -F svnmerge-commit.txt
- svn switch
2008/1/8, David Cournapeau [EMAIL PROTECTED]:
Matthieu Brucher wrote:
Hi David,
How did you initialize svnmerge?
As said in the numpy wiki. More precisely:
- In a svn checkout of the trunk, do svn up to be up to date
- svn copy TRUNK MY_BRANCH
- use svnmerge init MY_BRANCH
2008/1/8, Matthieu Brucher [EMAIL PROTECTED]:
2008/1/8, David Cournapeau [EMAIL PROTECTED]:
Matthieu Brucher wrote:
Hi David,
How did you initialize svnmerge?
As said in the numpy wiki. More precisely:
- In a svn checkout of the trunk, do svn up to be up to date
-
Matthieu Brucher wrote:
2008/1/8, Matthieu Brucher [EMAIL PROTECTED]:
2008/1/8, David Cournapeau [EMAIL PROTECTED]:
Matthieu Brucher wrote:
Hi David,
How did you initialize svnmerge?
Oops, except for the /trunk:1-2871 part. This should be deleted before
a commit to the trunk, I think.
Yes, that's what I (quite unclearly) meant: since revision numbers are
per-repository in svn, I don't understand the point of tracking trunk
revisions: I would think that tracking the last