[Numpy-discussion] Bugs using complex192

2008-01-07 Thread Matts Bjoerck
Hi, I've started using complex192 for some calculations and came across two things that seem to be bugs: In [1]: sqrt(array([-1.0],dtype = complex192)) Out[1]: array([0.0+-6.1646549e-4703j], dtype=complex192) In [2]: sqrt(array([-1.0],dtype = complex128)) Out[2]: array([ 0.+1.j]) In [3]: x
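
The extended-precision complex type is spelled np.clongdouble on current NumPy (it appears as complex192 or complex256 depending on the platform's long double). A minimal sketch of the comparison the thread makes, assuming a modern NumPy where this bug has long been fixed:

```python
import numpy as np

# complex192 only exists where long double is 96 bits; np.clongdouble is
# the portable spelling of the extended-precision complex type.
z128 = np.sqrt(np.array([-1.0], dtype=np.complex128))
zld = np.sqrt(np.array([-1.0], dtype=np.clongdouble))

# On modern NumPy both agree on sqrt(-1) == 1j; the 1.0.x extended type
# returned garbage values like those quoted in the thread.
print(z128[0])
print(zld[0])
```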

Re: [Numpy-discussion] Bugs using complex192

2008-01-07 Thread lorenzo bolla
It doesn't work on Windows, either. In [35]: numpy.sqrt(numpy.array([-1.0], dtype=numpy.complex192)) Out[35]: array([0.0+2.9996087e-305j], dtype=complex192) In [36]: numpy.sqrt(numpy.array([-1.0], dtype=numpy.complex128)) Out[36]: array([ 0.+1.j]) In [37]: numpy.__version__ Out[37]:

Re: [Numpy-discussion] Bugs using complex192

2008-01-07 Thread Matthieu Brucher
Hi, I managed to reproduce your bugs on an FC6 box: import numpy as n n.sqrt(n.array([-1.0],dtype = n.complex192)) array([0.0+9.2747134e+492j], dtype=complex192) n.sqrt(n.array([-1.0],dtype = n.complex128)) array([ 0.+1.j]) x=n.array([0.0+0.0j, 1.0+0.0j], dtype=n.complex192) x

Re: [Numpy-discussion] Bugs using complex192

2008-01-07 Thread Matts Bjoerck
Thanks for the fast replies, now I know it's not my machine that gives me trouble. In the meantime I tested a couple of other functions. It seems that all of them fail with complex192. /Matts In [19]: x192 = arange(0,2,0.5,dtype = complex192)*pi In [20]: x128 = arange(0,2,0.5,dtype =

Re: [Numpy-discussion] Nasty bug using pre-initialized arrays

2008-01-07 Thread Ryan May
Charles R Harris wrote: On Jan 7, 2008 8:47 AM, Ryan May [EMAIL PROTECTED] wrote: Stuart Brorson wrote: I realize NumPy != Matlab, but I'd wager that most users would think that this is the natural behavior.. Well, that behavior won't

Re: [Numpy-discussion] Nasty bug using pre-initialized arrays

2008-01-07 Thread Zachary Pincus
For large arrays, it makes sense to do automatic conversions, as is also the case in functions taking output arrays, because the typecast can be pushed down into C where it is time and space efficient, whereas explicitly converting the array uses up temporary space. However, I can imagine an

[Numpy-discussion] CASTABLE flag

2008-01-07 Thread Charles R Harris
Hi All, I'm thinking that one way to make the automatic type conversion a bit safer to use would be to add a CASTABLE flag to arrays. Then we could write something like a[...] = typecast(b) where typecast returns a view of b with the CASTABLE flag set so that the assignment operator can check
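
No typecast helper or CASTABLE flag ever existed in NumPy; the names below are hypothetical, mimicking the proposal with a wrapper class and a checked assignment that refuses silent downcasts unless the caller opts in:

```python
import numpy as np

# Hypothetical sketch of the proposed typecast/CASTABLE idea.  Wrapping
# an array in typecast() marks it as explicitly castable; safe_assign
# rejects implicit downcasts for unwrapped arrays.
class typecast:
    def __init__(self, arr):
        self.arr = np.asarray(arr)

def safe_assign(a, b):
    if isinstance(b, typecast):       # explicit opt-in: cast freely
        a[...] = b.arr
        return
    b = np.asarray(b)
    if not np.can_cast(b.dtype, a.dtype, casting="safe"):
        raise TypeError(f"implicit downcast {b.dtype} -> {a.dtype}")
    a[...] = b

a = np.zeros(3, dtype=np.int32)
safe_assign(a, typecast([1.5, 2.5, 3.5]))   # allowed: explicitly marked
print(a)   # values truncate toward zero, as plain assignment does today
```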

Re: [Numpy-discussion] [C++-sig] Overloading sqrt(5.5)*myvector

2008-01-07 Thread Travis E. Oliphant
Bruce Sherwood wrote: Okay, I've implemented the scheme below that was proposed by Scott Daniels on the VPython mailing list, and it solves my problem. It's also much faster than using numpy directly: even with the def and if overhead: sqrt(scalar) is over 3 times faster than the numpy

Re: [Numpy-discussion] CASTABLE flag

2008-01-07 Thread Travis E. Oliphant
Charles R Harris wrote: Hi All, I'm thinking that one way to make the automatic type conversion a bit safer to use would be to add a CASTABLE flag to arrays. Then we could write something like a[...] = typecast(b) where typecast returns a view of b with the CASTABLE flag set so that

Re: [Numpy-discussion] Bugs using complex192

2008-01-07 Thread Nils Wagner
On Mon, 7 Jan 2008 19:42:40 +0100 Francesc Altet [EMAIL PROTECTED] wrote: On Monday 07 January 2008, Nils Wagner wrote: numpy.sqrt(numpy.array([-1.0], dtype=numpy.complex192)) Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'module' object has no

Re: [Numpy-discussion] Bugs using complex192

2008-01-07 Thread Francesc Altet
On Monday 07 January 2008, Nils Wagner wrote: numpy.sqrt(numpy.array([-1.0], dtype=numpy.complex192)) Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'module' object has no attribute 'complex192' numpy.__version__ '1.0.5.dev4673' It seems like you

Re: [Numpy-discussion] Nasty bug using pre-initialized arrays

2008-01-07 Thread Anne Archibald
On 07/01/2008, Charles R Harris [EMAIL PROTECTED] wrote: One place where Numpy differs from MatLab is the way memory is handled. MatLab is always generating new arrays, so for efficiency it is worth preallocating arrays and then filling in the parts. This is not the case in Numpy where lists

Re: [Numpy-discussion] CASTABLE flag

2008-01-07 Thread Charles R Harris
On Jan 7, 2008 12:00 PM, Travis E. Oliphant [EMAIL PROTECTED] wrote: Charles R Harris wrote: Hi All, I'm thinking that one way to make the automatic type conversion a bit safer to use would be to add a CASTABLE flag to arrays. Then we could write something like a[...] = typecast(b)

Re: [Numpy-discussion] Nasty bug using pre-initialized arrays

2008-01-07 Thread Charles R Harris
On Jan 7, 2008 12:08 PM, Anne Archibald [EMAIL PROTECTED] wrote: On 07/01/2008, Charles R Harris [EMAIL PROTECTED] wrote: One place where Numpy differs from MatLab is the way memory is handled. MatLab is always generating new arrays, so for efficiency it is worth preallocating arrays and

[Numpy-discussion] Is __array_interface__ supposed to work on numpy scalars?

2008-01-07 Thread Andrew Straw
Hi, I'm forwarding a bug from PyOpenGL. The developer, Mike Fletcher, is having troubles accessing a numpy scalar with the __array_interface__. Is this supposed to work? Or should __array_interface__ trigger an AttributeError on a numpy scalar? Note that I haven't done any digging on this
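
For what it's worth, on recent NumPy versions scalars do expose __array_interface__ (the question in this thread was whether they should). A quick probe, assuming a modern NumPy:

```python
import numpy as np

x = np.float64(3.0)
info = x.__array_interface__   # raises AttributeError only on old versions

# A scalar presents itself as a 0-d array: empty shape, 8-byte float.
print(info["shape"])     # ()
print(info["typestr"])   # '<f8' on little-endian platforms
```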

Re: [Numpy-discussion] CASTABLE flag

2008-01-07 Thread Scott Ransom
On Monday 07 January 2008 02:13:56 pm Charles R Harris wrote: On Jan 7, 2008 12:00 PM, Travis E. Oliphant [EMAIL PROTECTED] wrote: Charles R Harris wrote: Hi All, I'm thinking that one way to make the automatic type conversion a bit safer to use would be to add a CASTABLE flag to

[Numpy-discussion] Does float16 exist?

2008-01-07 Thread Darren Dale
One of my collaborators would like to use 16bit float arrays. According to http://www.scipy.org/Tentative_NumPy_Tutorial, and references to float16 in numpy.core.numerictypes, it appears that this should be possible, but the following doesn't work: a=arange(10, dtype='float16') TypeError: data
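
As the replies note, float16 did not exist in the NumPy of this thread; IEEE 754 half precision was added later (NumPy 1.6), where the failing line from the question works as-is:

```python
import numpy as np

# On NumPy >= 1.6 this succeeds; on the 1.0.x versions in this thread
# it raised the TypeError quoted above.
a = np.arange(10, dtype=np.float16)
print(a.dtype, a.dtype.itemsize)   # float16, 2 bytes per element

# Half precision carries only ~3 decimal digits; 0.1 rounds noticeably.
print(float(np.float16(0.1)))
```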

Re: [Numpy-discussion] Does float16 exist?

2008-01-07 Thread Matthieu Brucher
float16 is not defined in my version of Numpy :( Matthieu 2008/1/7, Darren Dale [EMAIL PROTECTED]: One of my collaborators would like to use 16bit float arrays. According to http://www.scipy.org/Tentative_NumPy_Tutorial, and references to float16 in numpy.core.numerictypes, it appears that

Re: [Numpy-discussion] Does float16 exist?

2008-01-07 Thread Travis E. Oliphant
Darren Dale wrote: One of my collaborators would like to use 16bit float arrays. According to http://www.scipy.org/Tentative_NumPy_Tutorial, and references to float16 in numpy.core.numerictypes, it appears that this should be possible, but the following doesnt work: No, it's only

[Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-07 Thread dmitrey
Some days ago a parallel numpy developed by Brian Granger was mentioned here. Does the project have a blog or website? Is there any description of the API and capabilities? When is the first release planned? Regards, D

Re: [Numpy-discussion] CASTABLE flag

2008-01-07 Thread Timothy Hochberg
Another possible approach is to treat downcasting similar to underflow. That is, give it its own flag in the errstate, and people can set it to ignore, warn or raise on downcasting as desired. One could potentially have two flags, one for downcasting across kinds (float-int, int-bool) and one for
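
NumPy's errstate never grew a downcast entry; the following hypothetical sketch mimics the proposal with a module-level mode and a checked assignment, using np.can_cast to detect downcasts:

```python
import warnings
from contextlib import contextmanager

import numpy as np

# Hypothetical: errstate-style control of downcasting.  "same_kind"
# casting treats any cross-kind conversion (e.g. float -> int) as a
# downcast to be ignored, warned about, or raised on.
_mode = {"downcast": "warn"}

@contextmanager
def downcast_errstate(downcast):
    old = _mode["downcast"]
    _mode["downcast"] = downcast
    try:
        yield
    finally:
        _mode["downcast"] = old

def assign(a, b):
    b = np.asarray(b)
    if not np.can_cast(b.dtype, a.dtype, casting="same_kind"):
        if _mode["downcast"] == "raise":
            raise TypeError(f"downcast {b.dtype} -> {a.dtype}")
        if _mode["downcast"] == "warn":
            warnings.warn(f"downcast {b.dtype} -> {a.dtype}")
    a[...] = b

a = np.zeros(2, dtype=np.int32)
with downcast_errstate("ignore"):
    assign(a, [1.9, 2.9])     # silently truncates, as assignment does today
print(a)
```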

Re: [Numpy-discussion] Is __array_interface__ supposed to work on numpy scalars?

2008-01-07 Thread Andrew Straw
Travis E. Oliphant wrote: Andrew Straw wrote: Hi, I'm forwarding a bug from PyOpenGL. The developer, Mike Fletcher, is having troubles accessing a numpy scalar with the __array_interface__. Is this supposed to work? Or should __array_interface__ trigger an AttributeError on a numpy scalar?

Re: [Numpy-discussion] CASTABLE flag

2008-01-07 Thread Anne Archibald
On 07/01/2008, Timothy Hochberg [EMAIL PROTECTED] wrote: I'm fairly dubious about assigning float to ints as is. First off it looks like a bug magnet to me due to accidentally assigning a floating point value to a target that one believes to be float but is in fact integer. Second, C-style

Re: [Numpy-discussion] CASTABLE flag

2008-01-07 Thread Zachary Pincus
Hello all, In order to help make things regarding this casting issue more explicit, let me present the following table of potential down-casts. (Also, for the record, nobody is proposing automatic up-casting of any kind. The proposals on the table focus on preventing some or all implicit
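
The table being discussed (truncated above) can be approximated by querying NumPy's own casting rules: "safe" forbids any precision loss, while "same_kind" additionally permits down-sizing within a kind but not cross-kind down-casts:

```python
import numpy as np

# Which conversions count as down-casts, per NumPy's casting lattice.
pairs = [(np.int64, np.int32),    # down-size within a kind
         (np.float64, np.int32),  # cross-kind down-cast
         (np.int32, np.float64)]  # up-cast
for src, dst in pairs:
    print(np.dtype(src), "->", np.dtype(dst),
          "safe:", np.can_cast(src, dst, casting="safe"),
          "same_kind:", np.can_cast(src, dst, casting="same_kind"))
```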

Re: [Numpy-discussion] Does float16 exist?

2008-01-07 Thread Charles R Harris
Hi, On Jan 7, 2008 1:00 PM, Darren Dale [EMAIL PROTECTED] wrote: One of my collaborators would like to use 16bit float arrays. According to http://www.scipy.org/Tentative_NumPy_Tutorial, and references to float16 in numpy.core.numerictypes, it appears that this should be possible, but the

Re: [Numpy-discussion] CASTABLE flag

2008-01-07 Thread Charles R Harris
Hi, On Jan 7, 2008 1:16 PM, Timothy Hochberg [EMAIL PROTECTED] wrote: Another possible approach is to treat downcasting similar to underflow. That is, give it its own flag in the errstate and people can set it to ignore, warn or raise on downcasting as desired. One could potentially have

Re: [Numpy-discussion] Is __array_interface__ supposed to work on numpy scalars?

2008-01-07 Thread Travis E. Oliphant
Andrew Straw wrote: Travis E. Oliphant wrote: Andrew Straw wrote: Hi, I'm forwarding a bug from PyOpenGL. The developer, Mike Fletcher, is having troubles accessing a numpy scalar with the __array_interface__. Is this supposed to work? Or should __array_interface__ trigger an

[Numpy-discussion] Unexpected integer overflow handling

2008-01-07 Thread Zachary Pincus
Hello all, On my (older) version of numpy (1.0.4.dev3896), I found several oddities in the handling of assignment of long-integer values to integer arrays: In : numpy.array([2**31], dtype=numpy.int8) --- ValueError
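
Whether an out-of-range Python int raises or wraps on assignment has varied across NumPy versions (the thread's 1.0.4 behavior differs from today's). An explicit bounds check via np.iinfo sidesteps the ambiguity:

```python
import numpy as np

# Check a value against an integer dtype's range before assigning,
# rather than relying on version-specific overflow behavior.
def fits(value, dtype):
    info = np.iinfo(dtype)
    return info.min <= value <= info.max

print(fits(2**31, np.int8))    # False: int8 holds -128..127
print(fits(100, np.int8))      # True
print(np.iinfo(np.int8).max)   # 127
```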

Re: [Numpy-discussion] Does float16 exist?

2008-01-07 Thread Darren Dale
On Monday 07 January 2008 03:53:06 pm Charles R Harris wrote: Hi, On Jan 7, 2008 1:00 PM, Darren Dale [EMAIL PROTECTED] wrote: One of my collaborators would like to use 16bit float arrays. According to http://www.scipy.org/Tentative_NumPy_Tutorial, and references to float16 in

Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-07 Thread Brian Granger
Dmitrey, This work is being funded by a new NASA grant that I have at Tech-X Corporation where I work. The grant officially begins as of Jan 18th, so not much has been done as of this point. We have however been having some design discussions with various people. Here is a broad overview of

Re: [Numpy-discussion] Does float16 exist?

2008-01-07 Thread Darren Dale
On Monday 07 January 2008 03:09:33 pm Travis E. Oliphant wrote: Darren Dale wrote: One of my collaborators would like to use 16bit float arrays. According to http://www.scipy.org/Tentative_NumPy_Tutorial, and references to float16 in numpy.core.numerictypes, it appears that this should be

Re: [Numpy-discussion] Is __array_interface__ supposed to work on numpy scalars?

2008-01-07 Thread Travis E. Oliphant
Andrew Straw wrote: Hi, I'm forwarding a bug from PyOpenGL. The developer, Mike Fletcher, is having troubles accessing a numpy scalar with the __array_interface__. Is this supposed to work? Or should __array_interface__ trigger an AttributeError on a numpy scalar? Note that I haven't done

Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-07 Thread dmitrey
The one thing I'm most interested in for now is why the simplest matrix operations are not yet implemented in parallel in numpy (for several-CPU computers, like my AMD Athlon X2). First of all it's related to matrix multiplication and division, either point or matrix (i.e. like

Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-07 Thread Robert Kern
dmitrey wrote: The one thing I'm most interested in for now is why the simplest matrix operations are not yet implemented in parallel in numpy (for several-CPU computers, like my AMD Athlon X2). First of all it's related to matrix multiplication and division, either point or
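
For background (not from the truncated reply itself): NumPy's elementwise operations run in a single thread inside NumPy, while matrix products are delegated to the BLAS library NumPy was built against, which may be multithreaded depending on the library (often controlled by environment variables such as OMP_NUM_THREADS, BLAS-dependent). A sketch of the delegated path:

```python
import numpy as np

# a + b, a * b: elementwise, single-threaded in NumPy itself.
# a @ b: handed to BLAS (dgemm), which may use several cores.
a = np.random.rand(200, 300)
b = np.random.rand(300, 100)

c = a @ b               # matrix product via BLAS
d = np.dot(a, b)        # same underlying call

print(c.shape)          # (200, 100)
print(np.allclose(c, d))
```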

Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-07 Thread Matthieu Brucher
2008/1/7, dmitrey [EMAIL PROTECTED]: The one thing I'm most interested in for now is why the simplest matrix operations are not yet implemented in parallel in numpy (for several-CPU computers, like my AMD Athlon X2). First of all it's related to matrix multiplication and division,

Re: [Numpy-discussion] CASTABLE flag

2008-01-07 Thread Timothy Hochberg
On Jan 7, 2008 2:00 PM, Charles R Harris [EMAIL PROTECTED] wrote: Hi, On Jan 7, 2008 1:16 PM, Timothy Hochberg [EMAIL PROTECTED] wrote: Another possible approach is to treat downcasting similar to underflow. That is, give it its own flag in the errstate and people can set it to

Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-07 Thread eric jones
Robert Kern wrote: dmitrey wrote: The one thing I'm most interested in for now is why the simplest matrix operations are not yet implemented in parallel in numpy (for several-CPU computers, like my AMD Athlon X2). First of all it's related to matrix multiplication and

Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-07 Thread Andrew Straw
dmitrey wrote: The one thing I'm most interested in for now is why the simplest matrix operations are not yet implemented in parallel in numpy (for several-CPU computers, like my AMD Athlon X2). For what it's worth, sometimes I *want* my numpy operations to happen only on one

Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-07 Thread David Cournapeau
Andrew Straw wrote: dmitrey wrote: The one thing I'm most interested in for now is why the simplest matrix operations are not yet implemented in parallel in numpy (for several-CPU computers, like my AMD Athlon X2). For what it's worth, sometimes I *want* my numpy

Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread Fernando Perez
On Jan 7, 2008 10:41 PM, David Cournapeau [EMAIL PROTECTED] wrote: Hi, for my work related to scons, I have a branch build_with_scons in the numpy trunk, which I have initialized exactly as documented on the numpy wiki (http://projects.scipy.org/scipy/numpy/wiki/MakingBranches). When I

Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread David Cournapeau
Fernando Perez wrote: On Jan 7, 2008 10:41 PM, David Cournapeau [EMAIL PROTECTED] wrote: Hi, for my work related to scons, I have a branch build_with_scons in the numpy trunk, which I have initialized exactly as documented on the numpy wiki

Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread Fernando Perez
On Jan 7, 2008 10:54 PM, David Cournapeau [EMAIL PROTECTED] wrote: I understand this if doing the merge at hand with svn merge (that's what I did previously), but I am using svnmerge, which is supposed to avoid all this (I find the whole process extremely error-prone). More specifically, I am

Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread David Cournapeau
Fernando Perez wrote: On Jan 7, 2008 10:54 PM, David Cournapeau [EMAIL PROTECTED] wrote: I understand this if doing the merge at hand with svn merge (that's what I did previously), but I am using svnmerge, which is supposed to avoid all this (I find the whole process extremely

Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread Matthieu Brucher
Hi David, How did you initialize svnmerge? Matthieu 2008/1/8, David Cournapeau [EMAIL PROTECTED]: Fernando Perez wrote: On Jan 7, 2008 10:41 PM, David Cournapeau [EMAIL PROTECTED] wrote: Hi, for my work related to scons, I have a branch build_with_scons in the numpy trunk,

Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread David Cournapeau
Matthieu Brucher wrote: Hi David, How did you initialize svnmerge ? As said in the numpy wiki. More precisely: - In a svn checkout of the trunk, do svn up to be up to date - svn copy TRUNK MY_BRANCH - use svnmerge init MY_BRANCH - svn ci -F svnmerge-commit.txt - svn switch

Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread Matthieu Brucher
2008/1/8, David Cournapeau [EMAIL PROTECTED]: Matthieu Brucher wrote: Hi David, How did you initialize svnmerge ? As said in the numpy wiki. More precisely: - In a svn checkout of the trunk, do svn up to be up to date - svn copy TRUNK MY_BRANCH - use svnmerge init MY_BRANCH

Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread Matthieu Brucher
2008/1/8, Matthieu Brucher [EMAIL PROTECTED]: 2008/1/8, David Cournapeau [EMAIL PROTECTED]: Matthieu Brucher wrote: Hi David, How did you initialize svnmerge ? As said in the numpy wiki. More precisely: - In a svn checkout of the trunk, do svn up to be up to date -

Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread David Cournapeau
Matthieu Brucher wrote: 2008/1/8, Matthieu Brucher [EMAIL PROTECTED]: 2008/1/8, David Cournapeau [EMAIL PROTECTED]: Matthieu Brucher wrote: Hi David, How did you initialize svnmerge?

Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread Matthieu Brucher
Oops, save for the /trunk:1-2871 part. This should be deleted before a commit to the trunk, I think. Yes, that's what I (quite unclearly) meant: since revision numbers are per-repository in svn, I don't understand the point of tracking trunk revisions: I would think that tracking the last