Re: [Numpy-discussion] (apparent) infinite loop in LAPACK/ATLAS

2009-11-04 Thread David Cournapeau
On Wed, Nov 4, 2009 at 2:55 PM, David Warde-Farley d...@cs.toronto.edu wrote:
 Hi all (mostly David C. since he probably knows all this horrible
 stuff),

 I noticed on my new laptop (with an Atom N280 in it) that when I run
 numpy.test(), around the 34th test it would loop, seemingly forever.

 Finding this a little odd, I tried an SVD on a small matrix and
 observed the same behaviour, narrowed it down with gdb to what seems
 to be an infinite loop involving dlamc1_ and dlamc3_,

Did you compile them without any optimization, as suggested in the
makefiles? NOOPT should not contain the -O option.

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] 1.4.0: Setting a firm release date for 1st december.

2009-11-04 Thread Ralf Gommers
On Mon, Nov 2, 2009 at 9:29 AM, David Cournapeau courn...@gmail.com wrote:

 Hi,

 I think it is about time to release 1.4.0. Instead of proposing a
 release date, I am setting a firm date for 1st December, and 16th
 November to freeze the trunk. If someone wants a different date, you
 have to speak now.

 There are a few issues I would like to clear up:
  - Documentation for datetime, in particular for the public C API
  - Snow Leopard issues, if any

 Otherwise, I think there have been quite a lot of new features. If
 people want to add new functionality or features, please do it soon,


It would be good if we could also have one more merge of the work in the doc
editor (close to 300 new/changed docstrings now). I can have it all reviewed
by the 13th.

Unless you object, I'd also like to include the distutils docs. Complete
docs with some possible minor inaccuracies are better than no docs.

Cheers,
Ralf


Re: [Numpy-discussion] 1.4.0: Setting a firm release date for 1st december.

2009-11-04 Thread Jarrod Millman
On Wed, Nov 4, 2009 at 1:37 AM, Ralf Gommers
ralf.gomm...@googlemail.com wrote:
 It would be good if we could also have one more merge of the work in the doc
 editor (close to 300 new/changed docstrings now). I can have it all reviewed
 by the 13th.

That would be great.  Thanks for taking care of that.

 Unless you object, I'd also like to include the distutils docs. Complete
 docs with some possible minor inaccuracies are better than no docs.

+1

-- 
Jarrod Millman
Helen Wills Neuroscience Institute
10 Giannini Hall, UC Berkeley
http://cirl.berkeley.edu/


Re: [Numpy-discussion] (apparent) infinite loop in LAPACK/ATLAS

2009-11-04 Thread David Warde-Farley
Hi David,

On 4-Nov-09, at 4:23 AM, David Cournapeau wrote:

 Did you compile them without any optimization, as suggested in the
 makefiles? NOOPT should not contain the -O option.

Yup, it contained -O0 -fPIC (-O0 I think is in fact more strict than  
having no -O option?). Have you seen this problem with broken NOOPT?

I'm not certain yet, but it looks like some combination of gcc 4.4 and
the Atom microarchitecture reintroduces this old bug, as -ffloat-store
seems to have fixed it.

numpy.test() runs through just fine once I recompiled LAPACK and then  
ATLAS as well (just rebuilding the lapack_LINUX.a and then running  
'make' in the ATLAS obj dir didn't seem to do it, so I had to wait  
through several hours of tuning again).

For anyone who might stumble upon this problem again, just to be safe  
I added -ffloat-store to both the flag lists in LAPACK's make.inc, and  
also used -Fa fi '-ffloat-store' when running configure with ATLAS.   
If this is indeed either a gcc 4.4 regression or an Atom-specific  
quirk it might start popping up more and more around here.

David


Re: [Numpy-discussion] strange performance on mac 2.5/2.6 32/64 bit

2009-11-04 Thread George Nurser
2009/11/3 Robin robi...@gmail.com:
 On Tue, Nov 3, 2009 at 6:14 PM, Robin robi...@gmail.com wrote:
 After some more pootling about I figured out that a lot of the performance
 loss comes from using 32 bit integers by default when compiled 64 bit.
 I asked this question on stackoverflow:
 http://stackoverflow.com/questions/1668899/fortran-32-bit-64-bit-performance-portability

This seems surprising -- our HPC fortran codes use 32 bit integers on
64 bit linux. Do you get a performance hit in a pure fortran program?
Is it a problem with the gfortran compiler perhaps?

 is there any way to use fortran with f2py from python in a way that
 doesn't require the code to be changed depending on platform?

 Including the -DF2PY_REPORT_ON_ARRAY_COPY option showed that the big
 performance hit was from f2py copying the arrays to cast from 64 bit
 to 32 bit.

Fortran 90 introduced the INTERFACE block, which allows you to use
different variable types as arguments to what appears externally to be
the same routine. It then feeds the arguments to the appropriate
version of the routine. I don't think f2py supports this, but it would
be really useful if it could.

Regards, George Nurser.


Re: [Numpy-discussion] persistent ImportError: No module named multiarray when moving cPickle files between machines

2009-11-04 Thread Reckoner
Here's an example:

On winxp 64-bit:

Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> import cPickle
>>> a = numpy.eye(10)
>>> cPickle.dump(a,open('from32bitxp.pkl','w'))
>>> import numpy.core.multiarray
>>> numpy.__version__
'1.0.4'


On linux 64 bit:

Python 2.5.4 (r254:67916, Feb  5 2009, 19:52:35)
[GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> import cPickle
>>> cPickle.load(open('from32bitxp.pkl'))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named multiarray
>>> numpy.__version__
'1.2.1'
>>> import numpy.core.multiarray


Note that I transfer the from32bitxp file from the winxp32 machine to
the linux host. Also, I've tried this with version 1.3 on winxp and
get the same problem on the linux host.

Here's more interesting info:

On linux:

>>> a = numpy.eye(10)
>>> cPickle.dump(a,open('from64bitLinux.pkl','w'))

upon transferring the file to winxp 32 and on winxp32:

>>> cPickle.load(open('from64bitLinux.pkl'))

See? No problem going from linux to winxp32; but problems going the other way.

Please let me know if you need more info on this.

Any help appreciated.

On Tue, Nov 3, 2009 at 4:55 AM, Bruce Southey bsout...@gmail.com wrote:
 On Mon, Nov 2, 2009 at 6:31 PM, Reckoner recko...@gmail.com wrote:
 thanks for the suggestion! I will look into it. The other thing is
 that the numpy arrays in question are actually embedded in another
 object. When I convert the numpy arrays into plain lists, and then
 cPickle them, there is no problem with any of the larger objects. That
 is the way we are currently working around this issue.

 Thanks again.
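[For reference, the list-conversion workaround described above can be sketched as follows; this is a minimal reconstruction using the modern pickle module (cPickle on Python 2 behaves the same way), not the poster's actual code:]

```python
import pickle

import numpy as np

# Convert the array to plain Python lists before pickling; nested lists
# unpickle portably across platforms and numpy versions.
a = np.eye(10)
payload = pickle.dumps(a.tolist())

# On the receiving host, rebuild the array from the unpickled lists.
restored = np.array(pickle.loads(payload))
assert (restored == a).all()
```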

 On Mon, Nov 2, 2009 at 2:43 PM, Bruce Southey bsout...@gmail.com wrote:
 On Mon, Nov 2, 2009 at 2:42 PM, Reckoner recko...@gmail.com wrote:
 Anybody have any ideas here?

 Otherwise, I'm thinking this should be posted to the numpy bugs list.
 What's the best way to report a bug of this kind?

 Thanks!

 On Fri, Oct 30, 2009 at 5:48 PM, Reckoner recko...@gmail.com wrote:
 Robert Kern wrote:
 You can import numpy.core.multiarray on both machines?

 Yes. For each machine separately, you can cPickle files with numpy
 arrays without problems loading/dumping. The problem comes from
 transferring the win32 cPickle'd files to Linux 64 bit and then trying
 to load them. Transferring cPickle'd files that do *not* have numpy
 arrays works as expected. In other words, cPickle'd lists transfer fine
 back and forth between the two machines. In fact, we currently get
 around this problem by converting the numpy arrays to lists,
 transferring them, and then re-numpy-ing them on the respective hosts.

 thanks.


 On Fri, Oct 30, 2009 at 11:13 AM, Reckoner recko...@gmail.com wrote:
 Hi,

 % python -c 'import numpy.core.multiarray'

 works just fine, but when I try to load a file that I have transferred
 from another machine running Windows to one running Linux, I get:

 %  python -c 'import cPickle;a=cPickle.load(open("matrices.pkl"))'

 Traceback (most recent call last):
  File string, line 1, in module
 ImportError: No module named multiarray

 otherwise, cPickle works normally when transferring files that *do*
 not contain numpy arrays.

 I am using version 1.2 on both machines. It's not so easy for me to
 change versions, by the way, since this is the version that my working
 group has decided to standardize on for this effort.


 Any help appreciated.




 Have you tried the other Cookbook approaches:
 http://www.scipy.org/Cookbook/InputOutput
 Like using numpy's own array io functions - load/save(z)?
 (seems to work between 64-bit Windows 7 and 64-bit Linux - each has
 different numpy versions)

 Bruce

 Can you provide a small self-contained example of the
 problem, including object creation, especially as your example does not
 import numpy?

 Really you have to start at the beginning (like pickling and
 transferring numpy arrays) and then increase the complexity to include
 the object.


 Bruce



Re: [Numpy-discussion] f2py-users list not working

2009-11-04 Thread George Nurser
Fortran can accept preprocessor directives, but f2py cannot.
You first need to preprocess a .F (or .F90) file to create a .f (or
.f90) file, which you then pass to f2py.
The way I preprocess the .F file is to have statements like
integer*INTSIZE :: i,j,k

So preprocess file.F e.g. in gfortran with
gfortran -E -DINTSIZE=8 file.F  -o outdir/file.f

The outdir is necessary in a case-insensitive file system (like
default mac OSX) to prevent the .f files overwriting the .F file.
Alternatively, it may be possible to use some other suffix than .f,
but I've not tried that.

Then f2py file.f

George Nurser.

 - Original message -
 Subject: writing module for 32/64 bit
 From: Robin robi...@gmail.com
 To: f2py-us...@cens.ioc.ee

 Hi,

 I would like to write some subroutines in fortran involving integers
 and distribute them in a small package.  Users might be on 32 bit or
 64 bit.

 Is there an easy or recommended way to approach this? Ideally I would
 like to work with the native integer type - but if I use 'integer' in
 fortran it is always 32 bit and f2py always converts input when numpy
 is 64 bit. If I use integer*8 in the code then it's fine for 64 bit,
 but on 32 bit platforms f2py has to convert and it's not the
 native integer size.

 What I (think) I'd like to do is to use the native platform
 integer type, like numpy does, and then not worry about it. I found there
 are options like -fdefault-integer-8 for gfortran, but when I add that,
 stuff breaks (bus error) since I guess f2py doesn't know about it and
 is converting and passing 32 bits anyway.

 Is there any way around this, or do I need to maintain two different
 versions with different fortran code for 32-bit/64-bit? Or is it
 possible to achieve something like this with preprocessor #ifdefs? (I'm not
 sure how this works with fortran, or if f2py would be aware of it.)

 Cheers

 Robin



[Numpy-discussion] arrays comparison issue

2009-11-04 Thread Jean-Luc Menut
Hello all,



If I define 2 variables a and b by doing the following:

In [5]: a
Out[5]: array([ 1.7])

In [6]: b=array([0.8])+array([0.9])

In [7]: b
Out[7]: array([ 1.7])


if I test the equality of a and b, instead of obtaining True, I get:
In [8]: a==b
Out[8]: array([False], dtype=bool)

I know it's not related only to numpy but also to python.
How do you tackle this problem?

(I also tried in IDL and it works well: I guess it truncates the number).

Thanks ,

Jean-Luc


Re: [Numpy-discussion] arrays comparison issue

2009-11-04 Thread Lev Givon
Received from Jean-Luc Menut on Wed, Nov 04, 2009 at 09:52:15AM EST:
 Hello all,
 
 
 
 If I define 2 variables a and b by doing the following:
 
 In [5]: a
 Out[5]: array([ 1.7])
 
 In [6]: b=array([0.8])+array([0.9])
 
 In [7]: b
 Out[7]: array([ 1.7])
 
 
 if I test the equality of a and b, instead of obtaining True, I get:
 In [8]: a==b
 Out[8]: array([False], dtype=bool)
 
 I know it's not related only to numpy but also to python.
 How do you tackle this problem?
 
 (I also tried in IDL and it works well: I guess it truncates the number).
 
 Thanks ,

Try using the numpy.allclose function (with a suitable tolerance) to
compare the arrays.
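[A minimal sketch of that suggestion, using the arrays from the original post:]

```python
import numpy as np

a = np.array([1.7])
b = np.array([0.8]) + np.array([0.9])

# Exact comparison fails: the rounded sum of the doubles nearest 0.8 and
# 0.9 differs from the double nearest 1.7 by one unit in the last place.
print(a == b)             # [False]
# Comparison within a tolerance succeeds.
print(np.allclose(a, b))  # True
```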

L.G.


Re: [Numpy-discussion] arrays comparison issue

2009-11-04 Thread David Cournapeau
On Wed, Nov 4, 2009 at 11:52 PM, Jean-Luc Menut jeanluc.me...@free.fr wrote:
 Hello all,



 If I define 2 variables a and b by doing the following:

 In [5]: a
 Out[5]: array([ 1.7])

 In [6]: b=array([0.8])+array([0.9])

 In [7]: b
 Out[7]: array([ 1.7])


 if I test the equality of a and b, instead of obtaining True, I get:

You should never test for strict equality with floating point numbers
except for particular situations. You should instead test whether they
are close in some sense (up to N decimal, for example).

David


Re: [Numpy-discussion] persistent ImportError: No module named multiarray when moving cPickle files between machines

2009-11-04 Thread Charles R Harris
On Wed, Nov 4, 2009 at 7:06 AM, Reckoner recko...@gmail.com wrote:

 Here's an example:

 On winxp 64-bit:

 Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit
 (Intel)] on
 win32
 Type "help", "copyright", "credits" or "license" for more information.
 >>> import numpy
 >>> import cPickle
 >>> a = numpy.eye(10)
 >>> cPickle.dump(a,open('from32bitxp.pkl','w'))
 >>> import numpy.core.multiarray
 >>> numpy.__version__
 '1.0.4'
 

 On linux 64 bit:

 Python 2.5.4 (r254:67916, Feb  5 2009, 19:52:35)
 [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2
 Type "help", "copyright", "credits" or "license" for more information.
 >>> import numpy
 >>> import cPickle
 >>> cPickle.load(open('from32bitxp.pkl'))
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
 ImportError: No module named multiarray
 >>> numpy.__version__
 '1.2.1'
 >>> import numpy.core.multiarray
 


I wonder if this is a numpy version problem. Do you have a windows machine
with a more recent version of numpy on it?

snip

Chuck


Re: [Numpy-discussion] f2py-users list not working

2009-11-04 Thread Robin
On Wed, Nov 4, 2009 at 2:38 PM, George Nurser gnur...@googlemail.com wrote:
 Fortran can accept preprocessor directives, but f2py cannot.
 You first need to preprocess a .F (or .F90) file to create a .f (or
 .f90) file which you then pass to f2py
 The way I preprocess the .F file is to have statements like
 integer*INTSIZE :: i,j,k

 So preprocess file.F e.g. in gfortran with
 gfortran -E -DINTSIZE=8 file.F  -o outdir/file.f

 The outdir is necessary in a case-insensitive file system (like
 default mac OSX) to prevent the .f files overwriting the .F file.
 Alternatively, it may be possible to use some other suffix than .f,
 but I've not tried that.

 Then f2py file.f

That's great, thanks very much! It's more or less exactly what I was hoping for.

I wonder if it's possible to get distutils to do the preprocess step
from a setup.py script?

Cheers

Robin


Re: [Numpy-discussion] persistent ImportError: No module named multiarray when moving cPickle files between machines

2009-11-04 Thread Reckoner
Thanks for your reply.

No. I just tried it with the latest Windows XP 32-bit version

Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit (Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy.__version__
'1.3.0'

Same result on the Linux side:
ImportError: No module named multiarray.

Thanks!


On Wed, Nov 4, 2009 at 7:17 AM, Charles R Harris
charlesr.har...@gmail.com wrote:


 On Wed, Nov 4, 2009 at 7:06 AM, Reckoner recko...@gmail.com wrote:

 Here's an example:

 On winxp 64-bit:

 Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit
 (Intel)] on
 win32
 Type "help", "copyright", "credits" or "license" for more information.
  >>> import numpy
  >>> import cPickle
  >>> a = numpy.eye(10)
  >>> cPickle.dump(a,open('from32bitxp.pkl','w'))
  >>> import numpy.core.multiarray
  >>> numpy.__version__
 '1.0.4'
 

 On linux 64 bit:

 Python 2.5.4 (r254:67916, Feb  5 2009, 19:52:35)
 [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2
 Type "help", "copyright", "credits" or "license" for more information.
  >>> import numpy
  >>> import cPickle
  >>> cPickle.load(open('from32bitxp.pkl'))
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
 ImportError: No module named multiarray
  >>> numpy.__version__
 '1.2.1'
  >>> import numpy.core.multiarray
 


 I wonder if this is a numpy version problem. Do you have a windows machine
 with a more recent version of numpy on it?

 snip

 Chuck






Re: [Numpy-discussion] NumPy-Discussion Digest, Vol 38, Issue 11

2009-11-04 Thread Rick White
The difference between IDL and numpy is that IDL uses single precision  
floats by default while numpy uses doubles.  If you try it with  
doubles in IDL, you will see that it also returns false.

As David Cournapeau said, you should not expect different floating  
point arithmetic operations to give exactly the same result.  It's  
just luck if they do.  E.g., in IDL you'll find that 0.1+0.6 is not  
equal to 0.7.  That's because 0.1 and 0.6 are not exactly  
representable as floats.
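[The single-precision behaviour described above can be reproduced with numpy's float32; this is a sketch of the effect, with IDL itself not involved:]

```python
import numpy as np

# In single precision (IDL's default float), the rounded sum 0.1 + 0.6
# differs from the float32 closest to 0.7 by one unit in the last place.
a = np.float32(0.1) + np.float32(0.6)
b = np.float32(0.7)
print(a == b)  # False
print(float(a), float(b))
```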

On Nov 4, 2009, Jean-Luc Menut wrote:

 Hello all,



 If I define 2 variables a and b by doing the following:

 In [5]: a
 Out[5]: array([ 1.7])

 In [6]: b=array([0.8])+array([0.9])

 In [7]: b
 Out[7]: array([ 1.7])


 if I test the equality of a and b, instead of obtaining True, I get:
 In [8]: a==b
 Out[8]: array([False], dtype=bool)

 I know it's not related only to numpy but also to python.
 How do you tackle this problem?

 (I also tried in IDL and it works well: I guess it truncates the number).

 Thanks ,

 Jean-Luc



Re: [Numpy-discussion] Automatic string length in recarray

2009-11-04 Thread Thomas Robitaille


Pierre GM-2 wrote:
 
 As a workwaround, perhaps you could use np.object instead of np.str  
 while defining your array. You can then get the maximum string length  
 by looping, as David suggested, and then use .astype to transform your  
 array...
 

I tried this:

np.rec.fromrecords([(1,'hello'),(2,'world')],dtype=[('a',np.int8),('b',np.object_)])

but I get a TypeError:

---
TypeError Traceback (most recent call last)

/Users/tom/<ipython console> in <module>()

/Users/tom/Library/Python/2.6/site-packages/numpy/core/records.pyc in
fromrecords(recList, dtype, shape, formats, names, titles, aligned,
byteorder)
625 res = retval.view(recarray)
626 
--> 627 res.dtype = sb.dtype((record, res.dtype))
628 return res
629 

/Users/tom/Library/Python/2.6/site-packages/numpy/core/records.pyc in
__setattr__(self, attr, val)
432 if attr not in fielddict:
433 exctype, value = sys.exc_info()[:2]
--> 434 raise exctype, value
435 else:
436 fielddict =
ndarray.__getattribute__(self,'dtype').fields or {}

TypeError: Cannot change data-type for object array.

Is this a bug?

Thanks,

Thomas



Re: [Numpy-discussion] arrays comparison issue

2009-11-04 Thread Jean-Luc Menut
thanks to everybody


[Numpy-discussion] Solaris Sparc build broken

2009-11-04 Thread Michael Droettboom
I'm getting the following from r7603 on Solaris Sparc -- somehow related
to not having a long double version of nextafter available.  I realise
not everyone has access to (or is dependent on) this platform, so I'm 
willing to help in whatever way I can, I'm just not sure I understand 
the change yet.

compile options: '-Inumpy/core/include 
-Ibuild/src.solaris-2.8-sun4u-2.5/numpy/core/include/numpy 
-Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath 
-Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/include 
-I/usr/stsci/pyssg/Python-2.5.1/include/python2.5 
-Ibuild/src.solaris-2.8-sun4u-2.5/numpy/core/src/multiarray 
-Ibuild/src.solaris-2.8-sun4u-2.5/numpy/core/src/umath -c'
cc: numpy/core/src/npymath/ieee754.c
numpy/core/src/npymath/ieee754.c, line 172: #error: Needs nextafterl 
implementation for this platform
cc: acomp failed for numpy/core/src/npymath/ieee754.c

Cheers,
Mike

-- 
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA



Re: [Numpy-discussion] Automatic string length in recarray

2009-11-04 Thread Dan Yamins
On Tue, Nov 3, 2009 at 11:43 AM, David Warde-Farley d...@cs.toronto.eduwrote:

 On 2-Nov-09, at 11:35 PM, Thomas Robitaille wrote:

  But if I want to specify the data types:
 
  np.rec.fromrecords([(1,'hello'),(2,'world')],dtype=[('a',np.int8),
  ('b',np.str)])
 
  the string field is set to a length of zero:
 
  rec.array([(1, ''), (2, '')], dtype=[('a', '|i1'), ('b', '|S0')])
 
  I need to specify datatypes for all numerical types since I care about
  int8/16/32, etc, but I would like to benefit from the auto string
  length detection that works if I don't specify datatypes. I tried
  replacing np.str by None but no luck. I know I can specify '|S5' for
  example, but I don't know in advance what the string length should be
  set to.

 This is a limitation of the way the dtype code works, and AFAIK
 there's no easy fix. In some code I wrote recently I had to loop
 through the entire list of records i.e. max(len(foo[2]) for foo in
 records).
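[That looping approach can be sketched like this, with hypothetical records that have the string in the second field:]

```python
import numpy as np

records = [(1, 'hello'), (2, 'goodbye')]

# Scan the records for the longest string, then build the dtype with it.
maxlen = max(len(r[1]) for r in records)
arr = np.rec.fromrecords(records,
                         dtype=[('a', np.int8), ('b', 'S%d' % maxlen)])
assert arr.dtype['b'].itemsize == 7  # 'goodbye' is 7 characters
```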


Not to shamelessly plug my own project ... but more robust string type
detection is one of the features  of Tabular (
http://bitbucket.org/elaine/tabular/), and is one of the (kinds of) reasons
we wrote the package.  Perhaps using Tabular could be useful to you?

Dan


Re: [Numpy-discussion] using FortranFile to read data from a binary Fortran file

2009-11-04 Thread Neil Martinsen-Burrell
On 2009-11-03 20:18 , Brennan Williams wrote:
 ok I took a closer look at FortranFile and I'm now doing the following.
 Note that the first line in the file I'm reading
 has two double precision reals/floats followed by 8 32 bit integers.

f=FortranFile(fullfilename,endian='')
if f:
  hdr=f.readString()
  print 'hdr=',hdr
  print 'len=',len(hdr)
  t=struct.unpack('2d',hdr[0:16])
  print 't=',t
  i=struct.unpack('8i',hdr[16:])
  print 'i=',i

 This gives me...

 len=48
 t=(0.0,2000.0)
 i=(0,0,0,5,0,0,1,213)

 which is correct.

 So is that the best way to do it, i.e. if I have a line of mixed data
 types, use readString and then do my own unpacking?

That's correct.  FortranFile works most readily with records (equivalent 
to individual write statements) that are of uniform types and 
precisions.  This is a leftover from the way that my own Fortran codes 
were doing I/O.  To solve the problem correctly in FortranFile requires 
a way to specify the sequence of types to expect in a single record. 
This could then give the equivalent of what you have done above, which 
could also be written in a single unpack call

ans = struct.unpack('2d8i', hdr); t = ans[:2]; i = ans[2:]

The readString method just takes care of stripping off and 
error-checking the record length information that fortran unformatted 
I/O often uses.  I don't have much opportunity to work on Fortran 
unformatted I/O these days, but I would gladly accept any contributions.
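[The mixed-record unpacking can be checked against synthetic data; little-endian is assumed here for the sketch, whereas the real file's byte order comes from the endian= argument:]

```python
import struct

# Build a synthetic 48-byte record body: two float64 followed by eight
# int32, matching the header layout described earlier in the thread.
hdr = struct.pack('<2d8i', 0.0, 2000.0, 0, 0, 0, 5, 0, 0, 1, 213)
assert len(hdr) == 48

# Either unpack the fields in two steps...
t = struct.unpack('<2d', hdr[0:16])
i = struct.unpack('<8i', hdr[16:])
# ...or in a single call with a combined format string.
vals = struct.unpack('<2d8i', hdr)
assert vals[:2] == t and vals[2:] == i
```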

-Neil



Re: [Numpy-discussion] f2py-users list not working

2009-11-04 Thread George Nurser
2009/11/4 Robin robi...@gmail.com:
 On Wed, Nov 4, 2009 at 2:38 PM, George Nurser gnur...@googlemail.com wrote:
 Fortran can accept preprocessor directives, but f2py cannot.
 You first need to preprocess a .F (or .F90) file to create a .f (or
 .f90) file which you then pass to f2py
 The way I preprocess the .F file is to have statements like
  integer*INTSIZE :: i,j,k

 So preprocess file.F e.g. in gfortran with
 gfortran -E -DINTSIZE=8 file.F  -o outdir/file.f

 The outdir is necessary in a case-insensitive file system (like
 default mac OSX) to prevent the .f files overwriting the .F file.
 Alternatively, it may be possible to use some other suffix than .f,
 but I've not tried that.

 Then f2py file.f

 That's great thanks very much! It's more or less exactly what I was hoping 
 for.

 I wonder if it's possible to get distutils to do the preprocess step
 from a setup.py script?

If it's just search-and-replace, of course python can do it directly

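[A string-based sketch of doing the macro substitution directly in Python; the source line is hypothetical:]

```python
# Stand-in for the contents of file.F
src = "integer*INTSIZE :: i,j,k\n"

# Rough equivalent of `gfortran -E -DINTSIZE=8` for this simple macro
out = src.replace("INTSIZE", "8")
print(out)  # integer*8 :: i,j,k
```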
--George


Re: [Numpy-discussion] Automatic string length in recarray

2009-11-04 Thread Pierre GM

On Nov 4, 2009, at 11:35 AM, Thomas Robitaille wrote:



 Pierre GM-2 wrote:

 As a workwaround, perhaps you could use np.object instead of np.str
 while defining your array. You can then get the maximum string length
 by looping, as David suggested, and then use .astype to transform  
 your
 array...


 I tried this:

 np.rec.fromrecords([(1,'hello'),(2,'world')],dtype=[('a',np.int8), 
 ('b',np.object_)])

 but I get a TypeError:

Confirmed, it's a bug all right. Would you mind opening a ticket?
I'll try to take care of that in the next few days.


Re: [Numpy-discussion] Solaris Sparc build broken

2009-11-04 Thread David Cournapeau
On Thu, Nov 5, 2009 at 2:15 AM, Michael Droettboom md...@stsci.edu wrote:
 I'm getting the following from r7603 on Solaris Sparc -- somehow related
 to not having a long double version of nextafter available. I realise
 not everyone has access to (or is dependent on) this platform, so I'm
 willing to help in whatever way I can, I'm just not sure I understand
 the change yet.

The only way to implement nextafter that I know of requires to know
the exact representation of the floating point number, and long double
is unfortunately platform dependent.

What is the long double format on Solaris SPARC? (Big endian I
suppose, but how many bits for the mantissa and exponent? Does it
follow IEEE 754?)
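[For the double-precision case numpy already handles, np.nextafter shows the semantics being implemented; this is a sketch, not the long double code under discussion:]

```python
import numpy as np

x = np.float64(1.0)
nxt = np.nextafter(x, np.float64(2.0))  # next representable double above 1.0

# The gap just above 1.0 is exactly the float64 machine epsilon.
assert nxt > x
assert nxt - x == np.finfo(np.float64).eps
```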

cheers,

David


Re: [Numpy-discussion] Automatic string length in recarray

2009-11-04 Thread Thomas Robitaille


Pierre GM-2 wrote:
 
 Confirmed, it's a bug all right. Would you mind opening a ticket ?  
 I'll try to take care of that in the next few days.
 

Done - http://projects.scipy.org/numpy/ticket/1283

Thanks!

Thomas




Re: [Numpy-discussion] Solaris Sparc build broken

2009-11-04 Thread Charles R Harris
On Wed, Nov 4, 2009 at 12:11 PM, David Cournapeau courn...@gmail.comwrote:

 On Thu, Nov 5, 2009 at 2:15 AM, Michael Droettboom md...@stsci.edu
 wrote:
  I'm getting the following from r7603 on Solaris Sparc -- somehow related
  to not having a long double version of nextafter available.  I realise
  not everyone has access to (or is dependent on) this platform, so I'm
  willing to help in whatever way I can, I'm just not sure I understand
  the change yet.

 The only way to implement nextafter that I know of requires to know
 the exact representation of the floating point number, and long double
 is unfortunately platform dependent.

 What is the long double format on Solaris SPARC? (Big endian I
 suppose, but how many bits for the mantissa and exponent? Does it
 follow IEEE 754?)


Long double on SPARC is quad precision, which I believe *is* in one of the
ieee specs. In any case, it has a lot more precision than the extended
precision found in Intel derived architectures. Hmm, I wonder what ia64
does?

Chuck


Re: [Numpy-discussion] Solaris Sparc build broken

2009-11-04 Thread Charles R Harris
On Wed, Nov 4, 2009 at 12:35 PM, Charles R Harris charlesr.har...@gmail.com
 wrote:



 On Wed, Nov 4, 2009 at 12:11 PM, David Cournapeau courn...@gmail.comwrote:

 On Thu, Nov 5, 2009 at 2:15 AM, Michael Droettboom md...@stsci.edu
 wrote:
  I'm getting the following from r7603 on Solaris Sparc -- somehow related
  to not having a long double version of nextafter available.  I realise
  not everyone has access to (or is dependent on) this platform, so I'm
  willing to help in whatever way I can, I'm just not sure I understand
  the change yet.

 The only way to implement nextafter that I know of requires to know
 the exact representation of the floating point number, and long double
 is unfortunately platform dependent.

 What is the long double format on solaris sparc ? (big endian I
 suppose, but how many bits for the mantissa and  exponent ? Does it
 follow IEEE 754?)


 Long double on SPARC is quad precision, which I believe *is* in one of the
 ieee specs. In any case, it has a lot more precision than the extended
 precision found in Intel derived architectures. Hmm, I wonder what ia64
 does?


HP9000 also has quad precision:
http://docs.hp.com/en/B3906-90006/ch02s02.html

Chuck


Re: [Numpy-discussion] Solaris Sparc build broken

2009-11-04 Thread Charles R Harris
On Wed, Nov 4, 2009 at 12:38 PM, Charles R Harris charlesr.har...@gmail.com
 wrote:



 On Wed, Nov 4, 2009 at 12:35 PM, Charles R Harris 
 charlesr.har...@gmail.com wrote:



 On Wed, Nov 4, 2009 at 12:11 PM, David Cournapeau courn...@gmail.comwrote:

 On Thu, Nov 5, 2009 at 2:15 AM, Michael Droettboom md...@stsci.edu
 wrote:
  I'm getting the following from r7603 on Solaris Sparc -- somehow related
  to not having a long double version of nextafter available.  I realise
  not everyone has access to (or is dependent on) this platform, so I'm
  willing to help in whatever way I can, I'm just not sure I understand
  the change yet.

 The only way to implement nextafter that I know of requires knowing
 the exact representation of the floating point number, and long double
 is unfortunately platform dependent.

 What is the long double format on solaris sparc? (big endian I
 suppose, but how many bits for the mantissa and exponent? Does it
 follow IEEE 754?)


 Long double on SPARC is quad precision, which I believe *is* in one of the
 ieee specs. In any case, it has a lot more precision than the extended
 precision found in Intel derived architectures. Hmm, I wonder what ia64
 does?


 HP9000 also has quad precision:
 http://docs.hp.com/en/B3906-90006/ch02s02.html


And it looks like extended precision has disappeared from the latest version
of the standard, IEEE 754-2008 (http://wapedia.mobi/en/IEEE_754-2008), being
replaced by quad precision. I know Intel is also working on quad precision
FPU's, so I think that is where things are heading.

Chuck


Re: [Numpy-discussion] Solaris Sparc build broken

2009-11-04 Thread Michael Droettboom
David Cournapeau wrote:
 On Thu, Nov 5, 2009 at 2:15 AM, Michael Droettboom md...@stsci.edu wrote:
   
 I'm getting the following from r7603 on Solaris Sparc -- somehow related
 to not having a long double version of nextafter available.  I realise
 not everyone has access to (or is dependent on) this platform, so I'm
 willing to help in whatever way I can, I'm just not sure I understand
 the change yet.
 

 The only way to implement nextafter that I know of requires knowing
 the exact representation of the floating point number, and long double
 is unfortunately platform dependent.

 What is the long double format on solaris sparc? (big endian I
 suppose, but how many bits for the mantissa and exponent? Does it
 follow IEEE 754?)
   
I honestly don't know -- I've never had to use them.  It would be great 
to solve this properly but it's difficult to find definitive information 
about these things.

Assuming we can't solve this the right way before the next release, 
would it be possible for this to raise a runtime NotImplemented error 
(by not defining the LONGDOUBLE_nextafter ufunc) rather than raising a 
compiler error which prevents the build from completing?

Mike

-- 
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA



[Numpy-discussion] converting discrete data to unique integers

2009-11-04 Thread David Warde-Farley
Hi,

Suppose I have an array 'd'

In [75]: d
Out[75]:
array(['parrot', 'parrot', 'dog', 'cat', 'parrot', 'dog', 'parrot', 'cat',
       'dog', 'dog', 'dog', 'cat', 'cat', 'dog', 'cat', 'parrot', 'cat',
       'cat', 'dog', 'parrot', 'parrot', 'parrot', 'cat', 'dog', 'parrot',
       'dog', 'dog', 'dog', 'dog', 'parrot', 'parrot', 'cat', 'dog',
       'parrot', 'cat', 'parrot', 'cat', 'dog', 'parrot', 'cat', 'parrot',
       'cat', 'parrot', 'parrot', 'parrot', 'parrot', 'dog', 'cat',
       'parrot', 'cat'],
      dtype='|S6')

I'd like to map every unique element (these could be strings, objects,  
or already ints) to a unique integer between 0 and len(unique(d)) - 1.

The solution I've come up with is

In [76]: uniqueind, vectorind = (d == unique(d)[:, newaxis]).nonzero()

In [77]: myints = uniqueind[argsort(vectorind)]

But I wonder if there's a better way to do this. Anyone ever run into  
this problem before?

David
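David's broadcasting solution runs as posted; a minimal self-contained sketch with a shorter array:

```python
import numpy as np

d = np.array(['parrot', 'dog', 'cat', 'parrot', 'cat'])

# Compare every element of d against every unique value (broadcasting to
# a len(unique) x len(d) boolean matrix), then use argsort to put the
# matched unique-value indices back into d's original element order.
uniqueind, vectorind = (d == np.unique(d)[:, np.newaxis]).nonzero()
myints = uniqueind[np.argsort(vectorind)]
# unique(d) is ['cat', 'dog', 'parrot'], so d maps to [2, 1, 0, 2, 0]
```

It works, but builds an intermediate boolean matrix of size len(unique(d)) * len(d), which is why a more direct route is worth asking for.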


Re: [Numpy-discussion] converting discrete data to unique integers

2009-11-04 Thread Alan G Isaac
On 11/4/2009 3:09 PM, David Warde-Farley wrote:
 I'd like to map every unique element (these could be strings, objects,
 or already ints) to a unique integer between 0 and len(unique(d)) - 1.

mymap = dict((k,v) for v,k in enumerate(set(a)))

fwiw,
Alan Isaac



Re: [Numpy-discussion] converting discrete data to unique integers

2009-11-04 Thread David Warde-Farley
On 4-Nov-09, at 3:09 PM, David Warde-Farley wrote:

 But I wonder if there's a better way to do this. Anyone ever run into
 this problem before?

Obviously I find the answer right after I hit send. unique(d,  
return_inverse=True).

Sorry for the noise.

David
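The one-liner David found, as a runnable sketch:

```python
import numpy as np

d = np.array(['parrot', 'dog', 'cat', 'parrot', 'cat'])

# return_inverse=True yields the sorted unique values plus, for each
# element of d, its index into that unique array -- exactly the integer
# codes 0 .. len(unique(d)) - 1 asked for at the top of the thread.
uniques, codes = np.unique(d, return_inverse=True)
# uniques -> ['cat', 'dog', 'parrot'], codes -> [2, 1, 0, 2, 0]
```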


Re: [Numpy-discussion] converting discrete data to unique integers

2009-11-04 Thread Robert Kern
On Wed, Nov 4, 2009 at 14:21, Alan G Isaac ais...@american.edu wrote:
 On 11/4/2009 3:09 PM, David Warde-Farley wrote:
 I'd like to map every unique element (these could be strings, objects,
 or already ints) to a unique integer between 0 and len(unique(d)) - 1.

 mymap = dict((k,v) for v,k in enumerate(set(a)))

I'd toss in a sorted() just to keep the mapping stable.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco
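Alan's dict comprehension with Robert's sorted() folded in might look like this (plain Python, no numpy needed):

```python
# sorted() makes the value -> code mapping stable: any input drawn from
# the same label set produces the same dictionary.
a = ['parrot', 'dog', 'cat', 'parrot', 'cat']
mymap = dict((k, v) for v, k in enumerate(sorted(set(a))))
codes = [mymap[x] for x in a]   # expand back to one code per element
# mymap -> {'cat': 0, 'dog': 1, 'parrot': 2}, codes -> [2, 1, 0, 2, 0]
```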


Re: [Numpy-discussion] converting discrete data to unique integers

2009-11-04 Thread David Warde-Farley
Thanks Alan and Robert, I probably should have mentioned that I was  
interested in obtaining the corresponding integer for each value in  
the array d, in which case the dictionary bit works but would require  
a further loop to expand.

On 4-Nov-09, at 3:22 PM, Robert Kern wrote:

 I'd toss in a sorted() just to keep the mapping stable.

Good point. With the return_inverse solution, is unique() guaranteed  
to give back the same array of unique values in the same (presumably  
sorted) order? That is, for two arrays A and B which have elements  
only drawn from a set S, is all(unique(A) == unique(B)) guaranteed?  
The code is quite clever and a bit hard to follow, but it *looks*  
like it will provide a stable mapping since it's using a sort.

David


Re: [Numpy-discussion] converting discrete data to unique integers

2009-11-04 Thread josef . pktd
On Wed, Nov 4, 2009 at 3:57 PM, David Warde-Farley d...@cs.toronto.edu wrote:
 Thanks Alan and Robert, I probably should have mentioned that I was
 interested in obtaining the corresponding integer for each value in
 the array d, in which case the dictionary bit works but would require
 a further loop to expand.

 On 4-Nov-09, at 3:22 PM, Robert Kern wrote:

 I'd toss in a sorted() just to keep the mapping stable.

 Good point. With the return_inverse solution, is unique() guaranteed
 to give back the same array of unique values in the same (presumably
 sorted) order? That is, for two arrays A and B which have elements
 only drawn from a set S, is all(unique(A) == unique(B)) guaranteed?
 The code is quite clever and a bit hard to follow, but it *looks*
 like it will provide a stable mapping since it's using a sort.

I looked at it some time ago, and from what I remember, the sort
is done if return_inverse=True but for some codepath it uses
set.

So, you would need to check which version includes the sort.

Josef


 David



Re: [Numpy-discussion] converting discrete data to unique integers

2009-11-04 Thread Neil Crighton
 josef.pktd at gmail.com writes:

  Good point. With the return_inverse solution, is unique() guaranteed
  to give back the same array of unique values in the same (presumably
  sorted) order? That is, for two arrays A and B which have elements
  only drawn from a set S, is all(unique(A) == unique(B)) guaranteed?
   The code is quite clever and a bit hard to follow, but it *looks*
  like it will provide a stable mapping since it's using a sort.
 
 I looked at it some time ago, and from what I remember, the sort
 is done if return_inverse=True but for some codepath it uses
 set.
 

unique always sorts, even if it uses set. So I'm pretty sure 
all(unique(A) == unique(B)) is guaranteed.


Neil

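Neil's guarantee is easy to check directly; a small sketch:

```python
import numpy as np

# Because unique() sorts its output, two arrays drawn from the same
# label set always produce identical codebooks, so the integer codes
# from return_inverse are stable across datasets.
A = np.array(['dog', 'cat', 'dog', 'parrot'])
B = np.array(['parrot', 'parrot', 'cat', 'dog'])
stable = bool((np.unique(A) == np.unique(B)).all())
```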


Re: [Numpy-discussion] persistent ImportError: No module named multiarray when moving cPickle files between machines

2009-11-04 Thread Bruce Southey
On Wed, Nov 4, 2009 at 8:06 AM, Reckoner recko...@gmail.com wrote:
 Here's an example:

 On winxp 64-bit:

 Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32
 Type help, copyright, credits or license for more information.
 import numpy
 import cPickle
 a = numpy.eye(10)
 cPickle.dump(a,open('from32bitxp.pkl','w'))
 import numpy.core.multiarray
 numpy.__version__
 '1.0.4'


 On linux 64 bit:

 Python 2.5.4 (r254:67916, Feb  5 2009, 19:52:35)
 [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2
 Type help, copyright, credits or license for more information.
 import numpy
 import cPickle
 cPickle.load(open('from32bitxp.pkl'))
 Traceback (most recent call last):
  File stdin, line 1, in module
 ImportError: No module named multiarray
 numpy.__version__
 '1.2.1'
 import numpy.core.multiarray


 Note that I transfer the from32bitxp file from the winxp32 machine to
 the linux host. Also, I've tried this with version 1.3 on winxp and
 get the same problem on the linux host.

 Here's more interesting info:

 On linux:

 a = numpy.eye(10)
 cPickle.dump(a,open('from64bitLinux.pkl','w'))

 upon transferring the file to winxp 32 and on winxp32:

 cPickle.load(open('from64bitLinux.pkl'))

 See? No problem going from linux to winxp32; but problems going the other way.

 Please let me know if you need more info on this.

 Any help appreciated.

 On Tue, Nov 3, 2009 at 4:55 AM, Bruce Southey bsout...@gmail.com wrote:
 On Mon, Nov 2, 2009 at 6:31 PM, Reckoner recko...@gmail.com wrote:
 thanks for the suggestion! I will look into it. The other thing is
 that the numpy arrays in question are actually embedded in another
 object. When I convert the numpy arrays into plain lists, and then
 cPickle them, there is no problem with any of the larger objects. That
 is the way we are currently working around this issue.

 Thanks again.

 On Mon, Nov 2, 2009 at 2:43 PM, Bruce Southey bsout...@gmail.com wrote:
 On Mon, Nov 2, 2009 at 2:42 PM, Reckoner recko...@gmail.com wrote:
 Anybody have any ideas here?

 Otherwise, I'm thinking this should be posted to the numpy bugs list.
 What's the best way to report a bug of this kind?

 Thanks!

 On Fri, Oct 30, 2009 at 5:48 PM, Reckoner recko...@gmail.com wrote:
 Robert Kern wrote:
 You can import numpy.core.multiarray on both machines?

 Yes. For each machine separately, you can cPickle files with numpy
 arrays without problems loading/dumping. The problem comes from
 transferring the win32 cPickle'd files to Linux 64 bit and then trying
 to load them. Transferring cPickle'd files that do *not* have numpy
 arrays work as expected. In other words, cPICKLE'd lists transfer fine
 back and forth between the two machines. In fact, we currently get
 around this problem by converting the numpy arrays to lists,
 transferring them, and then re-numpy-ing them on the respective hosts

 thanks.


 On Fri, Oct 30, 2009 at 11:13 AM, Reckoner recko...@gmail.com wrote:
 Hi,

 % python -c 'import numpy.core.multiarray'

 works just fine, but when I try to load a file that I have transferred
 from another machine running Windows to one running Linux, I get:

 %  python -c 'import cPickle;a=cPickle.load(open(matrices.pkl))'

 Traceback (most recent call last):
  File string, line 1, in module
 ImportError: No module named multiarray

 otherwise, cPickle works normally when transferring files that *do*
 not contain numpy arrays.

 I am using version 1.2 on both machines. It's not so easy for me to
 change versions, by the way, since this is the version that my working
 group has decided to standardize on for this effort.


 Any help appreciated.




 Have you have tried the other Cookbook approaches:
 http://www.scipy.org/Cookbook/InputOutput
 Like using numpy's own array io functions - load/save(z)?
 (seems to work between 64-bit Windows 7 and 64-bit Linux - each has
 different numpy versions)

 Bruce

 Can you provide a small self-contained example of the
 problem, including object creation, especially as your example does not
 import numpy?

 Really you have to start at the beginning (like pickling and
 transferring numpy arrays) and then increase the complexity to include
 the object.


 Bruce



Hi,
I did not see the file 'from32bitxp.pkl'. It would really help if you
can provide the full example that includes the creation of the
complete object so at least people could try doing the same process
with the object that you 
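Bruce's Cookbook pointer (numpy's own load/save) sidesteps pickle's module-path record entirely; a minimal sketch follows. Note also, as a hedged aside not raised in the thread, that the examples open pickle files in text mode ('w'), which on Windows can itself corrupt the stream, so binary mode ('wb'/'rb') is worth trying first:

```python
import os
import tempfile

import numpy as np

# The .npy format records dtype and byte order explicitly, so a file
# written on 32-bit Windows loads unchanged on 64-bit Linux.
a = np.eye(10)
path = os.path.join(tempfile.mkdtemp(), 'matrix.npy')
np.save(path, a)
b = np.load(path)
roundtrip_ok = bool((a == b).all())
```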

Re: [Numpy-discussion] Solaris Sparc build broken

2009-11-04 Thread David Cournapeau
On Thu, Nov 5, 2009 at 4:55 AM, Michael Droettboom md...@stsci.edu wrote:
 David Cournapeau wrote:
 On Thu, Nov 5, 2009 at 2:15 AM, Michael Droettboom md...@stsci.edu wrote:

 I'm getting the following from r7603 on Solaris Sparc -- somehow related
 to not having a long double version of nextafter available.  I realise
 not everyone has access to (or is dependent on) this platform, so I'm
 willing to help in whatever way I can, I'm just not sure I understand
 the change yet.


 The only way to implement nextafter that I know of requires knowing
 the exact representation of the floating point number, and long double
 is unfortunately platform dependent.

 What is the long double format on solaris sparc? (big endian I
 suppose, but how many bits for the mantissa and exponent? Does it
 follow IEEE 754?)

 I honestly don't know -- I've never had to use them.  It would be great
 to solve this properly but it's difficult to find definitive information
 about these things.

 Assuming we can't solve this the right way before the next release,
 would it be possible for this to raise a runtime NotImplemented error
 (by not defining the LONGDOUBLE_nextafter ufunc) rather than raising a
 compiler error which prevents the build from completing?

To be honest, I thought this condition would never arise (I am quite
surprised that solaris does not have nextafter - both BSD and GNU libc
use the Sun implementation for this function).

If this is not fixed within the code freeze (16th november), the
feature will be removed altogether for 1.4.0. I don't want to go down
the road of different feature set for different platforms.

David
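What David describes, needing the exact representation, comes from how nextafter is usually implemented: step the float's integer bit pattern by one. A sketch for plain doubles (illustrative only; the hard case in the thread is long double, whose bit layout varies by platform):

```python
import struct

import numpy as np

# nextafter(1.0, 2.0) is the smallest double strictly greater than 1.0;
# the gap is exactly the machine epsilon at 1.0.
x = np.nextafter(1.0, 2.0)

# The classic implementation: reinterpret the double as a 64-bit integer
# and add 1 -- which only works if you know the type's exact bit layout.
bits = struct.unpack('<q', struct.pack('<d', 1.0))[0]
neighbour = struct.unpack('<d', struct.pack('<q', bits + 1))[0]
```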


Re: [Numpy-discussion] persistent ImportError: No module named multiarray when moving cPickle files between machines

2009-11-04 Thread Reckoner
Bruce :

The file in question was created as shown in the prior e-mail. Here it is again:

 cPickle.dump(a,open('from32bitxp.pkl','w'))


Thanks!


On Wed, Nov 4, 2009 at 3:56 PM, Bruce Southey bsout...@gmail.com wrote:
 On Wed, Nov 4, 2009 at 8:06 AM, Reckoner recko...@gmail.com wrote:
 Here's an example:

 On winxp 64-bit:

 Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32
 Type help, copyright, credits or license for more information.
 import numpy
 import cPickle
 a = numpy.eye(10)
 cPickle.dump(a,open('from32bitxp.pkl','w'))
 import numpy.core.multiarray
 numpy.__version__
 '1.0.4'


 On linux 64 bit:

 Python 2.5.4 (r254:67916, Feb  5 2009, 19:52:35)
 [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2
 Type help, copyright, credits or license for more information.
 import numpy
 import cPickle
 cPickle.load(open('from32bitxp.pkl'))
 Traceback (most recent call last):
  File stdin, line 1, in module
 ImportError: No module named multiarray
 numpy.__version__
 '1.2.1'
 import numpy.core.multiarray


 Note that I transfer the from32bitxp file from the winxp32 machine to
 the linux host. Also, I've tried this with version 1.3 on winxp and
 get the same problem on the linux host.

 Here's more interesting info:

 On linux:

 a = numpy.eye(10)
 cPickle.dump(a,open('from64bitLinux.pkl','w'))

 upon transferring the file to winxp 32 and on winxp32:

 cPickle.load(open('from64bitLinux.pkl'))

 See? No problem going from linux to winxp32; but problems going the other 
 way.

 Please let me know if you need more info on this.

 Any help appreciated.

 On Tue, Nov 3, 2009 at 4:55 AM, Bruce Southey bsout...@gmail.com wrote:
 On Mon, Nov 2, 2009 at 6:31 PM, Reckoner recko...@gmail.com wrote:
 thanks for the suggestion! I will look into it. The other thing is
 that the numpy arrays in question are actually embedded in another
 object. When I convert the numpy arrays into plain lists, and then
 cPickle them, there is no problem with any of the larger objects. That
 is the way we are currently working around this issue.

 Thanks again.

 On Mon, Nov 2, 2009 at 2:43 PM, Bruce Southey bsout...@gmail.com wrote:
 On Mon, Nov 2, 2009 at 2:42 PM, Reckoner recko...@gmail.com wrote:
 Anybody have any ideas here?

 Otherwise, I'm thinking this should be posted to the numpy bugs list.
 What's the best way to report a bug of this kind?

 Thanks!

 On Fri, Oct 30, 2009 at 5:48 PM, Reckoner recko...@gmail.com wrote:
 Robert Kern wrote:
 You can import numpy.core.multiarray on both machines?

 Yes. For each machine separately, you can cPickle files with numpy
 arrays without problems loading/dumping. The problem comes from
 transferring the win32 cPickle'd files to Linux 64 bit and then trying
 to load them. Transferring cPickle'd files that do *not* have numpy
 arrays work as expected. In other words, cPICKLE'd lists transfer fine
 back and forth between the two machines. In fact, we currently get
 around this problem by converting the numpy arrays to lists,
 transferring them, and then re-numpy-ing them on the respective hosts

 thanks.


 On Fri, Oct 30, 2009 at 11:13 AM, Reckoner recko...@gmail.com wrote:
 Hi,

 % python -c 'import numpy.core.multiarray'

 works just fine, but when I try to load a file that I have transferred
 from another machine running Windows to one running Linux, I get:

 %  python -c 'import cPickle;a=cPickle.load(open(matrices.pkl))'

 Traceback (most recent call last):
  File string, line 1, in module
 ImportError: No module named multiarray

 otherwise, cPickle works normally when transferring files that *do*
 not contain numpy arrays.

 I am using version 1.2 on both machines. It's not so easy for me to
 change versions, by the way, since this is the version that my working
 group has decided to standardize on for this effort.


 Any help appreciated.




 Have you have tried the other Cookbook approaches:
 http://www.scipy.org/Cookbook/InputOutput
 Like using numpy's own array io functions - load/save(z)?
 (seems to work between 64-bit Windows 7 and 64-bit Linux - each has
 different numpy versions)

 Bruce

 Can you provide a small self-contained example of the
 problem, including object creation, especially as your example does not
 import numpy?

 Really you have to start at the beginning (like pickling and
 transferring numpy arrays) and then increase the complexity to include
 the object.


 Bruce



 Hi,
 I 

Re: [Numpy-discussion] persistent ImportError: No module named multiarray when moving cPickle files between machines

2009-11-04 Thread Reckoner
FYI, I uploaded the two files in question to the numpy ticket

http://projects.scipy.org/numpy/ticket/1284

Thanks!


On Wed, Nov 4, 2009 at 3:56 PM, Bruce Southey bsout...@gmail.com wrote:
 On Wed, Nov 4, 2009 at 8:06 AM, Reckoner recko...@gmail.com wrote:
 Here's an example:

 On winxp 64-bit:

 Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32
 Type help, copyright, credits or license for more information.
 import numpy
 import cPickle
 a = numpy.eye(10)
 cPickle.dump(a,open('from32bitxp.pkl','w'))
 import numpy.core.multiarray
 numpy.__version__
 '1.0.4'


 On linux 64 bit:

 Python 2.5.4 (r254:67916, Feb  5 2009, 19:52:35)
 [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2
 Type help, copyright, credits or license for more information.
 import numpy
 import cPickle
 cPickle.load(open('from32bitxp.pkl'))
 Traceback (most recent call last):
  File stdin, line 1, in module
 ImportError: No module named multiarray
 numpy.__version__
 '1.2.1'
 import numpy.core.multiarray


 Note that I transfer the from32bitxp file from the winxp32 machine to
 the linux host. Also, I've tried this with version 1.3 on winxp and
 get the same problem on the linux host.

 Here's more interesting info:

 On linux:

 a = numpy.eye(10)
 cPickle.dump(a,open('from64bitLinux.pkl','w'))

 upon transferring the file to winxp 32 and on winxp32:

 cPickle.load(open('from64bitLinux.pkl'))

 See? No problem going from linux to winxp32; but problems going the other 
 way.

 Please let me know if you need more info on this.

 Any help appreciated.

 On Tue, Nov 3, 2009 at 4:55 AM, Bruce Southey bsout...@gmail.com wrote:
 On Mon, Nov 2, 2009 at 6:31 PM, Reckoner recko...@gmail.com wrote:
 thanks for the suggestion! I will look into it. The other thing is
 that the numpy arrays in question are actually embedded in another
 object. When I convert the numpy arrays into plain lists, and then
 cPickle them, there is no problem with any of the larger objects. That
 is the way we are currently working around this issue.

 Thanks again.

 On Mon, Nov 2, 2009 at 2:43 PM, Bruce Southey bsout...@gmail.com wrote:
 On Mon, Nov 2, 2009 at 2:42 PM, Reckoner recko...@gmail.com wrote:
 Anybody have any ideas here?

 Otherwise, I'm thinking this should be posted to the numpy bugs list.
 What's the best way to report a bug of this kind?

 Thanks!

 On Fri, Oct 30, 2009 at 5:48 PM, Reckoner recko...@gmail.com wrote:
 Robert Kern wrote:
 You can import numpy.core.multiarray on both machines?

 Yes. For each machine separately, you can cPickle files with numpy
 arrays without problems loading/dumping. The problem comes from
 transferring the win32 cPickle'd files to Linux 64 bit and then trying
 to load them. Transferring cPickle'd files that do *not* have numpy
 arrays work as expected. In other words, cPICKLE'd lists transfer fine
 back and forth between the two machines. In fact, we currently get
 around this problem by converting the numpy arrays to lists,
 transferring them, and then re-numpy-ing them on the respective hosts

 thanks.


 On Fri, Oct 30, 2009 at 11:13 AM, Reckoner recko...@gmail.com wrote:
 Hi,

 % python -c 'import numpy.core.multiarray'

 works just fine, but when I try to load a file that I have transferred
 from another machine running Windows to one running Linux, I get:

 %  python -c 'import cPickle;a=cPickle.load(open(matrices.pkl))'

 Traceback (most recent call last):
  File string, line 1, in module
 ImportError: No module named multiarray

 otherwise, cPickle works normally when transferring files that *do*
 not contain numpy arrays.

 I am using version 1.2 on both machines. It's not so easy for me to
 change versions, by the way, since this is the version that my working
 group has decided to standardize on for this effort.


 Any help appreciated.




 Have you have tried the other Cookbook approaches:
 http://www.scipy.org/Cookbook/InputOutput
 Like using numpy's own array io functions - load/save(z)?
 (seems to work between 64-bit Windows 7 and 64-bit Linux - each has
 different numpy versions)

 Bruce

 Can you provide a small self-contained example of the
 problem, including object creation, especially as your example does not
 import numpy?

 Really you have to start at the beginning (like pickling and
 transferring numpy arrays) and then increase the complexity to include
 the object.


 Bruce



 Hi,
 I did not see the file