Re: [Numpy-discussion] What is a numpy.long type?

2013-08-23 Thread David Cournapeau
On Fri, Aug 23, 2013 at 6:02 AM, Chris Barker - NOAA Federal 
chris.bar...@noaa.gov wrote:

 Hi folks,

 I had thought that maybe a numpy.long dtype was a system
 (compiler)-native C long.

 But on both 32 and 64 bit python on OS-X, it seems to be 64 bit. I'm
 pretty sure that on OS-X 32 bit, a C long is 32 bits. (gcc, of course)

 I don't have other machines to test on, but as the MS compilers and
 gcc do different things with a C long on 64 bit platforms, I'm curious
 how numpy defines it.


 In general, I prefer the "explicit is better than implicit" approach
 and use, e.g., int32 and int64. However, in this case, we are using
 Cython, and calling C code that uses long -- so what I want is
 whatever the compiler thinks is a long -- is there a way to do that
 without my own kludgy platform-dependent code?

 I note that the Cython numpy.pxd assigns:

 ctypedef signed long  npy_long

 Is that a bug? (or maybe npy_long is not supposed to match np.long?)


npy_long is indeed just an alias to C long, np.long is an alias to python's
long:

arch -32 python -c "import numpy as np; print np.dtype(np.int); print
np.dtype(np.long)"
int32
int64

arch -64 python -c "import numpy as np; print np.dtype(np.int); print
np.dtype(np.long)"
int64
int64

and python -c "import numpy as np; print np.long is long" will print True

All this is on python 2.7, I am not sure how/if that changes on python 3
(that consolidated python int/long).
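If you just want the dtype of the compiler's long, without hand-rolled
platform checks, the one-letter type code 'l' gives it directly (np.int_ is
the same dtype under another name) -- a minimal check, nothing
version-specific assumed:

```
import numpy as np

# 'l' is numpy's type code for the C long of the compiler numpy was built
# with, so its size follows the platform: 4 bytes on 32-bit builds (and
# 64-bit MSVC), 8 bytes on LP64 systems
print(np.dtype('l'))
print(np.dtype('l').itemsize)
print(np.dtype(np.int_) == np.dtype('l'))   # True: int_ is the C long
```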

David


 -Chris




 --

 Christopher Barker, Ph.D.
 Oceanographer

 Emergency Response Division
 NOAA/NOS/ORR(206) 526-6959   voice
 7600 Sand Point Way NE   (206) 526-6329   fax
 Seattle, WA  98115   (206) 526-6317   main reception

 chris.bar...@noaa.gov
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Warnings not raised by np.log in 32 bit build on Windows

2013-08-23 Thread Charles R Harris
These things may depend on how the compiler implements various calls. Some
errors went the other way with Julian's SIMD work, i.e., errors getting set
that were not set before. I'm not sure what can be done about it.


On Thu, Aug 22, 2013 at 8:32 PM, Warren Weckesser 
warren.weckes...@gmail.com wrote:

 I'm investigating a test error in scipy 0.13.0 beta 1 that was
 reported by Christoph Gohlke.  The scipy issue is here:
 https://github.com/scipy/scipy/issues/2771

 I don't have a Windows environment to test it myself, but Christoph
 reported that this code:

 ```
 import numpy as np

 data = np.array([-0.375, -0.25, 0.0])
 s = np.log(data)
 ```

 does not generate two RuntimeWarnings when it is run with numpy 1.7.1
 in a 32 bit Windows 8 environment (numpy 1.7.1 compiled with Visual
 Studio compilers and Intel's MKL).  In 64 bit Windows, and in 64 bit
 linux, it generates two RuntimeWarnings.

 The inconsistency seems like a bug, possibly this one:
 https://github.com/numpy/numpy/issues/1958.

 Can anyone check if this also occurs in the development branch?

 Warren
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Warnings not raised by np.log in 32 bit build on Windows

2013-08-23 Thread Alan G Isaac
On 8/22/2013 10:32 PM, Warren Weckesser wrote:
 Christoph
 reported that this code:

 ```
 import numpy as np

 data = np.array([-0.375, -0.25, 0.0])
 s = np.log(data)
 ```

 does not generate two RuntimeWarnings when it is run with numpy 1.7.1
 in a 32 bit Windows 8 environment (numpy 1.7.1 compiled with Visual
 Studio compilers and Intel's MKL).


Not sure if you want other (not Win 8) reports related to this,
but ... I'm appending a session (no RuntimeWarnings raised) below.  The OS
is Win 7 (64-bit).

Alan Isaac


Enthought Python Distribution -- www.enthought.com
Version: 7.3-2 (32-bit)

Python 2.7.3 |EPD 7.3-2 (32-bit)| (default, Apr 12 2012, 14:30:37) [MSC v.1500
32 bit (Intel)] on win32
Type "credits", "demo" or "enthought" for more information.
>>> import numpy as np
>>> np.__version__
'1.7.1'
>>> data = np.array([-0.375, -0.25, 0.0])
>>> s = np.log(data)
>>>


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Warnings not raised by np.log in 32 bit build on Windows

2013-08-23 Thread Nathaniel Smith
Probably the thing to do for reliable behaviour is to decide on the
behaviour we want and then implement it by hand. I.e., either clear the
FP flags inside the ufunc loop (if we decide that log shouldn't raise a
warning), or else check for nan and set the invalid flag ourselves.
(Checking for nan should be much cheaper than computing log, I think, so
this should be okay speed-wise?)
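A Python-level sketch of what the second option would mean semantically (the
real fix would live in the C ufunc loop; np.errstate here merely stands in
for clearing the flags):

```
import warnings
import numpy as np

def log_with_explicit_flags(x):
    x = np.asarray(x, dtype=float)
    # ignore whatever flags the platform's C log happens to set...
    with np.errstate(all='ignore'):
        out = np.log(x)
    # ...then raise the warnings ourselves, consistently across platforms
    if np.isnan(out).any():
        warnings.warn("invalid value encountered in log", RuntimeWarning)
    if np.isinf(out).any() and np.isfinite(x).all():
        warnings.warn("divide by zero encountered in log", RuntimeWarning)
    return out
```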
On 23 Aug 2013 13:16, Charles R Harris charlesr.har...@gmail.com wrote:

 These things may depend on how the compiler implements various calls. Some
 errors went the other way with Julian's SIMD work, i.e., errors getting set
 that were not set before. I'm not sure what can be done about it.


 On Thu, Aug 22, 2013 at 8:32 PM, Warren Weckesser 
 warren.weckes...@gmail.com wrote:

 I'm investigating a test error in scipy 0.13.0 beta 1 that was
 reported by Christoph Gohlke.  The scipy issue is here:
 https://github.com/scipy/scipy/issues/2771

 I don't have a Windows environment to test it myself, but Christoph
 reported that this code:

 ```
 import numpy as np

 data = np.array([-0.375, -0.25, 0.0])
 s = np.log(data)
 ```

 does not generate two RuntimeWarnings when it is run with numpy 1.7.1
 in a 32 bit Windows 8 environment (numpy 1.7.1 compiled with Visual
 Studio compilers and Intel's MKL).  In 64 bit Windows, and in 64 bit
 linux, it generates two RuntimeWarnings.

 The inconsistency seems like a bug, possibly this one:
 https://github.com/numpy/numpy/issues/1958.

 Can anyone check if this also occurs in the development branch?

 Warren
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion



 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Warnings not raised by np.log in 32 bit build on Windows

2013-08-23 Thread Charles R Harris
On Fri, Aug 23, 2013 at 6:29 AM, Nathaniel Smith n...@pobox.com wrote:

 Probably the thing to do for reliable behaviour is to decide on the
 behaviour we want and then implement it by hand. I.e., either clear the
 FP flags inside the ufunc loop (if we decide that log shouldn't raise a
 warning), or else check for nan and set the invalid flag ourselves.
 (Checking for nan should be much cheaper than computing log, I think, so
 this should be okay speed-wise?)
 On 23 Aug 2013 13:16, Charles R Harris charlesr.har...@gmail.com
 wrote:

 These things may depend on how the compiler implements various calls.
 Some errors went the other way with Julian's SIMD work, i.e., errors
 getting set that were not set before. I'm not sure what can be done about
 it.


 On Thu, Aug 22, 2013 at 8:32 PM, Warren Weckesser 
 warren.weckes...@gmail.com wrote:

 I'm investigating a test error in scipy 0.13.0 beta 1 that was
 reported by Christoph Gohlke.  The scipy issue is here:
 https://github.com/scipy/scipy/issues/2771

 I don't have a Windows environment to test it myself, but Christoph
 reported that this code:

 ```
 import numpy as np

 data = np.array([-0.375, -0.25, 0.0])
 s = np.log(data)
 ```

 does not generate two RuntimeWarnings when it is run with numpy 1.7.1
 in a 32 bit Windows 8 environment (numpy 1.7.1 compiled with Visual
 Studio compilers and Intel's MKL).  In 64 bit Windows, and in 64 bit
 linux, it generates two RuntimeWarnings.

 The inconsistency seems like a bug, possibly this one:
 https://github.com/numpy/numpy/issues/1958.

 Can anyone check if this also occurs in the development branch?


ISTR that it can also depend on instruction reordering in the hardware.
There was a bug opened on gcc for this a couple of years ago, but I did not
have the impression that it was going to get fixed any time soon.
Explicitly checking may add noticeable overhead and probably needs to be
profiled if we go that way.
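For a first impression of that overhead, a throwaway comparison of the ufunc
against an explicit nan sweep (numbers will vary with machine and numpy
build):

```
import timeit
import numpy as np

x = np.linspace(0.1, 10.0, 1000000)

t_log = timeit.timeit(lambda: np.log(x), number=20)
t_chk = timeit.timeit(lambda: np.isnan(x).any(), number=20)
print('np.log       : %.3f s' % t_log)
print('np.isnan+any : %.3f s' % t_chk)
```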

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] RAM problem during code execution - Numpy arrays

2013-08-23 Thread Francesc Alted
Hi José,

The code is somewhat longish for a pure visual inspection, but my advice is
that you install memory_profiler (
https://pypi.python.org/pypi/memory_profiler).  This will help you
determine which line or lines are hogging the memory the most.
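The usual pattern, per the memory_profiler docs, is to decorate the suspect
function and run the script under the module -- a minimal sketch, with
main() standing in for your loop:

```
# save as demo.py and run:  python -m memory_profiler demo.py
import numpy as np
from memory_profiler import profile

@profile            # prints a per-line memory report for this function
def main():
    a = np.zeros((1000, 1000))
    b = np.append(a, a)    # np.append copies the whole array every call
    del b

if __name__ == '__main__':
    main()
```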

Saludos,
Francesc

On Fri, Aug 23, 2013 at 3:58 PM, Josè Luis Mietta 
joseluismie...@yahoo.com.ar wrote:

 Hi experts. I need your help with a RAM problem during execution of my
 script.
 I wrote the code below. I use SAGE. Within 1-2 hours of execution time the
 RAM of my laptop (8 GB) fills up and the system crashes:

 from scipy.stats import uniform
 import numpy as np

 cant_de_cadenas = [700, 800, 900]

 cantidad_de_cadenas = np.array([])
 for k in cant_de_cadenas:
     cantidad_de_cadenas = np.append(cantidad_de_cadenas, k)

 cantidad_de_cadenas = np.transpose(cantidad_de_cadenas)

 b = 10
 h = b
 Longitud = 1
 numero_experimentos = 150

 densidad_de_cadenas = cantidad_de_cadenas/(b**2)

 prob_perc = np.array([])
 tiempos = np.array([])
 S_int = np.array([])
 S_medio = np.array([])
 desviacion_standard = np.array([])
 desviacion_standard_nuevo = np.array([])
 anisotropia_macroscopica_porcentual = np.array([])
 componente_y = np.array([])
 componente_x = np.array([])

 import time
 for N in cant_de_cadenas:

     empieza = time.clock()

     PERCOLACION = np.array([])

     size_medio_intuitivo = np.array([])
     size_medio_nuevo = np.array([])
     std_dev_size_medio_intuitivo = np.array([])
     std_dev_size_medio_nuevo = np.array([])
     comp_y = np.array([])
     comp_x = np.array([])

     for u in xrange(numero_experimentos):

         perco = False

         array_x1 = uniform.rvs(loc=-b/2, scale=b, size=N)
         array_y1 = uniform.rvs(loc=-h/2, scale=h, size=N)
         array_angle = uniform.rvs(loc=-0.5*(np.pi), scale=np.pi, size=N)

         array_pendiente_x = 1./np.tan(array_angle)

         random = uniform.rvs(loc=-1, scale=2, size=N)
         lambda_sign = np.zeros([N])
         for t in xrange(N):
             if random[t] < 0:
                 lambda_sign[t] = -1
             else:
                 lambda_sign[t] = 1
         array_lambdas = (lambda_sign*Longitud)/np.sqrt(1+array_pendiente_x**2)

         array_x2 = array_x1 + array_lambdas*array_pendiente_x
         array_y2 = array_y1 + array_lambdas*1

         array_x1 = np.append(array_x1, [-b/2, b/2, -b/2, -b/2])
         array_y1 = np.append(array_y1, [-h/2, -h/2, -h/2, h/2])
         array_x2 = np.append(array_x2, [-b/2, b/2, b/2, b/2])
         array_y2 = np.append(array_y2, [h/2, h/2, -h/2, h/2])

         M = np.zeros([N+4, N+4])

         for j in xrange(N+4):
             if j > 0:
                 x_A1B1 = array_x2[j]-array_x1[j]
                 y_A1B1 = array_y2[j]-array_y1[j]
                 x_A1A2 = array_x1[0:j]-array_x1[j]
                 y_A1A2 = array_y1[0:j]-array_y1[j]
                 x_A2A1 = -1*x_A1A2
                 y_A2A1 = -1*y_A1A2
                 x_A2B2 = array_x2[0:j]-array_x1[0:j]
                 y_A2B2 = array_y2[0:j]-array_y1[0:j]
                 x_A1B2 = array_x2[0:j]-array_x1[j]
                 y_A1B2 = array_y2[0:j]-array_y1[j]
                 x_A2B1 = array_x2[j]-array_x1[0:j]
                 y_A2B1 = array_y2[j]-array_y1[0:j]

                 p1 = x_A1B1*y_A1A2 - y_A1B1*x_A1A2
                 p2 = x_A1B1*y_A1B2 - y_A1B1*x_A1B2
                 p3 = x_A2B2*y_A2B1 - y_A2B2*x_A2B1
                 p4 = x_A2B2*y_A2A1 - y_A2B2*x_A2A1

                 condicion_1 = p1*p2
                 condicion_2 = p3*p4

                 for k in xrange(j):
                     if condicion_1[k] <= 0 and condicion_2[k] <= 0:
                         M[j, k] = 1
                 del condicion_1
                 del condicion_2

             if j+1 < N+4:
                 x_A1B1 = array_x2[j]-array_x1[j]
                 y_A1B1 = array_y2[j]-array_y1[j]
                 x_A1A2 = array_x1[j+1:]-array_x1[j]
                 y_A1A2 = array_y1[j+1:]-array_y1[j]
                 x_A2A1 = -1*x_A1A2
                 y_A2A1 = -1*y_A1A2
                 x_A2B2 = array_x2[j+1:]-array_x1[j+1:]
                 y_A2B2 = array_y2[j+1:]-array_y1[j+1:]
                 x_A1B2 = array_x2[j+1:]-array_x1[j]
                 y_A1B2 = array_y2[j+1:]-array_y1[j]
                 x_A2B1 = array_x2[j]-array_x1[j+1:]
                 y_A2B1 = array_y2[j]-array_y1[j+1:]

                 p1 = x_A1B1*y_A1A2 - y_A1B1*x_A1A2
                 p2 = x_A1B1*y_A1B2 - y_A1B1*x_A1B2
                 p3 = x_A2B2*y_A2B1 - y_A2B2*x_A2B1
                 p4 = x_A2B2*y_A2A1 - y_A2B2*x_A2A1

                 condicion_1 = p1*p2
                 condicion_2 = p3*p4

                 for k in xrange((N+4)-j-1):
                     if condicion_1[k] <= 0 and condicion_2[k] <= 0:
                         M[j, k+j+1] = 1
                 del condicion_1
                 del condicion_2

         M[N, N+2] = 0
         M[N, N+3] = 0
         M[N+1, N+2] = 0
         M[N+1, N+3] = 0
         M[N+2, N] = 0
         M[N+2, N+1] = 0
         M[N+3, N] = 0
         M[N+3, N+1] = 0


 

Re: [Numpy-discussion] RAM problem during code execution - Numpy arrays

2013-08-23 Thread Benjamin Root
On Fri, Aug 23, 2013 at 10:34 AM, Francesc Alted franc...@continuum.io wrote:

 Hi José,

  The code is somewhat longish for a pure visual inspection, but my advice
  is that you install memory_profiler (
  https://pypi.python.org/pypi/memory_profiler).  This will help you
  determine which line or lines are hogging the memory the most.

 Saludos,
 Francesc

  On Fri, Aug 23, 2013 at 3:58 PM, Josè Luis Mietta 
  joseluismie...@yahoo.com.ar wrote:

  [snip: quoted script]

A lot of the code you have here can be greatly simplified. I would start
with just trying to get rid of appends as much as possible and use
preallocated arrays with np.empty() or np.ones() or the likes.

Ben Root

Re: [Numpy-discussion] What is a numpy.long type?

2013-08-23 Thread Chris Barker - NOAA Federal
On Aug 22, 2013, at 11:57 PM, David Cournapeau courn...@gmail.com wrote:

npy_long is indeed just an alias to C long,


Which means it's likely broken on 32 bit platforms and 64 bit MSVC.

np.long is an alias to python's long:

But python's long is an unlimited type--it can't be mapped to a c type at
all.

arch -32 python -c "import numpy as np; print np.dtype(np.int); print
np.dtype(np.long)"
int32
int64


So this is giving us a 64 bit int--not a bad compromise, but not a python
long--I've got to wonder why the alias is there at all.

arch -64 python -c "import numpy as np; print np.dtype(np.int); print
np.dtype(np.long)"
int64
int64

Same thing on 64 bit.

So while np.long is an alias to python long--it apparently is translated
internally as 64 bit -- everywhere?

So apparently there is no way to get a platform long. (Or, for that
matter, a platform anything else; it's just that there is more consistency
among common platforms for the others.)

-Chris


All this is on python 2.7, I am not sure how/if that changes on python 3
(that consolidated python int/long).

David


 -Chris




 --

 Christopher Barker, Ph.D.
 Oceanographer

 Emergency Response Division
 NOAA/NOS/ORR(206) 526-6959   voice
 7600 Sand Point Way NE   (206) 526-6329   fax
 Seattle, WA  98115   (206) 526-6317   main reception

 chris.bar...@noaa.gov
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] What is a numpy.long type?

2013-08-23 Thread Sebastian Berg
On Fri, 2013-08-23 at 07:59 -0700, Chris Barker - NOAA Federal wrote:
 On Aug 22, 2013, at 11:57 PM, David Cournapeau courn...@gmail.com
 wrote:
 
 
snip
 
  arch -32 python -c "import numpy as np; print np.dtype(np.int);
  print np.dtype(np.long)"
  int32
  int64
 
 
 So this is giving us a 64 bit int--not a bad compromise, but not a
 python long--I've got to wonder why the alias is there at all.
 
It is there because you can't remove it :).
 
  arch -64 python -c "import numpy as np; print np.dtype(np.int);
  print np.dtype(np.long)"
  int64
  int64
  
  
 Same thing on 64 bit.
 
 
 So while np.long is an alias to python long--it apparently is
 translated internally as 64 bit -- everywhere?
 
Not sure how a python long is translated...
 
 So apparently there is no way to get a platform long. ( or, for that
 matter, a platform anything else, it's just that there is more
 consistancy among common platforms for the others)
 
An np.int_ is a platform long, since the python ints are C longs. It is
a bit weird naming, but it is transparent. Check
http://docs.scipy.org/doc/numpy-dev/reference/arrays.scalars.html#built-in-scalar-types

You get everything platform-dependent there, really: `intc` is an int,
`int_` is a long, and you also get `longlong`, as well as `intp`, which is
an ssize_t, etc.
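A quick way to see what these map to on a given box (the sizes printed
depend entirely on the platform and compiler numpy was built with):

```
import numpy as np

# each name follows the C compiler's choice on the current platform
for name in ('intc', 'int_', 'longlong', 'intp'):
    dt = np.dtype(getattr(np, name))
    print('%-8s -> %s (%d bytes)' % (name, dt, dt.itemsize))
```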

- Sebastian

 
 -Chris
 
 
 
 
  All this is on python 2.7, I am not sure how/if that changes on
  python 3 (that consolidated python int/long).
  
  
  David
  
  -Chris
  
  
  
  
  --
  
  Christopher Barker, Ph.D.
  Oceanographer
  
  Emergency Response Division
  NOAA/NOS/ORR(206) 526-6959   voice
  7600 Sand Point Way NE   (206) 526-6329   fax
  Seattle, WA  98115   (206) 526-6317   main reception
  
  chris.bar...@noaa.gov
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
  
  
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
  
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] What is a numpy.long type?

2013-08-23 Thread Charles R Harris
On Fri, Aug 23, 2013 at 8:59 AM, Chris Barker - NOAA Federal 
chris.bar...@noaa.gov wrote:

 On Aug 22, 2013, at 11:57 PM, David Cournapeau courn...@gmail.com wrote:

 npy_long is indeed just an alias to C long,


 Which means it's likely broken on 32 bit platforms and 64 bit MSVC.

 np.long is an alias to python's long:

 But python's long is an unlimited type--it can't be mapped to a c type at
 all.

 arch -32 python -c import numpy as np; print np.dtype(np.int); print
 np.dtype(np.long)
 int32
 int64


 So this is giving us a 64 bit int--not a bad compromise, but not a python
 long--I've got to wonder why the alias is there at all.

 arch -64 python -c import numpy as np; print np.dtype(np.int); print
 np.dtype(np.long)
 int64
 int64

 Same thing on 64 bit.

 So while np.long is an alias to python long--it apparently is translated
 internally as 64 bit -- everywhere?

 So apparently there is no way to get a platform long. ( or, for that
 matter, a platform anything else, it's just that there is more consistancy
 among common platforms for the others)


I use 'bBhHiIlLqQ' for the C types. Long varies between 32 & 64 bit,
depending on the platform and 64 bit convention chosen. The C int is always
32 bits as far as I know.
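Those one-character codes can be inspected directly; the sizes shown depend
on the platform numpy was built for:

```
import numpy as np

# b/B: signed/unsigned char, h/H: short, i/I: int, l/L: long, q/Q: long long
for c in 'bBhHiIlLqQ':
    dt = np.dtype(c)
    print('%s -> %-7s %d bytes' % (c, dt.name, dt.itemsize))
```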

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] What is a numpy.long type?

2013-08-23 Thread Chris Barker - NOAA Federal
On Fri, Aug 23, 2013 at 8:11 AM, Sebastian Berg
sebast...@sipsolutions.net wrote:
 So this is giving us a 64 bit int--not a bad compromise, but not a
 python long--I've got to wonder why the alias is there at all.

 It is there because you can't remove it :).

sure we could -- not that we'd want to

 So while np.long is an alias to python long--it apparently is
 translated internally as 64 bit -- everywhere?

 Not sure how a python long is translated...

The big mystery!

 An np.int_ is a platform long, since the python ints are C longs. It is
 a bit weird naming, but it is transparent. Check
 http://docs.scipy.org/doc/numpy-dev/reference/arrays.scalars.html#built-in-scalar-types

cool, thanks.

 you get everything platform-dependent there really. `intc` is an int,
 `int_` is a long, and you also get `longlong`, as well as `intp` which is an
 ssize_t, etc.

great, thanks. That's helpful.

-Chris

-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] What is a numpy.long type?

2013-08-23 Thread Chris Barker - NOAA Federal
On Fri, Aug 23, 2013 at 8:15 AM, Charles R Harris
charlesr.har...@gmail.com wrote:
 I use 'bBhHiIlLqQ' for the C types. Long varies between 32 & 64 bit,
 depending on the platform and 64 bit convention chosen. The C int is always
 32 bits as far as I know.

Well, not in the spec, but in practice, probably. Maybe not on some
embedded platforms?

not my issue...

-Chris

-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] RAM problem during code execution - Numpy arrays

2013-08-23 Thread Daπid
On 23 August 2013 16:59, Benjamin Root ben.r...@ou.edu wrote:
 A lot of the code you have here can be greatly simplified. I would start
 with just trying to get rid of appends as much as possible and use
 preallocated arrays with np.empty() or np.ones() or the likes.

Also, if you don't know beforehand the final size of the array (I find
it difficult to follow the flow of the program, it is quite lengthy), you
can use lists as a temporary thing:

results = []
while some_condition:
    res = do_something()
    results.append(res)
results = np.array(results)

Also, to track things down it may help to encapsulate pieces of code
inside functions. There are several reasons for this: it will make the
code more readable and easier to test, and you can run pieces of code
independently to find where your memory is growing.
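A minimal sketch of that structure (the names are made up and the body is a
stand-in, not the real intersection test; the point is that each piece can
be profiled and tested on its own, and temporaries are freed on return):

```
import numpy as np
from scipy.stats import uniform

def one_experiment(N, b, h):
    # a single Monte Carlo realisation
    x1 = uniform.rvs(loc=-b/2., scale=b, size=N)
    y1 = uniform.rvs(loc=-h/2., scale=h, size=N)
    return x1.mean() + y1.mean()   # stand-in for the real computation

def run_all(cant_de_cadenas, numero_experimentos, b=10, h=10):
    # preallocate the results instead of growing arrays with np.append
    out = np.empty((len(cant_de_cadenas), numero_experimentos))
    for i, N in enumerate(cant_de_cadenas):
        for u in range(numero_experimentos):
            out[i, u] = one_experiment(N, b, h)
    return out
```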


I hope it is of some help.

David.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] OS X binaries for releases

2013-08-23 Thread Russell E. Owen
In article 
CAH6Pt5o32Otdhk2Ms5Cy5Zo=mn48h8x2wbswk92etub4mmr...@mail.gmail.com,
 Matthew Brett matthew.br...@gmail.com wrote:

 Hi,
 
 On Thu, Aug 22, 2013 at 12:14 PM, Russell E. Owen ro...@uw.edu wrote:
  In article
  cabl7cqjacxp2grtt8hvmayajrm0xmtn1qt71wkdnbgq7dlu...@mail.gmail.com,
   Ralf Gommers ralf.gomm...@gmail.com wrote:
 
  Hi all,
 
  Building binaries for releases is currently quite complex and
  time-consuming. For OS X we need two different machines, because we still
  provide binaries for OS X 10.5 and PPC machines. I propose to not do this
  anymore. It doesn't mean we completely drop support for 10.5 and PPC, just
  that we don't produce binaries. PPC was phased out in 2006 and OS X 10.6
  came out in 2009, so there can't be a lot of demand for it (and the
  download stats at
  http://sourceforge.net/projects/numpy/files/NumPy/1.7.1/ confirm this).
 
  Furthermore I propose to not provide 2.6 binaries anymore. Downloads of 2.6
  OS X binaries were 5% of the 2.7 ones. We did the same with 2.4 for a long
  time - support it but no binaries.
 
  So what we'd have left at the moment is only the 64-bit/32-bit universal
  binary for 10.6 and up. What we finally need to add is 3.x OS X binaries.
  We can make an attempt to build these on 10.8 - since we have access to a
  hosted 10.8 Mac Mini it would allow all devs to easily do a release
  (leaving aside the Windows issue). If anyone has tried the 10.6 SDK on 10.8
  and knows if it actually works, that would be helpful.
 
  Any concerns, objections?
 
  I am in strong agreement.
 
  I'll be interested to learn how you make binary installers for python
  3.x because the standard version of bdist_mpkg will not do it. I have
  heard of two other projects (forks or variants of bdist_mpkg) that will,
  but I have no idea if either is supported.
 
 I think I'm the owner of one of the forks; I'm supporting it, but I
 should certainly make a release soon too.

That sounds promising. Can you suggest a non-released commit that is 
stable enough to try, or should we wait for a release?

Also, is there a way to combine multiple packages into one binary 
installer? (matplotlib used to include python-dateutil, pytz and six, but 
1.3 does not).

  I have been able to build packages on 10.8 using
  MACOSX_DEPLOYMENT_TARGET=10.6 that will run on 10.6, so it will probably
  work. However I have run into several odd problems over the years
  building a binary installer on a newer system only to find it won't work
  on older systems for various reasons. Thus my personal recommendation is
  that you build on 10.6 if you want an installer that reliably works for
  10.6 and later. I keep an older computer around for this reason. In fact
  that is one good reason to drop support for ancient operating systems
  and PPC.
 
 I'm sitting next to a 10.6 machine you are welcome to use; just let me
 know, I'll give you login access.

Thank you. Personally I keep an older laptop around that can run 
10.6 (and even 10.4 and 10.5, which was handy when I made binaries that 
supported 10.3.9 and later -- no need for that these days), so I don't 
need it, but somebody else working on matplotlib binaries might.

-- Russell

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] OS X binaries for releases

2013-08-23 Thread Matthew Brett
Hi,

On Fri, Aug 23, 2013 at 1:32 PM, Russell E. Owen ro...@uw.edu wrote:
 In article
 CAH6Pt5o32Otdhk2Ms5Cy5Zo=mn48h8x2wbswk92etub4mmr...@mail.gmail.com,
  Matthew Brett matthew.br...@gmail.com wrote:

 Hi,

 On Thu, Aug 22, 2013 at 12:14 PM, Russell E. Owen ro...@uw.edu wrote:
  In article
  cabl7cqjacxp2grtt8hvmayajrm0xmtn1qt71wkdnbgq7dlu...@mail.gmail.com,
   Ralf Gommers ralf.gomm...@gmail.com wrote:
 
  Hi all,
 
  Building binaries for releases is currently quite complex and
  time-consuming. For OS X we need two different machines, because we still
  provide binaries for OS X 10.5 and PPC machines. I propose to not do this
  anymore. It doesn't mean we completely drop support for 10.5 and PPC, just
  that we don't produce binaries. PPC was phased out in 2006 and OS X 10.6
  came out in 2009, so there can't be a lot of demand for it (and the
  download stats at
  http://sourceforge.net/projects/numpy/files/NumPy/1.7.1/ confirm this).
 
  Furthermore I propose to not provide 2.6 binaries anymore. Downloads of 
  2.6
  OS X binaries were 5% of the 2.7 ones. We did the same with 2.4 for a 
  long
  time - support it but no binaries.
 
  So what we'd have left at the moment is only the 64-bit/32-bit universal
  binary for 10.6 and up. What we finally need to add is 3.x OS X binaries.
  We can make an attempt to build these on 10.8 - since we have access to a
  hosted 10.8 Mac Mini it would allow all devs to easily do a release
  (leaving aside the Windows issue). If anyone has tried the 10.6 SDK on 
  10.8
  and knows if it actually works, that would be helpful.
 
  Any concerns, objections?
 
  I am in strong agreement.
 
  I'll be interested to learn how you make binary installers for python
  3.x because the standard version of bdist_mpkg will not do it. I have
  heard of two other projects (forks or variants of bdist_mpkg) that will,
   but I have no idea if either is supported.

  I think I'm the owner of one of the forks; I'm supporting it, but I
  should certainly make a release soon too.

 That sounds promising. Can you suggest a non-released commit that is
 stable enough to try, or should we wait for a release?

It has hardly changed since the Python 3 port - the current head
should be fine, I'm using it for our installers.  But I will get to a
release soon.

 Also, is there a way to combine multiple packages into one binary
 installer? (matplotlib used to include python-dateutil, pytz and six, but
 1.3 does not).

Well - yes - by hacking.  I did something like this to make a huge
scientific python installer for a course I'm teaching:

https://github.com/matthew-brett/reginald

Basically, you build the mpkg files for each thing you want to
install, then copy the sub-packages from the mpkg into a mpkg
megapackage (see the README for what I mean).
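Roughly, the hack looks like this -- a sketch only, with made-up package
names, assuming the usual bundle-style layout (Contents/Packages/*.pkg);
the Info.plist edits are what the reginald README actually documents:

```
import glob
import shutil

# start the megapackage from one existing bundle-style installer
shutil.copytree('numpy-1.7.1-py2.7.mpkg', 'mega.mpkg')

# graft the sub-packages of the other installers into its Packages dir
for pkg in glob.glob('scipy-0.12.0-py2.7.mpkg/Contents/Packages/*.pkg'):
    shutil.copytree(pkg, 'mega.mpkg/Contents/Packages/' + pkg.rsplit('/', 1)[1])

# the megapackage's top-level Info.plist then has to be edited by hand so
# the installer knows about the grafted sub-packages
```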

I should really automate this better - it was pretty easy to build a
large and useful distribution this way.

Cheers,

Matthew
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Stick (line segments) percolation algorithm - graph theory?

2013-08-23 Thread Josè Luis Mietta
Hi experts!
I wrote an algorithm to study stick percolation (i.e.: networks of line 
segments that intersect each other). In my algorithm N sticks (line segments) 
are created inside a rectangular box of sides 'b' and 'h' and then, one by one, 
the algorithm explores the intersections between all line segments. This is a 
Monte Carlo simulation, so the 'experiment' is executed many times (no less 
than 100 times). Written like that, a lot of RAM is consumed:
array_x1 = uniform.rvs(loc=-b/2, scale=b, size=N)
array_y1 = uniform.rvs(loc=-h/2, scale=h, size=N)
array_x2 = uniform.rvs(loc=-b/2, scale=b, size=N)
array_y2 = uniform.rvs(loc=-h/2, scale=h, size=N)

M = np.zeros([N, N])

for u in xrange(100):   # this '100' is the number of experiments
    for j in xrange(N):
        if j > 0:
            x_A1B1 = array_x2[j]-array_x1[j]
            y_A1B1 = array_y2[j]-array_y1[j]
            x_A1A2 = array_x1[0:j]-array_x1[j]
            y_A1A2 = array_y1[0:j]-array_y1[j]
            x_A2A1 = -1*x_A1A2
            y_A2A1 = -1*y_A1A2
            x_A2B2 = array_x2[0:j]-array_x1[0:j]
            y_A2B2 = array_y2[0:j]-array_y1[0:j]
            x_A1B2 = array_x2[0:j]-array_x1[j]
            y_A1B2 = array_y2[0:j]-array_y1[j]
            x_A2B1 = array_x2[j]-array_x1[0:j]
            y_A2B1 = array_y2[j]-array_y1[0:j]

            p1 = x_A1B1*y_A1A2 - y_A1B1*x_A1A2
            p2 = x_A1B1*y_A1B2 - y_A1B1*x_A1B2
            p3 = x_A2B2*y_A2B1 - y_A2B2*x_A2B1
            p4 = x_A2B2*y_A2A1 - y_A2B2*x_A2A1

            condicion_1 = p1*p2
            condicion_2 = p3*p4

            for k in xrange(j):
                if condicion_1[k] <= 0 and condicion_2[k] <= 0:
                    M[j, k] = 1

        if j+1 < N:
            x_A1B1 = array_x2[j]-array_x1[j]
            y_A1B1 = array_y2[j]-array_y1[j]
            x_A1A2 = array_x1[j+1:]-array_x1[j]
            y_A1A2 = array_y1[j+1:]-array_y1[j]
            x_A2A1 = -1*x_A1A2
            y_A2A1 = -1*y_A1A2
            x_A2B2 = array_x2[j+1:]-array_x1[j+1:]
            y_A2B2 = array_y2[j+1:]-array_y1[j+1:]
            x_A1B2 = array_x2[j+1:]-array_x1[j]
            y_A1B2 = array_y2[j+1:]-array_y1[j]
            x_A2B1 = array_x2[j]-array_x1[j+1:]
            y_A2B1 = array_y2[j]-array_y1[j+1:]

            p1 = x_A1B1*y_A1A2 - y_A1B1*x_A1A2
            p2 = x_A1B1*y_A1B2 - y_A1B1*x_A1B2
            p3 = x_A2B2*y_A2B1 - y_A2B2*x_A2B1
            p4 = x_A2B2*y_A2A1 - y_A2B2*x_A2A1

            condicion_1 = p1*p2
            condicion_2 = p3*p4

            for k in xrange(N-j-1):
                if condicion_1[k] <= 0 and condicion_2[k] <= 0:
                    M[j, k+j+1] = 1

Here, the element Mij=1 if stick i intersects stick j, and Mij=0 if not.
How can I optimize my algorithm? Is graph theory useful in this case?
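(M is exactly the adjacency matrix of the intersection graph, so cluster
questions -- which sticks form one connected network -- reduce to connected
components; a minimal sketch, assuming scipy >= 0.11 for csgraph:)

```
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# toy intersection matrix: sticks 0-1-2 form one cluster, 3-4 another
M = np.zeros((5, 5))
M[0, 1] = M[1, 2] = M[3, 4] = 1

adj = csr_matrix(np.maximum(M, M.T))           # symmetrise: i-j implies j-i
n_clusters, labels = connected_components(adj, directed=False)
print('clusters: %d, labels: %s' % (n_clusters, labels))
```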
Waiting for your answers.
Thanks a lot!
Best regards
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] OS X binaries for releases

2013-08-23 Thread Matthew Brett
Hi,

On Fri, Aug 23, 2013 at 1:38 PM, Matthew Brett matthew.br...@gmail.com wrote:
 Hi,

 On Fri, Aug 23, 2013 at 1:32 PM, Russell E. Owen ro...@uw.edu wrote:
 In article
 CAH6Pt5o32Otdhk2Ms5Cy5Zo=mn48h8x2wbswk92etub4mmr...@mail.gmail.com,
  Matthew Brett matthew.br...@gmail.com wrote:

 Hi,

 On Thu, Aug 22, 2013 at 12:14 PM, Russell E. Owen ro...@uw.edu wrote:
  In article
  cabl7cqjacxp2grtt8hvmayajrm0xmtn1qt71wkdnbgq7dlu...@mail.gmail.com,
   Ralf Gommers ralf.gomm...@gmail.com wrote:
 
  Hi all,
 
  Building binaries for releases is currently quite complex and
  time-consuming. For OS X we need two different machines, because we still
  provide binaries for OS X 10.5 and PPC machines. I propose to not do this
  anymore. It doesn't mean we completely drop support for 10.5 and PPC, 
  just
  that we don't produce binaries. PPC was phased out in 2006 and OS X 10.6
  came out in 2009, so there can't be a lot of demand for it (and the
  download stats at
   http://sourceforge.net/projects/numpy/files/NumPy/1.7.1/ confirm this).
 
  Furthermore I propose to not provide 2.6 binaries anymore. Downloads of 
  2.6
  OS X binaries were 5% of the 2.7 ones. We did the same with 2.4 for a 
  long
  time - support it but no binaries.
 
  So what we'd have left at the moment is only the 64-bit/32-bit universal
  binary for 10.6 and up. What we finally need to add is 3.x OS X binaries.
  We can make an attempt to build these on 10.8 - since we have access to a
  hosted 10.8 Mac Mini it would allow all devs to easily do a release
  (leaving aside the Windows issue). If anyone has tried the 10.6 SDK on 
  10.8
  and knows if it actually works, that would be helpful.
 
  Any concerns, objections?
 
  I am in strong agreement.
 
  I'll be interested to learn how you make binary installers for python
  3.x because the standard version of bdist_mpkg will not do it. I have
  heard of two other projects (forks or variants of bdist_mpkg) that will,
   but I have no idea if either is supported.

  I think I'm the owner of one of the forks; I'm supporting it, but I
  should certainly make a release soon too.

 That sounds promising. Can you suggest a non-released commit that is
 stable enough to try, or should we wait for a release?

 It has hardly changed since the Python 3 port - the current head
 should be fine, I'm using it for our installers.  But I will get to a
 release soon.

I did a release: https://pypi.python.org/pypi/bdist_mpkg/

Please let me know if you hit any problems...

Cheers,

Matthew
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion