Re: [Numpy-discussion] rant against from numpy import * / from pylab import *

2007-08-02 Thread Sebastian Haase
Hi all,
Here a quick update:
I'm trying to build a concise / sparse module containing only
pylab-specific names, not all the names I already have in numpy.
To ease typing, I want to call numpy N and my pylab P.

I'm now using this code:

# code snippet for importing matplotlib
import matplotlib, new
matplotlib.use('WXAgg')
from matplotlib import pylab
P = new.module('pylab_sparse', 'pylab module minus stuff already in numpy')
for k, v in pylab.__dict__.iteritems():
    try:
        if v is N.__dict__[k]:
            continue
    except KeyError:
        pass
    P.__dict__[k] = v

P.ion()
del matplotlib, new, pylab
# end code snippet

The result is some reduction in the number of non-pylab-specific
names in my P module. However, there still seem to be many extra
names left, e.g.:
alltrue, amax, array, ...
look at this:
# 20070802
#  len(dir(pylab))
# 441
#  len(dir(P))
# 346
#  P.nx.numpy.__version__
# '1.0.1'
#  N.__version__
# '1.0.1'
#  N.alltrue
# <function alltrue at 0x01471B70>
#  P.alltrue
# <function alltrue at 0x019142F0>
#  N.alltrue.__doc__
# 'Perform a logical_and over the given axis.'
#  P.alltrue.__doc__
#  #N.alltrue(x, axis=None, out=None)
#  #P.alltrue(x, axis=0)
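For what it's worth, the filtering loop can be packaged with the standard `types` module instead of the deprecated `new` module. This is only a sketch (the helper name `sparse_module` is invented), written in modern Python:

```python
import types

def sparse_module(name, source, base):
    """Build a new module holding only those attributes of `source`
    that are not the very same object in `base` (the identity test
    used in the snippet above)."""
    mod = types.ModuleType(name)
    for k, v in vars(source).items():
        if vars(base).get(k) is v:
            continue  # identical object already lives in `base`; skip it
        setattr(mod, k, v)
    return mod

# Toy demonstration with two hand-made modules standing in for
# numpy (base) and pylab (source):
fake_numpy = types.ModuleType('fake_numpy')
fake_pylab = types.ModuleType('fake_pylab')
shared = lambda: None
fake_numpy.alltrue = shared
fake_pylab.alltrue = shared        # same object -> filtered out
fake_pylab.plot = lambda *a: None  # pylab-only -> kept

P = sparse_module('pylab_sparse', fake_pylab, fake_numpy)
assert hasattr(P, 'plot') and not hasattr(P, 'alltrue')
```

Note that the session above shows N.alltrue and P.alltrue at different addresses: pylab's copy is a distinct object, so an identity test cannot filter it. A name-based test (`k in vars(base)`) would catch such names, at the risk of dropping functions that genuinely differ.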

I'm using matplotlib with
__version__  = '0.90.0'
__revision__ = '$Revision: 3003 $'
__date__ = '$Date: 2007-02-06 22:24:06 -0500 (Tue, 06 Feb 2007) $'


Any hint on how to further reduce the number of names in P?
My ideal would be that the P module (short for pylab) would contain
only the stuff described in the __doc__ strings of `pylab.py` and
`__init__.py` (in matplotlib), plus some extra, undocumented, yet
pylab-specific things.

Thanks
-Sebastian


On 3/16/07, Eric Firing [EMAIL PROTECTED] wrote:
 Sebastian Haase wrote:
  Hi!
  I use the wxPython PyShell.
  I like especially the feature that when typing a module and then the
  dot . I get a popup list of all available functions (names) inside
  that module.
 
  Secondly,  I think it really makes code clearer when one can see where
  a function comes from.
 
  I have a default
  import numpy as N
  executed before my shell even starts.
  In fact I have a bunch of my standard modules imported as some
  single capital letter.
 
  This - I think - is a good compromise in the commonly made "extra
  typing" vs. "unreadable" argument.
 
  a = sin(b) * arange(10,50, .1) * cos(d)
  vs.
  a = N.sin(b) * N.arange(10,50, .1) * N.cos(d)

 I generally do the latter, but really, all those N. bits are still
 visual noise when it comes to reading the code--that is, seeing the
 algorithm rather than where the functions come from.  I don't think
 there is anything wrong with explicitly importing commonly-used names,
 especially things like sin and cos.

 
  I would like to hear some comments by others.
 
 
  On a different note: I just started using pylab, so I added an
  automatic "from matplotlib import pylab as P" -- but now P contains
  everything that I already have in N.  It makes it really hard to
  *find* (as in *see* in the popup-list) the pylab-only functions. --
  what can I do about this?

 A quick and dirty solution would be to comment out most of the imports
 in pylab.py; they are not needed for the pylab functions and are there
 only to give people lots of functionality in a single namespace.

 I am cross-posting this to matplotlib-users because it involves mpl, and
 an alternative solution would be for us to add an rcParam entry to allow
 one to turn off all of the namespace consolidation.  A danger is that if
 someone is using "from pylab import *" in a script, then whether it
 would run would depend on the matplotlibrc file.  To get around that,
 another possibility would be to break pylab.py into two parts, with
 pylab.py continuing to do the namespace consolidation and importing the
 second part, which would contain the actual pylab functions.  Then if
 you don't want the namespace consolidation, you could simply import the
 second part instead of pylab.  There may be devils in the details, but
 it seems to me that this last alternative--splitting pylab.py--might
 make a number of people happier while having no adverse effects on
 everyone else.

 Eric
 
 
  Thanks,
  Sebastian
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] fourier with single precision

2007-08-02 Thread Lars Friedrich
Hello,

David Cournapeau wrote:
 As far as I can read from the fft code in numpy, only double is 
 supported at the moment, unfortunately. Note that you can get some speed 
 by using scipy.fftpack methods instead, if scipy is an option for you.

What I understood is that numpy uses FFTPACK's algorithms. From 
www.netlib.org/fftpack (is this the right address?) I gathered that 
there are single-precision and double-precision versions of the 
algorithms. How hard would it be (for example, for me...) to add the 
single-precision versions to numpy? I am not a decent C hacker, but if 
someone tells me that this task is not *too* hard, I would start looking 
more closely at the code...

Would it make sense that, if one passes an array of dtype = 
numpy.float32 to the fft function, a complex64 is returned, and if one 
passes an array of dtype = numpy.float64, a complex128 is returned?
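The proposed rule amounts to a two-entry lookup table. As a sketch only (the helper name is invented, and numpy's fft did not dispatch on input dtype at the time):

```python
# Proposed mapping from the real input dtype to the complex result
# dtype (dtype names as strings for illustration):
PROPOSED_RESULT_DTYPE = {
    'float32': 'complex64',    # single in, single-precision complex out
    'float64': 'complex128',   # double in, double-precision complex out
}

def fft_result_dtype(input_dtype, default='complex128'):
    """Result dtype for a real-input FFT under the proposed rule."""
    return PROPOSED_RESULT_DTYPE.get(input_dtype, default)

assert fft_result_dtype('float32') == 'complex64'
assert fft_result_dtype('float64') == 'complex128'
```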

Lars


[Numpy-discussion] 16bit Integer Array/Scalar Inconsistency

2007-08-02 Thread Ryan May
Hi,

I ran into this while debugging a script today:

In [1]: import numpy as N

In [2]: N.__version__
Out[2]: '1.0.3'

In [3]: d = N.array([32767], dtype=N.int16)

In [4]: d + 32767
Out[4]: array([-2], dtype=int16)

In [5]: d[0] + 32767
Out[5]: 65534

In [6]: type(d[0] + 32767)
Out[6]: <type 'numpy.int64'>

In [7]: type(d[0])
Out[7]: <type 'numpy.int16'>

It seems that numpy will automatically promote the scalar to avoid
overflow, but not in the array case.  Is this inconsistency a bug, or
just a (known) gotcha?

I myself don't have any problems with the array not being promoted
automatically, but the inconsistency with the scalar operation made
debugging my problem more difficult.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


[Numpy-discussion] reference leaks in numpy.asarray

2007-08-02 Thread Lisandro Dalcin
Using numpy-1.0.3, I believe there is a reference leak somewhere.
Using a debug build of Python 2.5.1 (--with-pydebug), I get the
following:

import sys, gc
import numpy

def testleaks(func, args=(), kargs={}, repeats=5):
    for i in xrange(repeats):
        r1 = sys.gettotalrefcount()
        func(*args, **kargs)
        r2 = sys.gettotalrefcount()
        rd = r2 - r1
        print 'before: %d, after: %d, diff: [%d]' % (r1, r2, rd)

def npy_asarray_1():
    a = numpy.zeros(5, dtype=int)
    b = numpy.asarray(a, dtype=float)
    del a, b

def npy_asarray_2():
    a = numpy.zeros(5, dtype=float)
    b = numpy.asarray(a, dtype=float)
    del a, b

if __name__ == '__main__':
    testleaks(npy_asarray_1)
    testleaks(npy_asarray_2)


$ python npyleaktest.py
before: 84531, after: 84532, diff: [1]
before: 84534, after: 84534, diff: [0]
before: 84534, after: 84534, diff: [0]
before: 84534, after: 84534, diff: [0]
before: 84534, after: 84534, diff: [0]
before: 84531, after: 84533, diff: [2]
before: 84535, after: 84536, diff: [1]
before: 84536, after: 84537, diff: [1]
before: 84537, after: 84538, diff: [1]
before: 84538, after: 84539, diff: [1]

It seems npy_asarray_2() is leaking a reference. Am I missing
something here? The same problem shows up in C, using
PyArray_FROM_OTF (no time to go inside to see what's going on, sorry).

If this is known and solved in SVN, please forgive me.

Regards,


-- 
Lisandro Dalcín
---
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594


Re: [Numpy-discussion] 16bit Integer Array/Scalar Inconsistency

2007-08-02 Thread Robert Kern
Ryan May wrote:
 Hi,
 
 I ran into this while debugging a script today:
 
 In [1]: import numpy as N
 
 In [2]: N.__version__
 Out[2]: '1.0.3'
 
 In [3]: d = N.array([32767], dtype=N.int16)
 
 In [4]: d + 32767
 Out[4]: array([-2], dtype=int16)
 
 In [5]: d[0] + 32767
 Out[5]: 65534
 
 In [6]: type(d[0] + 32767)
 Out[6]: <type 'numpy.int64'>
 
 In [7]: type(d[0])
 Out[7]: <type 'numpy.int16'>
 
 It seems that numpy will automatically promote the scalar to avoid
 overflow, but not in the array case.  Is this inconsistency a bug, or
 just a (known) gotcha?

Known feature. When arrays and scalars are mixed and the types are within the
same kind (e.g. both are integer types just at different precisions), the type
of the scalar is ignored. This solves one of the usability issues with trying to
use lower precisions; you still want to be able to divide by 2.0, for example,
without automatically up-casting your very large float32 array.
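The array result in Ryan's session follows directly from 16-bit two's-complement wraparound, which can be mimicked in plain Python (the helper name is invented for illustration):

```python
def wrap_int16(x):
    """Reduce an integer into the int16 range [-32768, 32767],
    the way fixed-width two's-complement arithmetic does."""
    return ((x + 32768) % 65536) - 32768

# Array case: both operands are treated as int16, so the sum wraps.
print(wrap_int16(32767 + 32767))   # -2
# Scalar case: the int16 scalar is promoted, so no wrapping occurs.
print(32767 + 32767)               # 65534
```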

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] fourier with single precision

2007-08-02 Thread Warren Focke


On Thu, 2 Aug 2007, Lars Friedrich wrote:

 What I understood is that numpy uses FFTPACK's algorithms.

Sort of.  It appears to be a hand translation from F77 to C.

 From www.netlib.org/fftpack (is this the right address?) I took that 
 there is a single-precision and double-precision-version of the 
 algorithms. How hard would it be (for example for me...) to add the 
 single-precision versions to numpy? I am not a decent C-hacker, but if 
 someone tells me, that this task is not *too* hard, I would start 
 looking more closely at the code...

It shouldn't be hard.  fftpack.c will make a single-precision version if 
DOUBLE is not defined at compile time.

 Would it make sense, that if one passes an array of dtype =
 numpy.float32 to the fft function, a complex64 is returned, and if one
 passes an array of dtype = numpy.float64, a complex128 is returned?

Sounds like reasonable default behavior.  Might be useful if the caller 
could override it.


w



Re: [Numpy-discussion] reference leaks in numpy.asarray

2007-08-02 Thread Timothy Hochberg
On 8/2/07, Lisandro Dalcin [EMAIL PROTECTED] wrote:

 using numpy-1.0.3, I believe there are a reference leak somewhere.
 Using a debug build of Python 2.5.1 (--with-pydebug), I get the
 following

 import sys, gc
 import numpy

 def testleaks(func, args=(), kargs={}, repeats=5):
     for i in xrange(repeats):
         r1 = sys.gettotalrefcount()
         func(*args, **kargs)
         r2 = sys.gettotalrefcount()
         rd = r2 - r1
         print 'before: %d, after: %d, diff: [%d]' % (r1, r2, rd)

 def npy_asarray_1():
     a = numpy.zeros(5, dtype=int)
     b = numpy.asarray(a, dtype=float)
     del a, b

 def npy_asarray_2():
     a = numpy.zeros(5, dtype=float)
     b = numpy.asarray(a, dtype=float)
     del a, b

 if __name__ == '__main__':
     testleaks(npy_asarray_1)
     testleaks(npy_asarray_2)


 $ python npyleaktest.py
 before: 84531, after: 84532, diff: [1]
 before: 84534, after: 84534, diff: [0]
 before: 84534, after: 84534, diff: [0]
 before: 84534, after: 84534, diff: [0]
 before: 84534, after: 84534, diff: [0]
 before: 84531, after: 84533, diff: [2]
 before: 84535, after: 84536, diff: [1]
 before: 84536, after: 84537, diff: [1]
 before: 84537, after: 84538, diff: [1]
 before: 84538, after: 84539, diff: [1]

 It seems npy_asarray_2() is leaking a reference. I am  missing
 something here?. The same problem is found in C, using
 PyArray_FROM_OTF (no time to go inside to see what's going on, sorry)

 If this is know and solved in SVN, please forget me.


I don't have a debug build handy to test this on, but this might not be a
reference leak. Since you are checking the count before and after each
cycle, it could be that there are cycles being created that are subsequently
cleaned up by the garbage collector.

Can you try instead to look at the difference between the reference count at
the end of each cycle with the reference count before the first cycle? If
that goes up indefinitely, then it's probably a leak. If it bounces around
or levels off, then probably not. You'd probably want to run a bunch of
repeats just to be sure.
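That protocol might be sketched like this (the function name `leak_trend` is invented; on a normal, non-debug interpreter `sys.gettotalrefcount` does not exist, so the sketch falls back to counting gc-tracked objects, which is noisier but shows the same trend):

```python
import gc
import sys

def leak_trend(func, repeats=50):
    """Record, after every cycle, the growth in the reference count
    relative to a baseline taken before the first cycle.  A real leak
    grows without bound; gc-collected cycles level off."""
    # sys.gettotalrefcount exists only on --with-pydebug builds;
    # otherwise count gc-tracked objects instead.
    count = getattr(sys, 'gettotalrefcount',
                    lambda: len(gc.get_objects()))
    gc.collect()
    base = count()
    deltas = []
    for _ in range(repeats):
        func()
        gc.collect()
        deltas.append(count() - base)
    return deltas

# A deliberately leaky function: each call parks one new object in a
# module-level list, so the trend climbs steadily.
_parked = []
def leaky():
    _parked.append([])

deltas = leak_trend(leaky)
assert deltas[-1] > deltas[0]   # climbs without bound: a leak
```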
regards,
-tim






-- 
.  __
.   |-\
.
.  [EMAIL PROTECTED]


Re: [Numpy-discussion] reference leaks in numpy.asarray

2007-08-02 Thread Lisandro Dalcin
Oops, I forgot to mention I was using gc.collect(); I accidentally
deleted it from my mail.

Anyway, the following

import sys, gc
import numpy

def test():
    a = numpy.zeros(5, dtype=float)
    while 1:
        gc.collect()
        b = numpy.asarray(a, dtype=float); del b
        gc.collect()
        print sys.gettotalrefcount()

test()

shows on my box always one more totalrefcount on each pass, so it is
always increasing. IMHO, there is still a leak somewhere.

And now, I am not sure if PyArray_FromAny is the source of the problem.



On 8/2/07, Timothy Hochberg [EMAIL PROTECTED] wrote:



 On 8/2/07, Lisandro Dalcin [EMAIL PROTECTED] wrote:
  [...]

 I don't have a debug build handy to test this on, but this might not be a
 reference leak. Since you are checking the count before and after each
 cycle, it could be that there are cycles being created that are subsequently
 cleaned up by the garbage collector.

 Can you try instead to look at the difference between the reference count at
 the end of each cycle with the reference count before the first cycle? If
 that goes up indefinitely, then it's probably a leak. If it bounces around
 or levels off, then probably not. You'd probably want to run a bunch of
 repeats just to be sure.
 regards,
 -tim





-- 
Lisandro Dalcín
---
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594


Re: [Numpy-discussion] reference leaks in numpy.asarray

2007-08-02 Thread Lisandro Dalcin
I think the problem is in _array_fromobject (seen as numpy.array in
Python).

This function parses its arguments using the converter
PyArray_DescrConverter2, which RETURNS A NEW REFERENCE. This
reference is never DECREF'ed.

BTW, a lesson I've learned from the pattern

if (!PyArg_ParseXXX()) return NULL;

is that converter functions should NEVER return new references to
PyObject*'s, because if the conversion fails (due to a later wrong
argument), you leak a reference to the 'converted' object.

If this pattern is used everywhere in numpy, there is a good chance
of leaking references whenever bad args are passed to C functions.
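The hazard can be mimicked in pure Python (all names here are illustrative, not numpy's actual code; the list `acquired` stands in for C-level references that were INCREF'ed and never released):

```python
# A converter that "acquires" eagerly (the analogue of returning a new
# reference) leaks when a *later* argument fails to parse, because the
# early-return path releases nothing.
acquired = []   # stands in for objects whose refcount was bumped

def convert_dtype(arg):
    acquired.append(arg)        # "new reference" handed to the caller
    return arg

def parse_args(args):
    out = []
    for a in args:
        if a is None:           # this argument fails to convert...
            return None         # ...and earlier conversions are never released
        out.append(convert_dtype(a))
    return out

parse_args(['float64', None])   # second argument is bad
assert acquired == ['float64']  # the first conversion "leaked"
```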

Regards,



On 8/2/07, Timothy Hochberg [EMAIL PROTECTED] wrote:



 On 8/2/07, Lisandro Dalcin [EMAIL PROTECTED] wrote:
  [...]

 I don't have a debug build handy to test this on, but this might not be a
 reference leak. Since you are checking the count before and after each
 cycle, it could be that there are cycles being created that are subsequently
 cleaned up by the garbage collector.

 Can you try instead to look at the difference between the reference count at
 the end of each cycle with the reference count before the first cycle? If
 that goes up indefinitely, then it's probably a leak. If it bounces around
 or levels off, then probably not. You'd probably want to run a bunch of
 repeats just to be sure.
 regards,
 -tim





-- 
Lisandro Dalcín
---
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594


Re: [Numpy-discussion] fourier with single precision

2007-08-02 Thread Charles R Harris
On 8/2/07, Warren Focke [EMAIL PROTECTED] wrote:



 On Thu, 2 Aug 2007, Lars Friedrich wrote:

  What I understood is that numpy uses FFTPACK's algorithms.

 Sort of.  It appears to be a hand translation from F77 to C.

  From www.netlib.org/fftpack (is this the right address?) I took that
  there is a single-precision and double-precision-version of the
  algorithms. How hard would it be (for example for me...) to add the
  single-precision versions to numpy? I am not a decent C-hacker, but if
  someone tells me, that this task is not *too* hard, I would start
  looking more closely at the code...

 It shouldn't be hard.  fftpack.c will make a single-precision version if
 DOUBLE is not defined at compile time.

  Would it make sense, that if one passes an array of dtype =
  numpy.float32 to the fft function, a complex64 is returned, and if one
  passes an array of dtype = numpy.float64, a complex128 is returned?

 Sounds like reasonable default behavior.  Might be useful if the caller
 could override it.


On x86 machines the main virtue would be smaller and more cache-friendly
arrays, because double-precision arithmetic is about the same speed as
single precision, sometimes even a bit faster. The PPC architecture does
have faster single than double precision, so there it could make a difference.

Chuck


Re: [Numpy-discussion] reference leaks in numpy.asarray

2007-08-02 Thread Lisandro Dalcin
This patch corrected the problem for me; the numpy tests pass...


On 8/2/07, Lisandro Dalcin [EMAIL PROTECTED] wrote:
 I think the problem is in  _array_fromobject (seen as numpy.array in Python)


-- 
Lisandro Dalcín
---
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594


array.patch
Description: Binary data


Re: [Numpy-discussion] fourier with single precision

2007-08-02 Thread Warren Focke


On Thu, 2 Aug 2007, Charles R Harris wrote:

 On X86 machines the main virtue would be smaller and more cache friendly
 arrays because double precision arithmetic is about the same speed as single
 precision, sometimes even a bit faster. The PPC architecture does have
 faster single than double precision, so there it could make a difference.

Yeah, I was wondering if I should mention that.  I think SSE has real 
single precision, if you can convince the compiler to do it that way. 
Even better if it could be vectorized with SSE.

w
