Re: [Numpy-discussion] numpy 1.9b1 bug in pad function?

2014-06-14 Thread Stefan van der Walt
On 2014-06-14 14:40:29, Nadav Horesh nad...@visionsense.com wrote:
 This is most likely a documentation error since:

 In [7]: np.pad(a)
 ---------------------------------------------------------------------------
 TypeError                                 Traceback (most recent call last)
 <ipython-input-7-7a0346d77134> in <module>()
 ----> 1 np.pad(a)

 TypeError: pad() missing 1 required positional argument: 'pad_width'

That is because the signature is

pad(array, pad_width, mode=None, ...)

But mode *does* need to be specified.  I see why this can be confusing,
though, so perhaps we should simply make mode a positional argument too.
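
For reference, the call works as soon as mode is given explicitly (an
illustrative example, not from the original report):

import numpy as np

a = np.arange(3)
np.pad(a, 2, mode='constant')   # array([0, 0, 0, 1, 2, 0, 0])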

Stéfan
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy 1.9b1 bug in pad function?

2014-06-14 Thread Stefan van der Walt
On 2014-06-14 14:40:29, Nadav Horesh nad...@visionsense.com wrote:
 TypeError: pad() missing 1 required positional argument: 'pad_width'

I've added a PR here for further discussion:

https://github.com/numpy/numpy/pull/4808

Stéfan
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] slow numpy.clip ?

2006-12-18 Thread Stefan van der Walt
Hi David

The benchmark below isn't quite correct.  In clip2_bench the data is
effectively only clipped once.  I attach a slightly modified version,
for which the benchmark results look like this:

   4 function calls in 4.631 CPU seconds

   Ordered by: cumulative time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.003    0.003    4.631    4.631 clipb.py:10(bench_clip)
        1    2.149    2.149    2.149    2.149 clipb.py:16(clip1_bench)
        1    2.070    2.070    2.070    2.070 clipb.py:19(clip2_bench)
        1    0.409    0.409    0.409    0.409 clipb.py:6(generate_data_2d)
        0    0.000             0.000          profile:0(profiler)


The remaining difference is probably a cache effect.  If I change the
order, so that clip1_bench is executed last, I see:

   4 function calls in 5.250 CPU seconds

   Ordered by: cumulative time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.003    0.003    5.250    5.250 clipb.py:10(bench_clip)
        1    2.588    2.588    2.588    2.588 clipb.py:19(clip2_bench)
        1    2.148    2.148    2.148    2.148 clipb.py:16(clip1_bench)
        1    0.512    0.512    0.512    0.512 clipb.py:6(generate_data_2d)
        0    0.000             0.000          profile:0(profiler)


Regards
Stéfan

On Mon, Dec 18, 2006 at 04:17:08PM +0900, David Cournapeau wrote:
 Hi,
 
 When trying to speed up some matplotlib routines with the matplotlib 
 dev team, I noticed that numpy.clip is pretty slow: clip(data, m, M) is 
 slower than a direct numpy implementation (that is data[data<m] = m; 
 data[data>M] = M; return data.copy()). My understanding is that the code 
 does the same thing, right ?
 
 Below, a small script which shows the difference (twice slower for a 
 8000x256 array on my workstation):
 

[...]
import numpy as N

#==
# To benchmark imshow alone
#==
def generate_data_2d(fr, nwin, hop, len):
    nframes = int(1.0 * fr / hop * len)
    return N.random.randn(nframes, nwin)

def bench_clip():
    m    = -1.
    M    = 1.
    # 2 minutes (120 sec) of sound @ 8 kHz with 256 samples with 50 % overlap
    data = generate_data_2d(8000, 256, 128, 120)

    def clip1_bench(data, niter):
        for i in range(niter):
            blop = data.clip(m, M)
    def clip2_bench(data, niter):
        for i in range(niter):
            blop = data.copy()
            blop[blop < m] = m
            blop[blop > M] = M

    clip2_bench(data, 10)
    clip1_bench(data, 10)

if __name__ == '__main__':
    # test clip
    import hotshot, hotshot.stats
    profile_file = 'clip.prof'
    prof = hotshot.Profile(profile_file, lineevents=1)
    prof.runcall(bench_clip)
    p = hotshot.stats.load(profile_file)
    print p.sort_stats('cumulative').print_stats(20)
    prof.close()
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] slow numpy.clip ?

2006-12-18 Thread Stefan van der Walt
On Mon, Dec 18, 2006 at 05:45:09PM +0900, David Cournapeau wrote:
 Yes, I of course mistyped the '<' and the copy. But the function is still 
 moderately faster on my workstation:
 
   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.003    0.003    3.944    3.944 slowclip.py:10(bench_clip)
        1    0.011    0.011    2.001    2.001 slowclip.py:16(clip1_bench)
       10    1.990    0.199    1.990    0.199 /home/david/local/lib/python2.4/site-packages/numpy/core/fromnumeric.py:372(clip)
        1    1.682    1.682    1.682    1.682 slowclip.py:19(clip2_bench)
        1    0.258    0.258    0.258    0.258 slowclip.py:6(generate_data_2d)
        0    0.000             0.000          profile:0(profiler)

Did you try swapping the order of execution (i.e. clip1 second)?

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Unexpected output using numpy.ndarray and __radd__

2006-12-18 Thread Stefan van der Walt
Hi Mark

On Mon, Dec 18, 2006 at 08:30:20AM +0100, Mark Hoffmann wrote:
 The following issue has puzzled me for a while. I want to add a numpy.ndarray
 and an instance of my own class. I define this operation by implementing the
 methods __add__ and __radd__. My programme (including output) looks like:
 
 #!/usr/local/bin/python
 
 import numpy
 
 class Cyclehist:
     def __init__(self, vals):
         self.valuearray = numpy.array(vals)
 
     def __str__(self):
         return 'Cyclehist object: valuearray = ' + str(self.valuearray)
 
     def __add__(self, other):
         print "__add__ : ", self, other
         return self.valuearray + other
 
     def __radd__(self, other):
         print "__radd__ : ", self, other
         return other + self.valuearray
 
 c = Cyclehist([1.0,-21.2,3.2])
 a = numpy.array([-1.0,2.2,-2.2])
 print c + a
 print a + c

In the first instance, c.__add__(a) is called, which works fine.  In
the second, a.__add__(c) is executed, which is your problem, since you
rather want c.__radd__(a) to be executed.  A documentation snippet:

For instance, to evaluate the expression x-y, where y is an
instance of a class that has an __rsub__() method, y.__rsub__(x) is
called if x.__sub__(y) returns NotImplemented.

Note: If the right operand's type is a subclass of the left operand's
type and that subclass provides the reflected method for the
operation, this method will be called before the left operand's
non-reflected method. This behavior allows subclasses to override
their ancestors' operations.

Since a.__add__ does not return NotImplemented, c.__radd__ is not
called where you expect it to be.  I am not sure why broadcasting
takes place here, maybe someone else on the list can elaborate.

To solve your problem, you may want to look into subclassing ndarrays,
as described at http://www.scipy.org/Subclasses.
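
Alternatively, a trick that may help without a full ndarray subclass is to
give the class an __array_priority__ attribute: numpy's binary operators
consult it, and when the right-hand operand wins and defines the reflected
method, the ndarray operation returns NotImplemented so that the reflected
method runs.  A minimal sketch (the priority value is arbitrary; this is a
suggestion, not tested against your class):

import numpy

class Cyclehist:
    __array_priority__ = 10.0   # ask ndarray operations to defer to us

    def __init__(self, vals):
        self.valuearray = numpy.array(vals)

    def __radd__(self, other):
        return other + self.valuearray

c = Cyclehist([1.0, -21.2, 3.2])
a = numpy.array([-1.0, 2.2, -2.2])
print a + c   # now calls c.__radd__(a) instead of broadcasting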

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] subclassing float64 (and friends)

2007-01-03 Thread Stefan van der Walt
On Wed, Jan 03, 2007 at 04:29:10AM -0600, eric jones wrote:
 I am playing around with sub-classing the new-fangled float64 objects 
 and friends.  I really like the new ndarray subclassing features 
 (__array_finalize__, etc.), and was exploring whether or not the scalars 
 worked the same way.  I've stubbed my toe right out of the blocks 
 though.  I can sub-class from standard python floats just fine, but when 
 I try to do the same from float64, I get a traceback. (examples below)  
 Anyone have ideas on how to do this correctly?
 
 from numpy import float64
 
 class MyFloat2(float64):
 
     def __new__(cls, data, my_attr=None):
         obj = float64.__new__(cls, data)
         obj.my_attr = my_attr
         return obj
 
 a = MyFloat2(1.2, my_attr="hello")
 print a, a.my_attr
 
 
 output:
 Traceback (most recent call last):
   File "C:\wrk\eric\trunk\src\lib\geo\examples\scalar_subtype.py", line 33, in ?
     a = MyFloat2(1.2, my_attr="hello")
   File "C:\wrk\eric\trunk\src\lib\geo\examples\scalar_subtype.py", line 30, in __new__
     obj.my_attr = my_attr
 AttributeError: 'numpy.float64' object has no attribute 'my_attr'

With classes defined in C I've often noticed that you can't add
attributes, i.e.

f = N.float64(1.2)
f.x = 1

breaks with

AttributeError: 'numpy.float64' object has no attribute 'x'

The way to fix this for arrays is to first view the array as the new
subclass, i.e.

x = N.array([1])

class Ary(N.ndarray):
    pass

x = x.view(Ary)
x.y = 1

However, with floats I noticed that the following fails:

import numpy as N

f = N.float64(1.2)

class Floaty(N.float64):
    pass

f.view(Floaty)

with

TypeError: data type cannot be determined from type object

Maybe this is part of the problem?

Regards
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] discussion about array.resize() -- compare to numarray

2007-01-08 Thread Stefan van der Walt
On Fri, Jan 05, 2007 at 01:57:50PM -0800, Russell E Owen wrote:
 I also checked the numpy 1.0.1 help and I confess I don't understand at 
 all what it claims to do if the new size is larger. It first says it 
 repeats a and then it says it zero-fills the output.
 
  >>> help(numpy.resize)
 Help on function resize in module numpy.core.fromnumeric:
 
 resize(a, new_shape)
 resize(a,new_shape) returns a new array with the specified shape.
 The original array's total size can be any size. It
 fills the new array with repeated copies of a.
 
 Note that a.resize(new_shape) will fill array with 0's
 beyond current definition of a.

The docstring refers to the difference between

N.resize(x,6)

and

x.resize(6)
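
To illustrate the difference (session added here for clarity):

>>> import numpy as N
>>> x = N.array([1, 2, 3])
>>> N.resize(x, 6)        # the function repeats copies of the input
array([1, 2, 3, 1, 2, 3])
>>> x.resize(6)           # the method grows x in-place, zero-filling
>>> x
array([1, 2, 3, 0, 0, 0])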

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] segfault in numpy.float64(z) for complex z

2007-01-08 Thread Stefan van der Walt
On Mon, Jan 08, 2007 at 08:11:03AM -0700, Travis Oliphant wrote:
 Tim Leslie wrote:
 
 Hi All,
 
 While debugging a scipy crash I came across the problem outlined in
 
 http://projects.scipy.org/scipy/numpy/ticket/412
 
 Could someone shed some light on why this is happening, I got a bit
 lost in the numpy internals when I tried to track it down.
 
   
 
 
 Recent changes to arrtype_new no doubt.   If somebody with a non-SVN 
 version of numpy could verify that would be great.

How recent?  This was broken in r2679 already.

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] discussion about array.resize() -- compare to numarray

2007-01-08 Thread Stefan van der Walt
On Mon, Jan 08, 2007 at 12:24:01PM -0800, Sebastian Haase wrote:
 On 1/8/07, Robert Kern [EMAIL PROTECTED] wrote:
  Sebastian Haase wrote:
 
   I would suggest treating this as a real bug!
   Then it could be fixed immediately.
 
  Deliberate design decisions don't turn into bugs just because you disagree 
  with
  them. Neither do those where the original decider now disagrees with them.
 
 Please explain again what the original decision was based on.
 I remember that there was an effort at some point to make methods as
 functions more consistent.

In this case, I tend to agree that the behaviour is unexpected.  In
some cases, like with sort, however, I think the difference in
behaviour is useful:

In [1]: import numpy as N

In [2]: x = N.array([3,2,1,2])

In [3]: N.sort(x) # Don't tell Guido!
Out[3]: array([1, 2, 2, 3])

In [4]: x
Out[4]: array([3, 2, 1, 2])

In [5]: x.sort()

In [6]: x
Out[6]: array([1, 2, 2, 3])

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] segfault in numpy.float64(z) for complex z

2007-01-08 Thread Stefan van der Walt
On Tue, Jan 09, 2007 at 10:38:06AM +1100, Tim Leslie wrote:
 On 1/9/07, Stefan van der Walt [EMAIL PROTECTED] wrote:
  On Mon, Jan 08, 2007 at 08:11:03AM -0700, Travis Oliphant wrote:
   Tim Leslie wrote:
  
   Hi All,
   
   While debugging a scipy crash I came across the problem outlined in
   
   http://projects.scipy.org/scipy/numpy/ticket/412
   
   Could someone shed some light on why this is happening, I got a bit
   lost in the numpy internals when I tried to track it down.
   
   
   
  
   Recent changes to arrtype_new no doubt.   If somebody with a non-SVN
   version of numpy could verify that would be great.
 
  How recent?  This was broken in r2679 already.
 
 It looks like it's due to changes in r3493
 
 http://projects.scipy.org/scipy/numpy/changeset/3493
 
 The line at which it barfs is:
 
  if (robj->ob_type == type) return robj;
 
 which is a new piece of code from this changeset.

You're right -- it wasn't broken in r2679 -- I had a newer version of
multiarray lying around.  You may proceed to kick me in the head now.

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] defmatrix.sum

2007-01-08 Thread Stefan van der Walt
Hi all,

I noticed that Tim Leslie picked up on the out=None in defmatrix.sum:

def sum(self, axis=None, dtype=None, out=None):
    """Sum the matrix over the given axis.  If the axis is None, sum
    over all dimensions.  This preserves the orientation of the
    result as a row or column.
    """
    return N.ndarray.sum(self, axis, dtype, out=None)._align(axis)

Surely, that out=None should just be out?  If out is specified, should
sum() still return a value?
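
Presumably the intended line is (my reading, not a committed fix):

return N.ndarray.sum(self, axis, dtype, out=out)._align(axis)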

Cheers
Stéfan

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] No test file found

2007-01-11 Thread Stefan van der Walt
On Thu, Jan 11, 2007 at 03:18:19PM -0800, Keith Goodman wrote:
 I see a lot of 'No test file found' warnings when running
 numpy.test(). What does that mean?

It means your verbosity is set too high.  You'll find that N.test(0,0)
complains much less (although, realistically, you'd probably want to run
N.test(1,0) or simply N.test()).

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] random permutation

2007-01-13 Thread Stefan van der Walt
On Sat, Jan 13, 2007 at 10:01:59AM -0800, Keith Goodman wrote:
 On 1/11/07, Robert Kern [EMAIL PROTECTED] wrote:
  Keith Goodman wrote:
   Why is the first element of the permutation always the same? Am I
   using random.permutation in the right way?
 
   M.__version__
   '1.0rc1'
 
  This has been fixed in more recent versions.
 
http://projects.scipy.org/scipy/numpy/ticket/374
 
 I don't see any unit tests for numpy.random. I guess randomness is hard to 
 test.

Every time we fix a bug, we add a corresponding test to make sure that it
doesn't pop up again.  In this case, take a look in
numpy/core/tests/test_regression.py:

def check_random_shuffle(self, level=rlevel):
    """Ticket #374"""
    a = N.arange(5).reshape((5,1))
    b = a.copy()
    N.random.shuffle(b)
    assert_equal(sorted(b), a)

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] ANN: MaskedArray as a subclass of ndarray - followup

2007-01-19 Thread Stefan van der Walt
On Fri, Jan 19, 2007 at 10:56:16AM +0100, Sven Schreiber wrote:
 Matt Knox schrieb:
 
  
  I am definitely in favor of the new maskedarray implementation. I've been
  working with Pierre on a time series module which is a subclass of the new
  masked array implementation, and having it as a subclass of ndarray 
  definitely
  has advantages (and no real disadvantages that I am aware of).
  
 
 That time series module sounds very interesting! Is it available
 somewhere, or some documentation?

It's in the scipy sandbox.

Edit scipy/Lib/sandbox/enabled_packages.txt and add 'timeseries' on a
line of its own, then recompile.

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] inconsistent behaviour of mean, average and median

2007-01-25 Thread Stefan van der Walt
Hi,

I noticed the following behaviour for empty lists:

In [4]: N.median([])
---------------------------------------------------------------------------
exceptions.IndexError                     Traceback (most recent call last)

/home/stefan/<ipython console>

/home/stefan/lib/python2.4/site-packages/numpy/lib/function_base.py in median(m)
   1081         return sorted[index]
   1082     else:
--> 1083         return (sorted[index-1]+sorted[index])/2.0
   1084
   1085 def trapz(y, x=None, dx=1.0, axis=-1):

IndexError: index out of bounds

In [5]: N.mean([])
Out[5]: nan

In [6]: N.average([])
---------------------------------------------------------------------------
exceptions.ZeroDivisionError              Traceback (most recent call last)

/home/stefan/<ipython console>

/home/stefan/lib/python2.4/site-packages/numpy/lib/function_base.py in average(a, axis, weights, returned)
    294         if not isinstance(d, ndarray):
    295             if d == 0.0:
--> 296                 raise ZeroDivisionError, 'zero denominator in average()'
    297         if returned:
    298             return n/d, d

ZeroDivisionError: zero denominator in average()


Which is the ideal response -- NaN or an exception, and if an exception,
of which kind?

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] average() or mean() errors

2007-01-26 Thread Stefan van der Walt
On Tue, Jan 23, 2007 at 08:29:47PM -0500, Daniel Smith wrote:
 When calling the average() or mean() functions on a small array (3 
 numbers), I am seeing significant numerical errors (on the order of 1% 
 with data to 8 significant digits). The code I am using is essentially:
 
 A = zeros(3)
 A[i] = X
 B = average(A)

I'm not sure I understand:

In [7]: A = N.zeros(3)

In [8]: A[1] = 3.

In [9]: N.average(A)
Out[9]: 1.0

In [11]: A[0] = 2.

In [12]: N.average(A)
Out[12]: 1.6666666666666667

In [13]: (2+3+0)/3.
Out[13]: 1.6666666666666667

In [14]: for i in range(1000):
   ....:     A = N.random.rand(3)
   ....:     assert N.average(A) == N.sum(A)/3.

Maybe you can give a specific code snippet?

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Arrays of poly1d objects, is this a bug?

2007-01-26 Thread Stefan van der Walt
On Fri, Jan 26, 2007 at 12:25:27PM -0700, Fernando Perez wrote:
 Hi all,
 
 I'm puzzled by this behavior a colleague ran into:
 
 In [38]: p1=N.poly1d([1.])
 
 In [39]: a=N.array([p1],dtype='O')
 
 In [40]: a
 Out[40]: array([], shape=(1, 0), dtype=object)
 
 In [42]: print a
 []
 
 In [43]: N.__version__
 Out[43]: '1.0.2.dev3512'
 
 He saw it running r3520 as well.
 
 This looks like a bug to me,  since it seems impossible to make arrays
 of poly1d objects:
 
 In [44]: a=N.array([p1,p1],dtype='O')
 
 In [45]: a
 Out[45]: array([], shape=(2, 0), dtype=object)
 
 In [46]: print a
 []

I think the problem might be that an instance of poly1d can be viewed as an
array (e.g. N.asarray(p1)).

The following works:

x = N.empty((3,),object)
x[0] = p1

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] python's random.random() faster than numpy.random.rand() ???

2007-01-27 Thread Stefan van der Walt
Hi Mark

On Fri, Jan 26, 2007 at 10:17:58AM -0700, Mark P. Miller wrote:
 I've recently been working with numpy's random number generators and 
 noticed that python's core random number generator is faster than 
 numpy's for the uniform distribution.
 
 In other words,
 
 for a in range(100):
  b = random.random()#core python code
 
 is substantially faster than
 
 for a in range(100):
  b = numpy.random.rand()#numpy code

With numpy, you can get around the for-loop by doing

N.random.random(100)

which is much faster:

In [7]: timeit for i in range(10000): random.random()
100 loops, best of 3: 3.92 ms per loop

In [8]: timeit N.random.random(10000)
1000 loops, best of 3: 514 µs per loop

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Different results from repeated calculation

2007-01-27 Thread Stefan van der Walt
On Sat, Jan 27, 2007 at 03:11:58PM -0700, Charles R Harris wrote:
 Does anyone else see this happening?
 
 
 Yes,
 
 test1:  0  differences
 test2:  51  differences
 test3:  0  differences
 
  Oddly, the relative error is always the same:
 
 98 z different 2.0494565872e-16
 99 z different 2.0494565872e-16
 
 Which is nearly the same as the double precision 2.2204460492503131e-16, the
 difference being due to the fact that the precision is defined relative to 1,
 and the error in the computation are in a number relatively larger (more bits
 set, but not yet 2).
 
 So this looks like an error in the LSB of the floating number. Could be
 rounding, could be something not reset quite right. I'm thinking possibly
 hardware at this time, maybe compiler.

Interesting!  I don't see it on

Linux alpha 2.6.17-10-386 #2 Fri Oct 13 18:41:40 UTC 2006 i686 GNU/Linux
vendor_id   : AuthenticAMD
model name  : AMD Athlon(tm) XP 2400+
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov 
pat pse36 mmx fxsr sse syscall mmxext 3dnowext 3dnow up ts


but I do see it on


Linux voyager 2.6.17-10-generic #2 SMP Fri Oct 13 18:45:35 UTC 2006 i686 
GNU/Linux
processor   : 0
vendor_id   : GenuineIntel
model name  : Genuine Intel(R) CPU   T2300  @ 1.66GHz
processor   : 1
vendor_id   : GenuineIntel
model name  : Genuine Intel(R) CPU   T2300  @ 1.66GHz
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov 
pat clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx constant_tsc pni monitor 
vmx est tm2 xtpr

Both machines are running Ubuntu Edgy, exact same software versions.

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Different results from repeated calculation

2007-01-27 Thread Stefan van der Walt
On Sat, Jan 27, 2007 at 04:00:33PM -0700, Charles R Harris wrote:
 Hmmm, and your problem machine is running smp linux. As is mine; fedora uses
 smp even on single processor machines these days. I think we could use more
 data here comparing

It runs fine on this Ubuntu/Edgy machine, though:

Linux genugtig 2.6.17-10-generic #2 SMP Tue Dec 5 21:16:35 UTC 2006 x86_64 
GNU/Linux
processor   : 0
vendor_id   : AuthenticAMD
model name  : AMD Athlon(tm) 64 X2 Dual Core Processor 4400+
processor   : 1
vendor_id   : AuthenticAMD
model name  : AMD Athlon(tm) 64 X2 Dual Core Processor 4400+
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov 
pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt lm 3dnowext 
3dnow up pni lahf_lm cmp_legacy

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Different results from repeated calculation

2007-01-27 Thread Stefan van der Walt
On Sat, Jan 27, 2007 at 03:11:58PM -0700, Charles R Harris wrote:
 So this looks like an error in the LSB of the floating number. Could be
 rounding, could be something not reset quite right. I'm thinking possibly
 hardware at this time, maybe compiler.
 
 Linux fedora 2.6.19-1.2895.fc6 #1 SMP Wed Jan 10 19:28:18 EST 2007 i686 athlon
 i386 GNU/Linux
 
 processor   : 0
 vendor_id   : AuthenticAMD

And just for the hell of it, with 4 CPUs :)

Linux dirac 2.6.17-10-generic #2 SMP Tue Dec 5 21:16:35 UTC 2006 x86_64 
GNU/Linux

processor   : 0
vendor_id   : AuthenticAMD
model name  : Dual Core AMD Opteron(tm) Processor 275

processor   : 1
vendor_id   : AuthenticAMD
model name  : Dual Core AMD Opteron(tm) Processor 275

processor   : 2
vendor_id   : AuthenticAMD
model name  : Dual Core AMD Opteron(tm) Processor 275

processor   : 3
vendor_id   : AuthenticAMD
model name  : Dual Core AMD Opteron(tm) Processor 275

flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov 
pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt lm 3dnowext 
3dnow up pni lahf_lm cmp_legacy

Works fine.

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] pyrex c_numpy.pyx problem

2007-01-31 Thread Stefan van der Walt
Hi,

Pyrex 0.9.5.1 doesn't like the following snippet out of c_numpyx.pyx:

ctypedef extern class numpy.broadcast [object PyArrayMultiIterObject]:
    cdef int numiter
    cdef npy_intp size, index
    cdef int nd
    cdef npy_intp dimensions[NPY_MAXDIMS]
    cdef flatiter iters[NPY_MAXDIMS]

which corresponds to

typedef struct {
    PyObject_HEAD
    int      numiter;                      /* number of iters */
    npy_intp size;                         /* broadcasted size */
    npy_intp index;                        /* current index */
    int      nd;                           /* number of dims */
    npy_intp dimensions[NPY_MAXDIMS];      /* dimensions */
    PyArrayIterObject *iters[NPY_MAXARGS]; /* iterators */
} PyArrayMultiIterObject;

in ndarrayobject.h.

I changed it to be

ctypedef extern class numpy.broadcast [object PyArrayMultiIterObject]:
    cdef int numiter
    cdef npy_intp size, index
    cdef int nd
    cdef npy_intp *dimensions
    cdef flatiter **iters

after which it complains

/tmp/pyrex/c_numpy.pxd:99:22: Pointer base type cannot be a Python object

so I changed the last line to

  cdef flatiter iters

That compiles, but it can't be right.

Any advice/suggestions?

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] classmethods for ndarray

2007-02-02 Thread Stefan van der Walt
On Thu, Feb 01, 2007 at 05:51:22PM -0700, Travis Oliphant wrote:
 Christopher Barker wrote:
 
 Travis Oliphant wrote:
   
 
 I'm thinking that we should have several.  For example all the fromXXX 
 functions should probably be classmethods
 
 ndarray.frombuffer
 ndarray.fromfile
 
 
 
 would they still be accessible in their functional form in the numpy 
 namespace?
 
   
 
 Yes, until a major revision at which point they could (if deemed useful) 
 be removed after a deprecation warning period.

That would be a happy day.  I'd love to see the numpy namespace go on
a diet...

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Please help with subclassing numpy.ndarray

2007-02-06 Thread Stefan van der Walt
On Tue, Feb 06, 2007 at 01:06:37PM +0100, Sturla Molden wrote:
 
  def __new__(cls, ...):
      ...
      (H, edges) = numpy.histogramdd(..)
      cls.__defaultedges = edges
 
  def __array_finalize__(self, obj):
      if not hasattr(self, 'edges'):
          self.edges = self.__defaultedges
 
 IMHO, the preferred way to set an instance attribute is to use __init__
 method, which is the 'Pythonic' way to do it.

I don't pretend to know all the inner workings of subclassing, but I
don't think that would work, given the following output:

In [1]: import numpy as N

In [3]: class MyArray(N.ndarray):
   ...:     def __new__(cls, data):
   ...:         return N.asarray(data).view(cls)
   ...:
   ...:     def __init__(self, obj):
   ...:         print "This is where __init__ is called"
   ...:
   ...:     def __array_finalize__(self, obj):
   ...:         print "This is where __array_finalize__ is called"
   ...:

In [4]: x = MyArray(3)
This is where __array_finalize__ is called
This is where __init__ is called

In [5]: y = N.array([1,2,3])

In [6]: x+y
This is where __array_finalize__ is called
Out[6]: MyArray([4, 5, 6])


Regards
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] force column vector

2007-02-07 Thread Stefan van der Walt
On Wed, Feb 07, 2007 at 10:35:14AM +, Christian wrote:
 Hi,
 
 when creating an ndarray from a list, how can I force the result to be
 2d *and* a column vector? So in case I pass a nested list, there will be no
 modification of the shape and when I pass a simple list, it will be 
 converted to a 2d column vector. I can only think of a solution using 'if'
 clauses but I suppose there is a more elegant way.

One way is to sub-class ndarray:

import numpy as N

class ColumnVectorArray(N.ndarray):
    def __new__(cls, data):
        data = N.asarray(data).view(cls)
        if len(data.shape) == 1:
            data.shape = (-1,1)
        return data

x = ColumnVectorArray([[1,2],[3,4],[5,6]])
print 'x ='
print x
print

y = ColumnVectorArray([1,2,3])
print 'y ='
print y
print

print 'x+y ='
print x+y

which yields:


x =
[[1 2]
 [3 4]
 [5 6]]

y =
[[1]
 [2]
 [3]]

x+y =
[[2 3]
 [5 6]
 [8 9]]


Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] getting indices for array positions

2007-02-07 Thread Stefan van der Walt
On Wed, Feb 07, 2007 at 02:00:52PM +0100, Christian Meesters wrote:
 This questions might seem stupid, but I didn't get a clever solution myself, 
 or found one in the archives, the cookbook, etc. . If I overlooked something, 
 please give a pointer.
 
 Well, if I have an 1D array like
 [ 0. ,  0.1,  0.2,  0.3,  0.4,  0.5]
 ,a scalar like 0.122 and want to retrieve the index postion of the closest 
 value of the scalar in the array: Is there any fast method to get
 this?

If I understand correctly:

import numpy as N

data = N.array([0., 0.1, 0.2, 0.3, 0.4, 0.5])
val = 0.122
diff = N.abs(data - val)
print N.argmin(diff)

Regards
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] fromstring, tostring slow?

2007-02-13 Thread Stefan van der Walt
On Tue, Feb 13, 2007 at 11:42:35AM -0800, Mark Janikas wrote:
 I am finding that directly packing numpy arrays into binary using the tostring
 and fromstring methods does not provide a speed improvement over writing the
 same arrays to ascii files.  Obviously, the size of the resulting files is far
 smaller, but I was hoping to get an improvement in the speed of writing.  I
 got that speed improvement using the struct module directly, or by using
 generic python arrays.  Let me further describe my methodological issue as it
 may directly relate to any solution you might have.

Hi Mark

Can you post a benchmark code snippet to demonstrate your results?
Here, using 1.0.2.dev3545, I see:

In [26]: x = N.random.random(100)

In [27]: timeit f = file('/tmp/blah.dat','w'); f.write(str(x))
100 loops, best of 3: 1.77 ms per loop

In [28]: timeit f = file('/tmp/blah','w'); x.tofile(f)
10000 loops, best of 3: 100 µs per loop

(I see the same results for heterogeneous arrays)

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] fromstring, tostring slow?

2007-02-13 Thread Stefan van der Walt
On Tue, Feb 13, 2007 at 04:02:10PM -0800, Mark Janikas wrote:
 Yes, but does the code have the same license as NumPy?  As I work
 for a software company, where I help with the scripting interface, I
 must make sure everything I use is cited and has the appropriate
 license.  

Yes, Scipy and Numpy are both released under BSD licenses.

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Greek Letters

2007-02-20 Thread Stefan van der Walt
On Tue, Feb 20, 2007 at 05:29:25PM -0800, Mark Janikas wrote:
 Oh.  I am using CygWin, and the website I just went to:
 
 http://www.cygwin.com/faq/faq_3.html
 
 
 stated that: "The short answer is that Cygwin is not Unicode-aware"
 
 Not sure if this is going to apply to python in general, but I
 suspect it will.  Ugh, I dislike Windows a lot, but it pays the bills.

Actually you pay Bill :)

Maybe try the free vmplayer with a linux session?

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] dumb question about creating a complex array

2007-02-22 Thread Stefan van der Walt
Hi Matthew

On Thu, Feb 22, 2007 at 02:50:16PM -0800, Mathew Yeates wrote:
 given an array of floats, 2N columns and M rows, where the elements 
 A[r,2*j] and A[r,2*j+1] form the real and imaginary parts of a complex 
 number ... What is the simplest way to create a complex array? It's 
 a fairly large array so I want to keep copying to a minimum.
 
 (Actually, it's not a float array, its elements are byte sized, in case 
 that matters)

That's a bit problematic, since numpy doesn't have Complex16.  If you
had two 32-bit floats, you could do:

In [14]: x = N.array([10,10,20,20],dtype=N.float32)

In [15]: x.view(N.complex64)
Out[15]: array([ 10.+10.j,  20.+20.j], dtype=complex64)

As things stand, you may have to copy:

In [16]: x = N.array([10,10,20,20],dtype=N.uint8)

In [19]: x.astype(N.float32).view(N.complex64)
Out[19]: array([ 10.+10.j,  20.+20.j], dtype=complex64)

Maybe someone else comes up with a better answer.

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] dumb question about creating a complex array

2007-02-22 Thread Stefan van der Walt
On Thu, Feb 22, 2007 at 02:50:16PM -0800, Mathew Yeates wrote:
 given an array of floats, 2N columns and M rows, where the elements 
 A[r,2*j] and A[r,2*j+1] form the real and imaginary parts of a complex 
 number ... What is the simplest way to create a complex array? It's 
 a fairly large array so I want to keep copying to a minimum.
 
 (Actually, it's not a float array, its elements are byte sized, in case 
 that matters)

One other solution may be to use record arrays:

In [40]: x = N.array([[10,10,20,20],[20,20,30,30]],N.uint8)

In [41]: y = x.view([('real',N.uint8),('imag',N.uint8)]).reshape(x.shape[0],-1)

In [42]: y['real']
Out[42]: 
array([[10, 20],
   [20, 30]], dtype=uint8)

In [43]: y['imag']
Out[43]: 
array([[10, 20],
   [20, 30]], dtype=uint8)

Then you can at least do some arithmetic manually without any copying
of data taking place.

Regards
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] nd_image.affine_transform edge effects

2007-03-14 Thread Stefan van der Walt
Hi James

On Fri, Mar 09, 2007 at 08:44:34PM -0300, James Turner wrote:
 Last year I wrote a program that uses the affine_transform()
 function in numarray to resample and co-add datacubes with WCS
 offsets in 3D. This function makes it relatively easy to handle
 N-D offsets and rotations with a smooth interpolant, which is
 exactly what I wanted. However, I am finding that the results
 contain edge effects which translate into artefacts in the
 middle of the final mosaic.

Is this related to

http://projects.scipy.org/scipy/scipy/ticket/213

in any way?

Code snippets to illustrate the problem would be welcome.

Regards
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] zoom FFT with numpy?

2007-03-16 Thread Stefan van der Walt
Here is another implementation of the Chirp-Z transform in Python,
that handles complex numbers.

Regards
Stéfan

On Thu, Mar 15, 2007 at 11:53:49AM +0200, Nadav Horesh wrote:
 A long time ago I translated a free code of chirp z transform (zoom fft) into
 python.
 Attached here the source and two translations (probably one of the is right)
 
   Nadav.
 
 On Wed, 2007-03-14 at 14:02 -0800, Ray S wrote:
 
 We'd like to do what most call a zoom FFT; we only are interested
 in the frequencies of say, 6kHZ to 9kHz with a given N, and so the
 computations from DC to 6kHz are wasted CPU time.
 Can this be done without additional numpy pre-filtering computations?
"""Chirp z-Transform.

As described in

Rabiner, L.R., R.W. Schafer and C.M. Rader.
The Chirp z-Transform Algorithm.
IEEE Transactions on Audio and Electroacoustics, AU-17(2):86--92, 1969

"""

import numpy as np

def chirpz(x, A, W, M):
    """Compute the chirp z-transform.

    The discrete z-transform,

    X(z) = \sum_{n=0}^{N-1} x_n z^{-n}

    is calculated at M points,

    z_k = AW^-k, k = 0,1,...,M-1

    for A and W complex, which gives

    X(z_k) = \sum_{n=0}^{N-1} x_n z_k^{-n}

    """
    A = np.complex(A)
    W = np.complex(W)
    if np.issubdtype(np.complex, x.dtype) or np.issubdtype(np.float, x.dtype):
        dtype = x.dtype
    else:
        dtype = float

    x = np.asarray(x, dtype=np.complex)

    N = x.size
    L = int(2**np.ceil(np.log2(M + N - 1)))

    n = np.arange(N, dtype=float)
    y = np.power(A, -n) * np.power(W, n**2 / 2.) * x
    Y = np.fft.fft(y, L)

    v = np.zeros(L, dtype=np.complex)
    v[:M] = np.power(W, -n[:M]**2/2.)
    v[L-N+1:] = np.power(W, -n[N-1:0:-1]**2/2.)
    V = np.fft.fft(v)

    g = np.fft.ifft(V*Y)[:M]
    k = np.arange(M)
    g *= np.power(W, k**2 / 2.)

    return g
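
A quick sanity check (added here for illustration, not part of the original
attachment): with A = 1 and W = exp(-2j*pi/N) the points z_k trace the unit
circle, so the chirp z-transform reduces to the ordinary DFT and should
agree with np.fft.fft:

x = np.random.randn(64)
N = x.size
assert np.allclose(chirpz(x, 1., np.exp(-2j * np.pi / N), N),
                   np.fft.fft(x))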
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Little module to get numpy examples

2007-03-21 Thread Stefan van der Walt
On Wed, Mar 21, 2007 at 02:59:06PM -0500, eric jones wrote:
 Just looked at this...  Now that is just cool. 
 
 I'd say it should be part of Numpy.

Very useful!  A file cache would be handy, and can be implemented
using the checksum of the page from

http://www.scipy.org/Numpy_Example_List?action=info&general=1

Is the goal to eventually merge these docs into numpy, or to keep them
separate so that they can easily be updated by anyone?  I'm somewhat
apprehensive of having code in numpy that displays uncensored
information directly from the wiki.

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] nd_image.affine_transform edge effects

2007-03-22 Thread Stefan van der Walt
On Thu, Mar 22, 2007 at 02:41:52PM -0400, Anne Archibald wrote:
 On 22/03/07, James Turner [EMAIL PROTECTED] wrote:
 
  So, it's not really a bug, it's an undesired feature...
 
 It is curable, though painful - you can pad the image out, given an
 estimate of the size of the window. Yes, this sucks.

I would rather opt for changing the spline fitting algorithm than for
padding with zeros.

Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] concatenating 1-D arrays to 2D

2007-03-22 Thread Stefan van der Walt
On Thu, Mar 22, 2007 at 08:13:22PM -0400, Brian Blais wrote:
 Hello,
 
 I'd like to concatenate a couple of 1D arrays to make it a 2D array, with two 
 columns
 (one for each of the original 1D arrays).  I thought this would work:
 
 
 In [47]:a=arange(0,10,1)
 
 In [48]:a
 Out[48]:array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
 
 In [49]:b=arange(-10,0,1)
 
 In [51]:b
 Out[51]:array([-10,  -9,  -8,  -7,  -6,  -5,  -4,  -3,  -2,  -1])
 
 In [54]:concatenate((a,b))
 Out[54]:
 array([  0,   1,   2,   3,   4,   5,   6,   7,   8,   9, -10,  -9,  -8,
  -7,  -6,  -5,  -4,  -3,  -2,  -1])
 
 In [55]:concatenate((a,b),axis=1)
 Out[55]:
 array([  0,   1,   2,   3,   4,   5,   6,   7,   8,   9, -10,  -9,  -8,
  -7,  -6,  -5,  -4,  -3,  -2,  -1])
 
 
 but it never expands the dimensions.  Do I have to do this...
 
 In [65]:concatenate((a.reshape(10,1),b.reshape(10,1)),axis=1)
 Out[65]:
 array([[  0, -10],
 [  1,  -9],
 [  2,  -8],
 [  3,  -7],
 [  4,  -6],
 [  5,  -5],
 [  6,  -4],
 [  7,  -3],
 [  8,  -2],
 [  9,  -1]])
 
 
 ?
 
 I thought there would be an easier way.  Did I overlook something?

How about

N.vstack((a,b)).T
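
For what it's worth, numpy also provides a helper that builds the same
result in one step:

N.column_stack((a, b))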

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] nd_image.affine_transform edge effects

2007-03-22 Thread Stefan van der Walt
On Thu, Mar 22, 2007 at 04:33:53PM -0700, Travis Oliphant wrote:
 I would rather opt for changing the spline fitting algorithm than for
 padding with zeros.
   
 
  From what I understand, the splines used in ndimage have the implicit 
 mirror-symmetric boundary condition which also allows them to be 
 computed rapidly.  There may be ways to adapt other boundary conditions 
 and maintain rapid evaluation, but it is not trivial as far as I know.  
 Standard spline-fitting allows multiple boundary conditions because 
 matrix inversion is used.  I think the spline-fitting done in ndimage 
 relies on equal-spacing and mirror-symmetry to allow simple IIR filters 
 to be used to compute the spline coefficients very rapidly.

Thanks, Travis.  I wasn't aware of these restrictions.

Would it be possible to call fitpack to do the spline fitting?  I
noticed that it doesn't exhibit the same mirror-property:

In [24]: z = scipy.interpolate.splrep([0,1,2,3,4],[0,4,3,2,1])

In [25]: scipy.interpolate.splev([0,1,2,3,4,5],z)
Out[25]: 
array([ -1.32724622e-16,   4.00000000e+00,   3.00000000e+00,
         2.00000000e+00,   1.00000000e+00,  -1.25000000e+00])

Regards
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] concatenating 1-D arrays to 2D

2007-03-23 Thread Stefan van der Walt
On Fri, Mar 23, 2007 at 11:09:03AM -0400, Robert Pyle wrote:
  In [65]:concatenate((a.reshape(10,1),b.reshape(10,1)),axis=1)
  Out[65]:
  array([[  0, -10],
  [  1,  -9],
  [  2,  -8],
  [  3,  -7],
  [  4,  -6],
  [  5,  -5],
  [  6,  -4],
  [  7,  -3],
  [  8,  -2],
  [  9,  -1]])
 
 
  ?
 
  I thought there would be an easier way.  Did I overlook something?
 
 What's wrong with zip? Or did *I* miss the point?  (I'm just getting  
 the hang of numpy.)

If you use 'zip' you don't make use of numpy's fast array mechanisms.
I attach some code you can run as a benchmark.  From my ipython
session:

In [1]: run vsbench.py

In [2]: timeit using_vstack(x,y)
1000 loops, best of 3: 997 µs per loop

In [3]: timeit using_zip(x,y)
10 loops, best of 3: 503 ms per loop

In [4]: timeit using_custom_iteration(x,y)
1000 loops, best of 3: 1.64 ms per loop

Cheers
Stéfan
import numpy as N

x = N.random.random(100000)
y = N.random.random(100000)

def using_vstack(*args):
    return N.vstack(args).T

def using_zip(*args):
    return N.array(zip(*args))

def using_custom_iteration(*args):
    x = N.empty((len(args[0]), len(args)))
    for i, vals in enumerate(args):
        x[:,i] = vals
    return x

check = N.vstack((x,y)).T
assert N.all(using_vstack(x,y) == check)
assert N.all(using_zip(x,y) == check)
assert N.all(using_custom_iteration(x,y) == check)
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] nd_image.affine_transform edge effects

2007-03-24 Thread Stefan van der Walt
On Sat, Mar 24, 2007 at 01:41:21AM -0400, James Turner wrote:
 That's hard to say. Just because it's mainly a continuous-tone image
 doesn't necessarily mean it is well sampled everywhere. This depends
 both on the subject and the camera optics. Unlike the data I usually
 work with, I think everyday digital photographs (probably a photo
 scan in the case of Lena) do not generally have the detector sampling
 frequency matched to the optical resolution of the image. If that's
 true, the presence of aliasing in interpolated images depends on the
 structure of the subject and whether the scene has edges or high-
 frequency patterns in it.

Agreed, but the aliasing effect isn't the problem here, as it would be
visible in the input image as well.  I'd expect a third-order spline
interpolation to be smoother than a first-order interpolant, but in the
resulting images this isn't the case.

See

http://mentat.za.net/results/lena_small.png
http://mentat.za.net/results/img_rot_30_1.png (1st order spline)
http://mentat.za.net/results/img_rot_30_3.png (3rd order spline)

 Lena has been decimated (reduced in size) prior to the rotation. That
 is definitely a good way to get artefacts, unless an anti-aliasing
 filter is applied before shrinking the image. My impression is that
 this image is probably somewhat undersampled (to understand exactly
 what that means, read up on the Sampling Theorem).

The artefacts aren't visible in the source image (url above).  The
image definitely is a scaled down version of the original Lena -- very
interesting, btw, see

http://www.cs.cmu.edu/~chuck/lennapg/lenna.shtml

 investigating it further. One experiment might be to blur the original
 Lena with a Gaussian whose sigma is 1 pixel of the shrunken image
 before actually shrinking her, then do the rotation.

A rotation should take place without significant shifts in colour.
This almost looks like a value overflow problem.

 So I do wonder if the algorithm in nd_image is making this worse
 than it needs to be.

That is my suspicion, too.

 compare the results? I just tried doing a similar rotation in PyRAF on
 a monochrome image with a bicubic spline, and see considerably smaller
 artefacts (just a compact overshoot of probably a few % at the
 edge).

Could you apply the PyRAF rotation on the Lena given above and post
the result?

I always thought we could simply revert to using bilinear and bicubic
polynomial interpolation (instead of spline interpolation), but now I
read on wikipedia:


In the mathematical subfield of numerical analysis, spline
interpolation is a form of interpolation where the interpolant is a
special type of piecewise polynomial called a spline. Spline
interpolation is preferred over polynomial interpolation because the
interpolation error can be made small even when using low degree
polynomials for the spline. Thus, spline interpolation avoids the
problem of Runge's phenomenon which occurs when using high degree
polynomials.


http://en.wikipedia.org/wiki/Spline_interpolation

also take a look at

http://en.wikipedia.org/wiki/Runge%27s_phenomenon

So much for side-stepping Runge's phenomenon :)

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] nd_image.affine_transform edge effects

2007-03-24 Thread Stefan van der Walt
On Sat, Mar 24, 2007 at 03:25:38PM -0700, Zachary Pincus wrote:
 If Lena is converted to floating-point before the rotation is  
 applied, and then the intensity range is clipped to [0,255] and  
 converted back to uint8 before saving, everything looks fine.

Thanks, Zachary!  I can confirm that.

 So, is this a bug? Well, I still think so. Given that small ringing  
 is going to happen on all but the very smoothest images, and given  
 that ndimage is going to be used on non-floating-point types, it  
 would be good if there were some explicit internal clipping to the  
 data type's range. Otherwise, the ndimage resampling tools are unfit  
 for use on non-floating-point data that resides near the edges of the  
 range of the data type.

I agree.

 Though I'm not quite sure how one would structure the calculations so  
 that it would be possible to tell when over/underflow happened... it  
 might not be possible. In which case, either the tools should use  
 floating-point math at some of the steps internally (as few as  
 possible) before clipping and converting to the required data type,  
 or explicit warnings should be added to the documentation.

I think the spline interpolation already uses floating point math?
Looks like we are seeing a type conversion without range checking:

In [47]: x = N.array([1.2,200,255,255.6,270])

In [48]: x.astype(N.uint8)
Out[48]: array([  1, 200, 255, 255,  14], dtype=uint8)

I'll have to go and take a look at the code, but this shouldn't be
hard to fix -- clipping is fairly fast (don't we have a fast clipping
method by David C now?), even for extremely large images.
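
A minimal sketch of the safe conversion (clip to the target range first,
then cast):

import numpy as N

x = N.array([1.2, 200, 255, 255.6, 270])
# out-of-range values saturate at 255 instead of wrapping around to 14
y = x.clip(0, 255).astype(N.uint8)   # array([  1, 200, 255, 255, 255], dtype=uint8)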

Regards
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] dtype confusion

2007-03-25 Thread Stefan van der Walt
On Sun, Mar 25, 2007 at 04:09:11AM -0700, Jan Strube wrote:
 There seems to be a fundamental lack of understanding on my behalf when it
 comes to dtypes and record arrays.
 Please consider the following snippet:
 
 import numpy as N
 newtype = N.dtype([('x', N.float64 ), ('y', N.float64), ('z', N.float64)])
 a = N.random.random((100,3))
 a.dtype=newtype
 b = N.column_stack([a['x'].ravel(), a['y'].ravel(), a['z'].ravel()])
 b.dtype = newtype
 -- ValueError: new type not compatible with array.
 
 I don't understand two things about this:
 i) the shape of a changes from (100,3) to (100,1) after assigning
 the dtype.

Every group of three floats now becomes one element in the new array,
i.e.

float64 -> (float64, float64, float64)

 ii) the shape of b is obviously (100,3), so why can't I assign the
 new dtype?

The array is no longer a contiguous block of memory, so the new dtype
can't be applied:

In [23]: b.flags
Out[23]: 
  C_CONTIGUOUS : False
  F_CONTIGUOUS : True
  OWNDATA : False
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False

The following does a copy of the array to contiguous memory:

In [24]: N.ascontiguousarray(b).dtype = newtype

If you want to move back to the original view you can do

b.view(N.float64)

Regards
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Fixed scalar coercion model in NumPy

2007-03-26 Thread Stefan van der Walt
On Mon, Mar 26, 2007 at 01:52:28PM -0700, Travis Oliphant wrote:
 I really do want to get 1.0.2 out the door soon.  What still needs to be 
 fixed before then?

The code in ticket 469 still causes a memory error, so that might be
worth fixing.

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] two previously unresolved issues

2007-03-26 Thread Stefan van der Walt
Hi,

I just went through my mail archive and found these two minor
outstanding issues.  Thought I'd ask for comments before the new
release:


From: Charles R Harris [EMAIL PROTECTED]
Subject: Re: [Numpy-discussion] Assign NaN, get zero

On 11/11/06, Lisandro Dalcin [EMAIL PROTECTED] wrote:

 On 11/11/06, Stefan van der Walt [EMAIL PROTECTED] wrote:
  NaN (or inf) is a floating point number, so seeing a zero in integer
  representation seems correct:
 
  In [2]: int(N.nan)
  Out[2]: 0L
 

 Just to learn myself: Why int(N.nan) should be 0? Is it C behavior?

In [1]: int32(0)/int32(0)
Warning: divide by zero encountered in long_scalars
Out[1]: 0

In [2]: float32(0)/float32(0)
Out[2]: nan

In [3]: int(nan)
Out[3]: 0L

I think it was just a default for numpy. Hmmm, numpy now warns on integer
division by zero, didn't used to.  Looks like a warning should also be
raised when casting nan to integer. It is probably a small bug not to. I
also suspect int(nan) should return a normal python zero, not 0L.




From: Bill Baxter [EMAIL PROTECTED]
To: numpy-discussion@scipy.org
Subject: [Numpy-discussion] linalg.lstsq for complex

Is this code from linalg.lstsq for the complex case correct?

    lapack_routine = lapack_lite.zgelsd
    lwork = 1
    rwork = zeros((lwork,), real_t)
    work = zeros((lwork,), t)
    results = lapack_routine(m, n, n_rhs, a, m, bstar, ldb, s, rcond,
                             0, work, -1, rwork, iwork, 0)
    lwork = int(abs(work[0]))
    rwork = zeros((lwork,), real_t)
    a_real = zeros((m, n), real_t)
    bstar_real = zeros((ldb, n_rhs,), real_t)
    results = lapack_lite.dgelsd(m, n, n_rhs, a_real, m,
                                 bstar_real, ldb, s, rcond,
                                 0, rwork, -1, iwork, 0)
    lrwork = int(rwork[0])
    work = zeros((lwork,), t)
    rwork = zeros((lrwork,), real_t)
    results = lapack_routine(m, n, n_rhs, a, m, bstar, ldb, s, rcond,

The middle call to dgelsd looks unnecessary to me.  At the very least,
allocating a_real and bstar_real shouldn't be necessary since they
aren't referenced anywhere else in the lstsq function.  The lapack
documentation for zgelsd also doesn't mention any need to call dgelsd
to compute the size of the work array.



Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] nd_image.affine_transform edge effects

2007-03-28 Thread Stefan van der Walt
Hi,

I notice now that we've been having this discussion on the wrong list
-- oops!  We're nearly done, though.

On Mon, Mar 26, 2007 at 04:16:51AM -0400, James Turner wrote:
 For what it's worth, I'd agree with both of you that the numeric
 overflow should be documented if not fixed. It sounds like Stefan has
 figured out a solution for it though. If you make sense of the code in
 ni_interpolation.c, Stefan, I'd be very interested in how to make it
 calculate one less value at the edges :-).

I fixed the overflow issue in SVN.  I'd appreciate it if you could
test and let me know whether everything works as expected.  The output
values are now simply clipped to the bounds of the numeric type,
i.e. UInt8 to [0,255] etc.

As for the values at the edges, I'm still working on it.

Since we have a fundamental problem with the spline filter approach at
the borders, we may also want to look at subdivision interpolation
schemes, like the one described in this article:

http://citeseer.ist.psu.edu/devilliers03dubucdeslauriers.html

Regards
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] nd_image.affine_transform edge effects

2007-03-28 Thread Stefan van der Walt
On Wed, Mar 28, 2007 at 05:14:59PM +0200, Stefan van der Walt wrote:
 As for the values at the edges, I'm still working on it.

OK, that was a one-line patch.  Please test to see if there are any
subtle conditions on the border that I may have missed.  I know of one
already, but I'd be glad if you can find any others :)

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] matrix indexing question (final post)

2007-03-28 Thread Stefan van der Walt
On Wed, Mar 28, 2007 at 07:05:00PM -0500, Alan Isaac wrote:
 On Wed, 28 Mar 2007, Stefan van der Walt wrote: 
  Matrices strike me as a bit of an anomaly. I would expect 
  an N-dimensional container to contain (N-1)-dimensional 
  objects. 
 
 Yes indeed.

Doesn't seem to be the way the matrix world works though:

octave:2> x = zeros(3,3,5);
octave:3> size(x)
ans =

  3  3  5

octave:4> size(x(:,:,1))
ans =

  3  3

octave:5> size(x(:,1,1))
ans =

  3  1
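
For comparison, the corresponding numpy behaviour (an illustrative
session, added here):

In [1]: import numpy as N

In [2]: x = N.zeros((3,3,5))

In [3]: x[:,:,0].shape
Out[3]: (3, 3)

In [4]: x[:,0,0].shape
Out[4]: (3,)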

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] 2D Arrays column operations

2007-03-29 Thread Stefan van der Walt
On Thu, Mar 29, 2007 at 07:43:03PM -, Simon Berube wrote:
 Hi,
I am relatively new to Python/NumPy switching over from Matlab and
 while porting some of my matlab code for practice I ran into the
 following problem.
 
 Assume we have a 2D Matrix such that
 a = array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
 
 If I want the second row I can simply enough take
 
 c = a[1]
 
 However, I would like to do a similar operation on the columns of the
 2D Array. In matlab I could simply do
 
 c = a(:,2) to get the values array([2,5,8])
 
 In numPy this seems to not be a valid operation. I understand that

Not?

In [2]: a = array([[1, 2, 3],
   ...:[4, 5, 6],
   ...:[7, 8, 9]])

In [3]: a[:,1]
Out[3]: array([2, 5, 8])

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] test_multiarray.test_clip fails on Solaris 8 system

2007-04-01 Thread Stefan van der Walt
Hi Chris

Would you please run the following commands and show their output?

import sys
print sys.byteorder

import numpy as N
print N.array([1,2,3], N.dtype(N.int16).newbyteorder('>')).dtype.byteorder
print N.array([1,2,3], N.dtype(N.int16).newbyteorder('<')).dtype.byteorder
print N.array([1,2,3], N.dtype(N.int16).newbyteorder('=')).dtype.byteorder

Output on my little-endian system is

little
>
<
=

and I'd be curious to see if the output on a big-endian system follows
the same pattern.

I'd expect

big
>
<
=

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] nd_image.affine_transform edge effects

2007-04-05 Thread Stefan van der Walt
Hi James

On Wed, Apr 04, 2007 at 08:29:50PM -0400, James Turner wrote:
   It looks like the last output value is produced by reflecting the
   input and then interpolating, but presumably then the first value
   should be 3.9, for consistency, not 3.1? Does that make sense?
 
 Aargh. I think I see what's happening now. The input is supposed to
 be interpolated and then reflected like this:
 
   [4  3  2  1]  ->  [3.1  3.1  2.1  1.1  1.1]
 
 The problem is that there is still one value too many being
 interpolated here before the reflection takes place. Do the
 sections beginning at lines 168 & 178 need changing in a similar way
 to their counterparts at lines 129 & 139? I started looking into
 this, but I don't understand the code well enough to be sure I'm
 making the right changes...

Thanks for spotting that.  When I fix those lines, I see:

[[ 3.901   3.099   2.099   1.1002  1.8998  2.901 ]
 [ 3.901   3.099   2.099   1.1002  1.8998  2.901 ]]

I'll submit to SVN later today.  Note that I also enabled 'mirror'
mode, which works almost the same way as reflect:

Reflect: 1 2 3 4 -> 1 2 3 4 4 3 2 1
Mirror:  1 2 3 4 -> 1 2 3 4 3 2 1

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] newbie question - large dataset

2007-04-07 Thread Stefan van der Walt
On Sat, Apr 07, 2007 at 02:48:47PM -0400, Anne Archibald wrote:
 If none of those algorithmic improvements are possible, you can look
 at other possibilities for speeding things up (though the speedups
 will be modest). Parallelism is an obvious one - if you've got a
 multicore machine you may be able to cut your processing time by a
 factor of the number of cores you have available with minimal effort
 (for example by replacing a for loop with a simple foreach,
 implemented as in the attached file).

Would this code speed things up under Python?  I was under the
impression that there is only one process, irrespective of whether or
not threads are used, and that the global interpreter lock is used
when swapping between threads to make sure that only one executes at
any instance in time.

If my above understanding is correct, it would be better to use a
multi-process engine like IPython1.
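
Something like the foreach below (a sketch, not Anne's actual
attachment, which isn't reproduced in this archive) illustrates the
idea.  Note that with the GIL it only helps when f spends its time in
code that releases the GIL, e.g. numpy's C routines:

import threading

def foreach(f, items, nthreads=2):
    # Each worker pulls items off a shared iterator and applies f.
    it = iter(items)
    lock = threading.Lock()

    def worker():
        while True:
            with lock:
                try:
                    item = next(it)
                except StopIteration:
                    return
            f(item)

    threads = [threading.Thread(target=worker) for _ in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()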

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] detecting shared data

2007-04-11 Thread Stefan van der Walt
On Wed, Apr 11, 2007 at 06:12:16PM -0400, Matthew Koichi Grimes wrote:
 Is there any way to detect whether one array is a view into another 
 array? I'd like something like:
 
   >>> arr = N.arange(5)
   >>> subarr = arr[1:3]
   >>> sharesdata(arr, subarr)
   True

Your best bet is probably

N.may_share_memory(arr,subarr)
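
For the arrays above, a quick check looks like this (a sketch; note
that the function is conservative, and may claim sharing where there
is none):

import numpy as N

arr = N.arange(5)
subarr = arr[1:3]
print N.may_share_memory(arr, subarr)      # True: subarr is a view
print N.may_share_memory(arr, arr.copy())  # False: the copy owns its data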

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] detecting shared data

2007-04-12 Thread Stefan van der Walt
On Wed, Apr 11, 2007 at 11:06:13PM -0400, Anne Archibald wrote:
 On 11/04/07, Bill Baxter [EMAIL PROTECTED] wrote:
 
 Must be pretty recent.  I'm using 1.0.2.dev3520 (enthought egg) and
 the function's not there.
 
 It is.
 
 I've never been quite happy with it, though; I realize it's not very
 feasible to write one that efficiently checks all possible overlaps,
 but the current one claims (e.g.) a.real and a.imag may share memory,
 which irks me. I put in a quick fix. Also, may_share_memory did not
 have any tests (for shame!), so I put some of those in too. I wasn't
 sure how to express expected failure, and they're not the most
 thorough, but they should help.

Thank you for taking the time to write those tests!

Failures may be expressed using

NumpyTestCase.failIf(self, expr, msg=None)

Regards
Stéfan
___
Numpy-discussion mailing list
[EMAIL PROTECTED]
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] detecting shared data

2007-04-12 Thread Stefan van der Walt
On Thu, Apr 12, 2007 at 11:29:31AM -0400, Anne Archibald wrote:
  Failures may be expressed using
 
  NumpyTestCase.failIf(self, expr, msg=None)
 
 That's not quite what I mean. There are situations, with the current
 code, that it gets the answer wrong (i.e., claims arrays may share
 memory when they don't). I know, and it's okay, and if it doesn't
 there's a bug, but in view of possible future enhancements, I don't
 want to signal an actual failure if it starts working. I do want to
 test it though, so I was hoping there was a way to express I expect
 this test to fail, notify me if it doesn't, but don't call it a
 failure if it starts working.

If the test is supposed to pass but currently fails, we can always add
it using level=50 or so.  That way, most people who run the tests will
not see the failure, but the devs may still choose to run them.  We
could even display a warning when running the test suite with such a
high level.

Would such a scheme cause problems for anyone?  An alternative would
be to rework the test gatherer to filter tests based on some flag.

Cheers
Stéfan
___
Numpy-discussion mailing list
[EMAIL PROTECTED]
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] ScipyTest Warning?

2007-04-24 Thread Stefan van der Walt
Hi Mark

On Tue, Apr 24, 2007 at 07:28:35AM -, mark wrote:
 I have a piece of code that works fine for me, but a friend tries to
 run it and gets this warning.
 He claims to have updated his Python (2.4), Scipy and numpy.
 Does anybody know what import triggers this Warning? I didn't think I
 imported ScipyTest.

This has been fixed in the latest versions of numpy/scipy.  There is
no need to be concerned, though, since it has no impact on the working
of your code.

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy endian question

2007-04-27 Thread Stefan van der Walt
On Thu, Apr 26, 2007 at 05:22:42PM -0400, Christopher Hanley wrote:
 This should work as a consistent test for bigendian:
 
 - isBigEndian = (obj.dtype.str[0] == '>')

Is this test always safe, even on big endian machines?  Couldn't the
dtype.str[0] sometimes be '='?

Regards
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] simpliest way to check: array x is float, not integer

2007-05-01 Thread Stefan van der Walt
On Tue, May 01, 2007 at 12:05:20PM -, Simon Berube wrote:
 Alternatively, as a hackjob type check you could also do an
 isinstance check on the first element of the array since, unlike
 lists, arrays have uniform elements all the way through.

Or use

N.issubdtype(x.dtype,int)  and  N.issubdtype(x.dtype,float)

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] ctypes TypeError. What I am doing wrong?

2007-05-01 Thread Stefan van der Walt
On Wed, May 02, 2007 at 12:38:12AM +0200, Guillem Borrell i Nogueras wrote:
 lapack.argtypes=[c_int,c_int,
  ndpointer(dtype=float64,
ndim=2,
flags='FORTRAN'),
  c_int,c_int,
  ndpointer(dtype=float64,
ndim=1,
flags='FORTRAN'),
  c_int,c_int]

This also isn't correct, according to the dgesv documentation.  It
should be

lapack.dgesv_.argtypes=[POINTER(c_int), POINTER(c_int),
                        ndpointer(dtype=np.float64,
                                  ndim=2,
                                  flags='FORTRAN'),
                        POINTER(c_int), POINTER(c_int),
                        ndpointer(dtype=np.float64,
                                  ndim=2,
                                  flags='FORTRAN'),
                        POINTER(c_int), POINTER(c_int)]

I attach a working version of your script.

Cheers
Stéfan
from ctypes import c_int, POINTER
import numpy as np
from numpy.ctypeslib import load_library, ndpointer

def dgesv(N, A, B):
    A = np.asfortranarray(A.astype(np.float64))
    B = np.asfortranarray(B.astype(np.float64))

    cN = c_int(N)
    NRHS = c_int(1)
    LDA = c_int(N)
    IPIV = (c_int * N)()
    LDB = c_int(N)
    INFO = c_int(1)

    lapack = load_library('liblapack.so', '/usr/lib/')

    lapack.dgesv_.argtypes = [POINTER(c_int), POINTER(c_int),
                              ndpointer(dtype=np.float64,
                                        ndim=2,
                                        flags='FORTRAN'),
                              POINTER(c_int), POINTER(c_int),
                              ndpointer(dtype=np.float64,
                                        ndim=2,
                                        flags='FORTRAN'),
                              POINTER(c_int), POINTER(c_int)]

    lapack.dgesv_(cN, NRHS, A, LDA, IPIV, B, LDB, INFO)
    return B

print dgesv(2, np.array([[1, 2], [3, 4]]), np.array([[1, 2]]))
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Difference in the number of elements in a fromfile() between Windows and Linux

2007-05-04 Thread Stefan van der Walt
Hi Matthieu

On Fri, May 04, 2007 at 09:16:34AM +0200, Matthieu Brucher wrote:
 I'm trying to test my code on several platforms, Windows and Linux, and I'm
 using some data files that where saved with a tofile(sep=' ') under Linux.
 Those files can be loaded without a problem under Linux, but under Windows 
 with
 the latest numpy, these data cannot be loaded, some numbers are not considered
 - eg +inf or -inf).
 Is this a known behaviour ? How could I load these correctly under both
 platforms (I don't want to save them in binary form, I'm using the files for
 other purpose -

Please file a ticket at

http://projects.scipy.org/scipy/numpy/newticket

along with a short code snippet to reproduce the problem.  That way we
won't forget about it.

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Difference in the number of elements in a fromfile() between Windows and Linux

2007-05-04 Thread Stefan van der Walt
On Fri, May 04, 2007 at 09:44:02AM -0700, Christopher Barker wrote:
 Matthieu Brucher wrote:
  Example of the first line of my data file :
  0.0 inf 13.9040914426 14.7406669444 inf 4.41783247603 inf inf 
  6.05071515635 inf inf inf 15.6925185021 inf inf inf inf inf inf inf
 
 I'm pretty sure fromfile() is using the standard C fscanf(). That means 
 that whether in understands inf depends on the C lib. I'm guessing 
 that the MS libc doesn't understand the same spelling of inf that the 
 gcc one does. There may indeed be no literal for the IEEE Inf.

It would be interesting to see how Inf and NaN (vs. inf and nan) are
interpreted under Windows.

Are there any free fscanf implementations out there that we can
include with numpy?

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] howto make from flat array (1-dim) 2-dimensional?

2007-05-13 Thread Stefan van der Walt
On Sun, May 13, 2007 at 07:46:47AM -0400, Darren Dale wrote:
 On Sunday 13 May 2007 7:36:39 am dmitrey wrote:
  i.e. for example from flat array [1, 2, 3] obtain
  array([[ 1.],
 [ 2.],
 [ 3.]])
 
 a=array([1,2,3])
 a.shape=(len(a),1)

Or just

a.shape = (-1,1)

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] NumPy 1.0.3 release next week

2007-05-13 Thread Stefan van der Walt
On Sun, May 13, 2007 at 06:19:30PM +0300, dmitrey wrote:
 Is it possible somehow to speedup numpy 1.0.3 appearing in Linux update 
 channels? (as for me I'm interested in Ubuntu/Kubuntu,  currently there 
 is v 1.0.1)
 I tried to compile numpy 1.0.2, but, as well as in Octave compiling, it 
 failed because c compiler can't create executable. gcc reinstallation 
 didn't help, other c compilers are absent in update channel (I had seen 
 only tcc, but I'm sure it will not help, and it (I mean trying to 
 install other C compilers) requires too much efforts).

Many people here are compiling numpy fine under Ubuntu.  Do you have
write permissions to the output directory? What is the compiler error
given?

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] dtype hashes are not equal

2007-05-13 Thread Stefan van der Walt
Hi all,

In the numpy.sctypes dictionary, there are two entries for uint32:

In [2]: N.sctypes['uint']
Out[2]: 
[<type 'numpy.uint8'>,
 <type 'numpy.uint16'>,
 <type 'numpy.uint32'>,
 <type 'numpy.uint32'>,
 <type 'numpy.uint64'>]

Comparing the dtypes of the two types gives the correct answer:

In [3]: sc = N.sctypes['uint']

In [4]: N.dtype(sc[2]) == N.dtype(sc[3])
Out[4]: True

But the hash values for the dtypes (and the types) differ:

In [42]: for T in N.sctypes['uint']:
   ...:     dt = N.dtype(T)
   ...:     print T, dt
   ...:     print '=>', hash(T), hash(dt)

<type 'numpy.uint8'> uint8
=> -1217082432 -1217078592
<type 'numpy.uint16'> uint16
=> -1217082240 -1217078464
<type 'numpy.uint32'> uint32
=> -1217081856 -1217078336
<type 'numpy.uint32'> uint32
=> -1217082048 -1217078400
<type 'numpy.uint64'> uint64
=> -1217081664 -1217078208

Is this expected/correct behaviour?

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] NumPy 1.0.3 release next week

2007-05-13 Thread Stefan van der Walt
Hi Dmitrey

On Sun, May 13, 2007 at 08:21:15PM +0300, dmitrey wrote:
  Many people here are compiling numpy fine under Ubuntu.  Do you have
  write permissions to the output directory? What is the compiler error
  given?

 Sorry, I meant compiling Python2.5 and Octave, not numpy  Octave
 Python2.5 is already present (in Ubuntu 7.04), but I tried to compile 
 and install it from sources because numpy compilation failed with
 (I have gcc version 4.1.2 (Ubuntu 4.1.2-0ubuntu4), compiling as
 root)

This isn't really the place to discuss compiling Python or Octave, but
a good first move would be to install the 'build-essential' package.
This will hopefully provide the header files and the compiler you
need.

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy array sharing between processes? (and ctypes)

2007-05-14 Thread Stefan van der Walt
On Mon, May 14, 2007 at 11:44:11AM -0700, Ray S wrote:
 While investigating ctypes and numpy for sharing, I saw that the 
 example on
 http://www.scipy.org/Cookbook/Ctypes#head-7def99d882618b52956c6334e08e085e297cb0c6
 does not quite work. However, with numpy.version.version=='1.0b1', 
 ActivePython 2.4.3 Build 12:

That page should probably be replaced by

http://www.scipy.org/Cookbook/Ctypes2

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Question about flags of fancy indexed array

2007-05-23 Thread Stefan van der Walt
On Wed, May 23, 2007 at 09:49:08AM -0400, Anne Archibald wrote:
 On 23/05/07, Albert Strasheim [EMAIL PROTECTED] wrote:

  Is it correct that the F_CONTIGUOUS flag is set in the case of the fancy
  indexed x? I'm running NumPy 1.0.3.dev3792 here.
 
 Numpy arrays are always stored in contiguous blocks of memory with
 uniform strides. The CONTIGUOUS flag actually means something
 totally different, which is unfortunate, but in any case, fancy
 indexing can't be done as a simple reindexing operation. It must make
 a copy of the array. So what you're seeing is the flags of a fresh new
 array, created from scratch (and numpy always creates arrays in C
 order internally, though that is an implementation detail you should
 not rely on).

That still doesn't explain

In [41]: N.zeros((3,2))[:,[0,1]].flags
Out[41]: 
  C_CONTIGUOUS : False
  F_CONTIGUOUS : True 
  OWNDATA : False 
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False

vs.

In [40]: N.zeros((3,2),order='F')[:,[0,1]].flags
Out[40]: 
  C_CONTIGUOUS : True
  F_CONTIGUOUS : False 
  OWNDATA : False  
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False

Maybe the Fortran-ordering quiz at

http://mentat.za.net/numpy/quiz

needs an update! :)

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Buildbot for numpy

2007-06-16 Thread Stefan van der Walt
Hi all,

Short version
=

We now have a numpy buildbot running at

http://buildbot.scipy.org

Long version


Albert Strasheim and I set up a buildbot for numpy this week.  For
those of you unfamiliar with The Buildbot, it is


...a system to automate the compile/test cycle required by most
software projects to validate code changes. By automatically
rebuilding and testing the tree each time something has changed, build
problems are pinpointed quickly, before other developers are
inconvenienced by the failure. The guilty developer can be identified
and harassed without human intervention. By running the builds on a
variety of platforms, developers who do not have the facilities to
test their changes everywhere before checkin will at least know
shortly afterwards whether they have broken the build or not. Warning
counts, lint checks, image size, compile time, and other build
parameters can be tracked over time, are more visible, and are
therefore easier to improve.

The overall goal is to reduce tree breakage and provide a platform to
run tests or code-quality checks that are too annoying or pedantic for
any human to waste their time with. Developers get immediate (and
potentially public) feedback about their changes, encouraging them to
be more careful about testing before checkin.


While we are still working on automatic e-mail notifications, the
system already provides valuable feedback -- take a look at the
waterfall display:

http://buildbot.scipy.org

If your platform is not currently on the list, please consider
volunteering a machine as a build slave.  This machine will be
required to run the buildbot client, and to build a new version of
numpy whenever changes are made to the repository.  (The machine does
not have to be dedicated to this task, and can be your own
workstation.)

We'd like to thank Robert Kern, Jeff Strunk and Gert-Jan van Rooyen
who helped us to get the ball rolling, as well as Neilen Marais for
offering his workstation as a build slave.

Regards
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] question about numpy

2007-06-17 Thread Stefan van der Walt
On Fri, Jun 15, 2007 at 03:44:37PM -0400, David M. Cooke wrote:
  I meet a problem when I installed numpy. I installed numpy by the command
  python setup.py install. Then I tested it by python -c 'import numpy;
  numpy.test()'. But it doesn't work. There is an error message:
  Running from numpy source directory.
 
 ^ don't do that :)
 
 Instead, change out of the source directory, and rerun.

Is there any reason why we can't make that work?

Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] question about numpy

2007-06-19 Thread Stefan van der Walt
On Tue, Jun 19, 2007 at 05:06:42PM +0900, David Cournapeau wrote:
 Robert Kern wrote:
  Stefan van der Walt wrote:
  On Fri, Jun 15, 2007 at 03:44:37PM -0400, David M. Cooke wrote:
  I meet a problem when I installed numpy. I installed numpy by the command
  python setup.py install. Then I tested it by python -c 'import numpy;
  numpy.test()'. But it doesn't work. There is an error message:
  Running from numpy source directory.
  ^ don't do that :)
 
  Instead, change out of the source directory, and rerun.
  Is there any reason why we can't make that work?
 
  We have to be able to bootstrap the build process somehow.
 
 wouldn't it work with the upcoming new import semantics in python 2.6 ? 
 The problem is that you cannot make the difference between $PWD/numpy 
 and $PYTHONPATH/numpy, or is this more subtle ?

I think part of the problem is that the extensions are built into some
temporary directory (and I don't know of a way to query distutils for
its location), which must also appear on the path for the tests to
function properly.

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is this an indexing bug?

2007-06-19 Thread Stefan van der Walt
On Tue, Jun 19, 2007 at 12:35:05PM +0200, Sturla Molden wrote:
 On 6/19/2007 12:14 PM, Sturla Molden wrote:
 
  h[0,:,numpy.arange(14)] is a case of sdvanced indexing. You can also 
  see that
  
   >>> h[0,:,[0,1,2,3,4,5,6,7,8,9,10,11,12,13]].shape
   (14, 4)
 
 Another way to explain this is that numpy.arange(14) and 
 [0,1,2,3,4,5,6,7,8,9,10,11,12,13] is a sequence (i.e. iterator).  So 
 when NumPy iterates the sequence, the iterator yields a single integer, 
 lets call it I. Using this integer as an index to h, gives a = h[0,:,I] 
 which has shape=(4,). This gives us a sequence of arrays of length 4. In 

If you follow this analogy,

x = N.arange(100).reshape((10,10))
x[:,N.arange(5)].shape

should be (5, 10), while in reality it is (10, 5).

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Help installing numpy 1.0.2 on LINUX

2007-06-24 Thread Stefan van der Walt
On Sun, Jun 24, 2007 at 05:58:33PM +0100, John Pruce wrote:
 When I try to run numpy.test(level=1) I get:
 
  >>> import numpy
  >>> numpy.test(level=1)
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  AttributeError: 'module' object has no attribute 'test'
 
 
 Thank you for your help or suggestions.

Are you running from the numpy source directory?  If so, change out of
it and try again.

Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Should bool_ subclass int?

2007-07-14 Thread Stefan van der Walt
On Mon, Jul 09, 2007 at 12:32:02PM -0700, Timothy Hochberg wrote:
 I gave this a try. Since so much code is auto-generated, it can be difficult 
 to
 figure out what's going on in the core matrix stuff. Still, it seems like the
 solution is almost absurdly easy, consisting of changing only three lines.
 First off, does this seem right? Code compiled against this patch passes all
 tests and seems to run my application right, but that's not conclusive.
 
 Please let me know if I missed something obvious. 

Can we make this change, or should we discuss the patch further?  Any
comments, Travis?

Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Finalising documentation guidelines for NumPy

2007-07-17 Thread Stefan van der Walt
Hi all,

In May this year, Charles Harris posted to this mailing list

http://thread.gmane.org/gmane.comp.python.numeric.general/15381/focus=15407

discussing some shortcomings of the current NumPy (and hence SciPy)
documentation standard

http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/doc/HOWTO_DOCUMENT.txt

Since we are in the process of slowly migrating all documentation to
the new standard, it is worth revisiting the issue one last time,
before we put in any more effort.

We need a format which

- parses without errors in epydoc
- generates easily readable output and
- places sections in a pre-determined order

We also need to design a default style sheet, to aid the production of
uniform documentation.

At least the following changes are needed to the current standard:

1) In the parameters section,

   var1 : type
       Description.

   is parsed correctly but

   var1 :
       Description.

   breaks.  This can be fixed either by omitting the colon after
   'var1' in the second case, or by slightly modifying epydoc's output.

2) In the SeeAlso section, variables should be surrounded by `` to
   link to their relevant docstrings, i.e.

   :SeeAlso:
 - otherfunc : relationship to thisfunc.

   changes to

   :SeeAlso:
 - `otherfunc` : relationship to thisfunc.

According to a post in the thread mentioned above, epydoc also
permutes the sections in such a way that Notes and Examples appear in
the wrong places.  As far as I can tell, this is an epydoc issue,
which we should take up with Ed Loper.

If you have any information that could help us finalise the
specification, or would like to share your opinion, I'd be glad to
hear it.

Regards
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] getting numPy happening for sciPy

2007-07-23 Thread Stefan van der Walt
Hi Tim

On Mon, Jul 23, 2007 at 08:20:24PM +0930, Tim Mortimer wrote:
 I am not an experienced programmer, so the idea of building NumPy from 
 the bleeding edge repository is beyond my capability, as there appears 
 to be no specific instructions for how to do this (that don't assume you 
 have some degree of experience at what you're doing anyway.. )
 
 So I guess my question is
 
 1) can i get an idiots guide to what's required to get the current NumPy 
 installation happening in order to host SciPy on top of it?

One way is to use Enthought's egg installer:

http://code.enthought.com/enstaller/

That way, you won't have linear algebra routines optimised
specifically for your platform, but you'll have a fully functional
numpy, scipy (and optionally matplotlib etc.) installation.

Regards
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Compile extension modules with Visual Studio 2005

2007-07-25 Thread Stefan van der Walt
On Wed, Jul 25, 2007 at 03:41:37PM +0200, Gael Varoquaux wrote:
 On Wed, Jul 25, 2007 at 06:38:55AM -0700, Ray Schumacher wrote:
  The codeGenerator is magic, if you ask me:
  http://starship.python.net/crew/theller/ctypes/old/codegen.html
 
 Can it wrap code passing around arrays ? If so it really does magic that
 I don't understand.

If your array is contiguous, it really is only a matter of passing
along a pointer and dimensions.

By writing your C-functions in the form

void func(double* data, int rows, int cols, double* out) { }

wrapping becomes trivial.
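
A wrapper for that signature might look like the following sketch
(the library name 'libfunc' and the build step are assumptions):

import numpy as np
from numpy.ctypeslib import load_library, ndpointer
from ctypes import c_int

lib = load_library('libfunc', '.')  # hypothetical shared library
lib.func.restype = None
lib.func.argtypes = [ndpointer(dtype=np.float64, ndim=2,
                               flags='C_CONTIGUOUS'),
                     c_int, c_int,
                     ndpointer(dtype=np.float64, ndim=2,
                               flags='C_CONTIGUOUS,WRITEABLE')]

def func(data):
    # Ensure a contiguous double array, then hand over pointer + shape.
    data = np.ascontiguousarray(data, dtype=np.float64)
    out = np.empty_like(data)
    lib.func(data, data.shape[0], data.shape[1], out)
    return out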

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] build on windows 64-bit platform

2007-07-27 Thread Stefan van der Walt
On Fri, Jul 27, 2007 at 04:54:45PM +0200, Pearu Peterson wrote:
 
 
 Stefan van der Walt wrote:
  Hi all,
  
  The build is still failing on winXP 64-bit, as shown on the buildbot
  page
  
  http://buildbot.scipy.org/Windows%20XP%20x86_64%20MSVC/builds/25/step-shell/0
  
  with the error
  
  AttributeError: MSVCCompiler instance has no attribute '_MSVCCompiler__root'
  
  Could someone familiar with the MSVC compilers please take a look?
 
 I think the problem is in the environment of the buildbot machine 
 `Windows XP x86_64 MSVC`. Basically, I would try setting the following 
 environment variables in this machine:
DISTUTILS_USE_SDK and MSSdk
 Then the build might succeed.
 
 For more information, read the code in Python distutils/msvccompiler.py 
 file.

Thanks, Pearu -- I'll take a look.  Why the uninformative error
message, though?  Isn't distutils supposed to automagically detect the
MSVC compiler?

Regards
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] build on windows 64-bit platform

2007-07-27 Thread Stefan van der Walt
Hi all,

The build is still failing on winXP 64-bit, as shown on the buildbot
page

http://buildbot.scipy.org/Windows%20XP%20x86_64%20MSVC/builds/25/step-shell/0

with the error

AttributeError: MSVCCompiler instance has no attribute '_MSVCCompiler__root'

Could someone familiar with the MSVC compilers please take a look?

Thanks
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] build on windows 64-bit platform

2007-07-27 Thread Stefan van der Walt
On Sat, Jul 28, 2007 at 12:54:52AM +0200, Pearu Peterson wrote:
 Ok, I have now enabled DISTUTILS_USE_SDK for
 AMD64 Windows platform and it seems working..

Fantastic, thanks!

 However, the build still fails but now the
 reason seems to be related to numpy ticket 164:
 
http://projects.scipy.org/scipy/numpy/ticket/164

I'll ask Albert whether he would have a look at it again.

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] reading 10 bit raw data into an array

2007-07-30 Thread Stefan van der Walt
On Mon, Jul 30, 2007 at 04:01:46PM +0200, Danny Chan wrote:
 I'm trying to read a data file that contains a raw image file. Every pixel is
 assigned a value from 0 to 1023, and all pixels are stored from top left to
 bottom right pixel in binary format in this file. I know the width and the
 height of the image, so all that would be required is to read 10 bits at a 
 time
 and store it these as an integer. I played around with the fromstring and
 fromfile function, and I read the documentation for dtype objects, but I'm
 still confused. It seems simple enough to read data in a format with a 
 standard
 bitwidth, but how can I read data in a non-standard format. Can
 anyone help?

AFAIK, numpy's dtypes all have widths >= 1 byte.  The easiest solution
I can think of is to use fromfile to read 5 bytes at a time, and then
to use divmod to obtain your 4 values.
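
A sketch of that approach, assuming the values are packed big-endian
with no padding (the actual bit layout depends on your file format):

import numpy as np

raw = np.fromfile('image.raw', dtype=np.uint8)  # hypothetical filename
groups = raw.reshape(-1, 5).astype(np.uint64)

# Fold each group of 5 bytes into a single 40-bit integer.
packed = (groups[:, 0] << 32) | (groups[:, 1] << 24) | \
         (groups[:, 2] << 16) | (groups[:, 3] << 8) | groups[:, 4]

# Peel off the four 10-bit values per group with divmod, last value first.
pixels = np.empty((len(packed), 4), dtype=np.uint16)
for i in range(3, -1, -1):
    packed, pixels[:, i] = divmod(packed, 1024)

pixels = pixels.ravel()  # top-left to bottom-right, as stored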

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy arrays, data allocation and SIMD alignement

2007-08-08 Thread Stefan van der Walt
On Tue, Aug 07, 2007 at 01:33:24AM -0400, Anne Archibald wrote:
 Well, it can be done in Python: just allocate a too-big ndarray and
 take a slice that's the right shape and has the right alignment. But
 this sucks.

Could you explain to me why is this such a bad idea?
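
For reference, the pure-Python trick looks something like this sketch
(aligned_empty is a hypothetical helper, not an existing numpy
function):

import numpy as np

def aligned_empty(shape, dtype, align=16):
    # Over-allocate by `align` bytes, then slice so that the data
    # pointer falls on an aligned address.
    dtype = np.dtype(dtype)
    nbytes = int(np.prod(shape)) * dtype.itemsize
    buf = np.empty(nbytes + align, dtype=np.uint8)
    offset = (-buf.ctypes.data) % align
    return buf[offset:offset + nbytes].view(dtype).reshape(shape)

aligned_empty((3, 4), np.float64).ctypes.data % 16 then gives 0.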

Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] vectorized function inside a class

2007-08-08 Thread Stefan van der Walt
Hi Mark

On Wed, Aug 08, 2007 at 03:37:09PM -, mark wrote:
 I am trying to figure out a way to define a vectorized function inside
 a class.
 This is what I tried:
 
 class test:
     def __init__(self):
         self.x = 3.0
     def func(self,y):
         rv = self.x
         if y > self.x: rv = y
         return rv
     f = vectorize(func)
 
 
  >>> m = test()
  >>> m.f( m, [-20,4,6] )
  array([ 3.,  4.,  6.])

Maybe you don't need to use vectorize.  How about

def func(self,y):
    y = y.copy()
    y[y <= self.x] = self.x
    return y

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] vectorized function inside a class

2007-08-08 Thread Stefan van der Walt
On Wed, Aug 08, 2007 at 08:54:18AM -0700, Timothy Hochberg wrote:
 Don't use vectorize? Something like:
 
 def f(self,y):
     return np.where(y > self.x, y, self.x)

A one-liner, cool.  Benchmarks on some other methods:

Method 1: N.where

100 loops, best of 3: 9.32 ms per loop

Method 2: N.clip

1000 loops, best of 3: 112 ns per loop

100 loops, best of 3: 3.33 ms per loop

Method 3: N.putmask

100 loops, best of 3: 5.95 ms per loop

Method 4: fancy indexing

100 loops, best of 3: 5.09 ms per loop
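
The benchmark code itself wasn't posted; the four variants were
presumably along these lines (a sketch, with y the data and t the
threshold):

import numpy as N

def f_where(y, t):
    return N.where(y > t, y, t)

def f_clip(y, t):
    return N.clip(y, t, N.inf)

def f_putmask(y, t):
    y = y.copy()
    N.putmask(y, y <= t, t)
    return y

def f_fancy(y, t):
    y = y.copy()
    y[y <= t] = t
    return y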

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy arrays, data allocation and SIMD alignement

2007-08-09 Thread Stefan van der Walt
On Thu, Aug 09, 2007 at 04:52:38PM +0900, David Cournapeau wrote:
 Charles R Harris wrote:
 
  Well, what you want might be very easy to do in python, we just need 
  to check the default alignments for doubles and floats for some of the 
  other compilers, architectures, and OS's out there. On the other hand, 
  you might not be able to request a c malloc that is aligned in a 
  portable way without resorting to the same tricks as you do in python. 
  So why not use python and get the reference counting and garbage 
  collection along with it?
 First, doing it in python means that I cannot use the facility from C 
 easily. But this is exactly where I need it, and where I would guess 
 most people need it. People want to interface numpy with the mkl ? They 
 will do it in C, right ?

It doesn't really matter where the memory allocation occurs, does it?
As far as I understand, the underlying fftw function has some flag to
indicate when the data is aligned.  If so, we could expose that flag
in Python, and do something like

x = align16(data)
_fft(x, is_aligned=True)

I am not intimately familiar with the fft wrappers, so maybe I'm
missing something more fundamental.

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Finding a row match within a numpy array

2007-08-18 Thread Stefan van der Walt
On Tue, Aug 14, 2007 at 11:53:03AM +0100, Andy Cheesman wrote:
 Dear nice people
 
 I'm trying to match a row (b) within a large numpy array (a). My most
 successful attempt is below
 
 hit = equal(b, a)
 total_hits = add.reduce(hit, 1)
 max_hit = argmax(total_hits, 0)
 answer = a[max_hit]
 
 where ...
 a = array([[ 0,  1,  2,  3],
  [ 4,  5,  6,  7],
  [ 8,  9, 10, 11],
  [12, 13, 14, 15]])
 
 b = array([8,  9, 10, 11])
 
 
 
 I was wondering if people could suggest a possible more efficient route
 as there seems to be numerous steps.

Another way to do it:

a[N.apply_along_axis(N.all,1,a==b)]
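
The same selection can also be written with a plain reduction over the
rows (a sketch):

mask = (a == b).all(axis=1)   # True where a row equals b
a[mask]                       # the matching rows
N.where(mask)[0]              # or their indices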

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Finding a row match within a numpy array

2007-08-18 Thread Stefan van der Walt
On Tue, Aug 14, 2007 at 11:53:03AM +0100, Andy Cheesman wrote:
 Dear nice people
 
 I'm trying to match a row (b) within a large numpy array (a). My most
 successful attempt is below
 
 hit = equal(b, a)
 total_hits = add.reduce(hit, 1)
 max_hit = argmax(total_hits, 0)
 answer = a[max_hit]
 
 where ...
 a = array([[ 0,  1,  2,  3],
  [ 4,  5,  6,  7],
  [ 8,  9, 10, 11],
  [12, 13, 14, 15]])
 
 b = array([8,  9, 10, 11])
 
 
 
 I was wondering if people could suggest a possible more efficient route
 as there seems to be numerous steps.

For large arrays, you may not want to calculate a == b, so you could
also do

[row for row in a if N.all(row == b)]

or find the indices using

[r for r,row in enumerate(a) if N.all(row == b)]

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Setting numpy record array elements

2007-08-20 Thread Stefan van der Walt
On Mon, Aug 20, 2007 at 08:34:53AM -0500, Sameer DCosta wrote:
 In the example below I have a record array *a* that has a column
 *col1. I am trying to set the first element of a.col1 to zero  in two
 different ways.
 
 1. a[0].col1 = 0 (This fails silently)
 2. a.col1[0] = 0 (This works fine)
 
 I am using the latest svn version of numpy. Is this a bug? or is the
 first method supposed to fail? If it is supposed to fail should it
 fail silently?

This looks like a bug, since

a[0][0] = 0

works fine.  I'll take a closer look and make sure.
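
A small sketch that reproduces the report (the original array
construction wasn't quoted, so this one is assumed):

import numpy as np

a = np.rec.fromarrays([np.arange(3.0)], names='col1')
a.col1[0] = 0    # works: indexes a view of the column
a[0][0] = 0      # works: the record is a view into the array
a[0].col1 = 0    # the case reported to fail silently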

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] problem on testing numpy

2007-08-26 Thread Stefan van der Walt
On Fri, Aug 24, 2007 at 05:46:55PM +0200, Vivian Tini wrote:
 Dear All,
 
 I have just installed NumPy and I am excited to test it.
 Since I have no access as root, I installed Numpy in my home directory. 
 The following message appears as I tried some commands:
 
  >>> import numpy
 Running from numpy source directory

^^^ You shouldn't be running from the source directory.  Change to
another directory and try again -- it should work.

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] possibly ctypes related segfault

2007-08-27 Thread Stefan van der Walt
On Mon, Aug 27, 2007 at 08:21:43PM +0200, Lino Mastrodomenico wrote:
 Hi Martin,
 
 2007/8/27, Martin Wiechert [EMAIL PROTECTED]:
  I could not reproduce the bug in a debug build of python 
  (--without-pymalloc)
  or on another machine. The crashing machine is an eight-way opteron.
 
 Not sure if it's related to your problem, but on 64-bit architectures
 sizeof(ssize_t) is 8.

You should be able to circumvent this problem by referring to
ctypes.c_size_t or ctypes.c_int instead of specifying the width
explicitly.

Regards
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy.random.multinomial() cannot handle zero's

2007-08-27 Thread Stefan van der Walt
Hi Chris

On Mon, Aug 27, 2007 at 11:07:00AM -0700, Christopher Barker wrote:
 Is the kahan_sum closer? -- it should be, though compensated summation 
   is really for adding LOTS of numbers, for 4, it's pointless at best. 
 Anyway, binary floating point has its errors, and compensated summation 
 can help, but it's still not exact for numbers that can't be exactly 
 represented by binary.
 
 i.e. if your result is within 15 decimal digits of the exact result, 
 that's as good as it gets.

I find this behaviour odd for addition.  Under python:

In [7]: 0.8+0.2 > 1.0
Out[7]: False

but using the Pyrex module, it yields true.  You can find the code at

http://mentat.za.net/html/refer/somesumbug.tar.bz2

and compile it using

pyrexc sum.pyx ; python setup.py build_ext -i

When you run the test, it illustrates the problem:

Sum: 1.00
Is greater than 1.0? True

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy.random.multinomial() cannot handle zero's

2007-08-28 Thread Stefan van der Walt
On Mon, Aug 27, 2007 at 04:54:21PM -0700, Christopher Barker wrote:
 Stefan van der Walt wrote:
  but using the Pyrex module, it yields true.  You can find the code at
  
  http://mentat.za.net/html/refer/somesumbug.tar.bz2
 
 That link appears to be broken.

Sorry, http://mentat.za.net/refer/somesumbug.tar.bz2

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] possibly ctypes related segfault

2007-08-28 Thread Stefan van der Walt
Hi Martin

On Mon, Aug 27, 2007 at 02:57:28PM +0200, Martin Wiechert wrote:
 I'm suffering from a strange segfault and would appreciate your help.
 
 I'm calling a small C function using ctypes / numpy.ctypeslib. The function 
 works in the sense that it returns correct results. After calling the 
 function however I can reliably evoke a segfault by using readline tab 
 completion.
 
 I'm not very experienced, but this smells like a memory management bug to me, 
 which is strange, because I'm not doing any mallocing/freeing at all in the C 
 code.
 
 I could not reproduce the bug in a debug build of python (--without-pymalloc) 
 or on another machine. The crashing machine is an eight-way opteron.

I had to #include <unistd.h> in solver, and modify cMonteCarlo not to
depend on GV.  Then, I used

gcc -o solver.os -c -O2 -ggdb -Wall -ansi -pedantic -fPIC solver.c
gcc -o librectify.so -shared solver.os -llapack

to compile.

Please send me the script that excercises the solver, then I will test
on my machines here.

 --16266-- DWARF2 CFI reader: unhandled CFI instruction 0:10
 --16266-- DWARF2 CFI reader: unhandled CFI instruction 0:10

This could be a valgrind issue.

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] possibly ctypes related segfault

2007-08-28 Thread Stefan van der Walt
On Tue, Aug 28, 2007 at 02:03:52PM +0200, Martin Wiechert wrote:
 Here's the test script. I'm using it via execfile from an interactive 
 session, 
 so I can inspect (and crash with readline) afterwards.
 
 Here's how I compiled:
 gcc 
 solver.c -fPIC -ggdb -shared -llapack -lf77blas -lcblas -latlas -lgfortran -o 
 librectify.so

It works perfectly on the two Linux machines I tried (32-bit and
64-bit).  Maybe your lapack isn't healthy?

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Error code of NumpyTest()

2007-08-30 Thread Stefan van der Walt
On Thu, Aug 30, 2007 at 12:48:44PM +0300, Pearu Peterson wrote:
 The svn version of test() function now returns TestResult object.
 
 So, test() calls in buildbot should read:
 
   import numpy,sys; sys.exit(not
 numpy.test(verbosity=<verbosity>,level=<level>).wasSuccessful())
 
 Hopefully buildbot admins can update the test commands accordingly.

Thanks, Pearu.  I forwarded your instructions to the relevant parties.

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [Fwd: Re: numpy revision 2680 causes segfault on Solaris]

2007-09-20 Thread Stefan van der Walt
Hi Chris

Does this problem persist?  I thought Eric's patch fixed it.  Goes to
show, we really need a Big Endian buildbot client.

Cheers
Stéfan

On Thu, Sep 20, 2007 at 06:56:46PM -0400, Christopher Hanley wrote:
 Hi Travis,
 
 The test failure was caused by a new test being added to the test suite 
 to catch an existing problem.  It was not a new code change that caused 
 the problem.
 
 Chris
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [Fwd: Re: numpy revision 2680 causes segfault on Solaris]

2007-09-20 Thread Stefan van der Walt
Hi Chris

On Thu, Sep 20, 2007 at 09:30:18PM -0400, Christopher Hanley wrote:
 We have not seen any test failures on our big-endian Solaris system.  
 Did you re-implement the unit test that was failing.  I was under the 
 impression that the fix had been to comment out the test the was 
 failing.  I was un-aware that any patch was in place.

We (mostly Eric) fixed the problem.  The test was then re-activated.

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] ANN: SciPy 0.6.0

2007-09-21 Thread Stefan van der Walt
- Forwarded message from Jarrod Millman [EMAIL PROTECTED] -

From: Jarrod Millman [EMAIL PROTECTED]
To: SciPy Users List [EMAIL PROTECTED]
Subject: [SciPy-user] ANN: SciPy 0.6.0
Reply-To: SciPy Users List [EMAIL PROTECTED]
Date: Fri, 21 Sep 2007 02:04:32 -0700
Message-ID: [EMAIL PROTECTED]

I'm pleased to announce the release of SciPy 0.6.0:
http://scipy.org/Download

SciPy is a package of tools for science and engineering for Python.  It includes
modules for statistics, optimization, integration, linear algebra,
Fourier transforms,
signal and image processing, ODE solvers, and more.

This release brings many bugfixes and speed improvements.

Major changes since 0.5.2.1:

   * cluster
 o cleaned up kmeans code and added a kmeans2 function
that adds several initialization methods
   * fftpack
 o fft speedups for complex data
 o major overhaul of fft source code for easier maintenance
   * interpolate
 o add Lagrange interpolating polynomial
 o fix interp1d so that it works for higher order splines
   * io
 o read and write basic .wav files
   * linalg
 o add Cholesky decomposition and solution of banded linear
systems with Hermitian or symmetric matrices
 o add RQ decomposition
   * ndimage
 o port to NumPy API
 o fix byteswapping problem in rotate
 o better support for 64-bit platforms
   * optimize
 o nonlinear solvers module added
 o a lot of bugfixes and modernization
   * signal
 o add complex Morlet wavelet
   * sparse
 o significant performance improvements

Thank you to everybody who contributed to the recent release.

Enjoy,

-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/
___
SciPy-user mailing list
[EMAIL PROTECTED]
http://projects.scipy.org/mailman/listinfo/scipy-user

- End forwarded message -
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Using the numpy buildbot for svn branches ?

2007-09-24 Thread Stefan van der Walt
On Mon, Sep 24, 2007 at 09:05:57PM +0900, David Cournapeau wrote:
  It should have worked with the first solution. Did you try trunk, to 
  see if it works ?
 It does not seem to work with only trunk.
  Is there somewhere the configuration file of the buildbot ?
  with this line for the SVN step, it should work :
 
  factory.addStep(SVN, baseURL="http://svn.scipy.org/svn/numpy/",
  defaultBranch="trunk")
 I don't know if this is relevant, but in the html generated from the 
 trace, there is the following 
 (http://buildbot.scipy.org/Linux%20x86%20Ubuntu/builds/131/step-svn/0)
 
 svnurl = 'http://scipy.org/svn/numpy/trunk'
 Locals:
 branch = 'branches/numpy.scons'
 
 Which may indicate that trunk is hardcoded in the svn url ?

It is numpy's buildmaster, so nothing is set in stone :) I modified
the configuration file; looks like it is working.

I left the debug interface online, to see how things go.  If we have
abuse from outside, we shall have to switch it off again, but in the
meantime it remains a very useful tool.  Also, remember that some of
the buildclients are personal workstations, so be conservative in
triggering builds.

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] indexing bug?

2007-09-28 Thread Stefan van der Walt
On Fri, Sep 28, 2007 at 03:07:30PM -0400, Nadia Dencheva wrote:
 This should return an error and not silently truncate to int.

Why do you say that?  The current behaviour is consistent and well
defined:

a[x] == a[int(x)]

We certainly can't change it now (just imagine all the code out there
that will break); but I personally don't think it is a big problem.

On a somewhat related note, you may also be interested in the PEP at

http://docs.python.org/whatsnew/pep-357.html

Regards
Stéfan


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Run length encoding of an ndarray

2007-10-02 Thread Stefan van der Walt
On Tue, Oct 02, 2007 at 01:36:02PM +0100, Michael Hoffman wrote:
 I am trying to do a type of run-length encoding of a 2D array by axis. I 
 have an array of values arranged along two axes, state and position. 
 These are many (180, 3) uint8 arrays.
 
 I would like to have a list of tuples like
 
 (state, start_pos, end_pos, values)
 
 only separating out a set of values into a new tuple if they are all the 
 same value in a run of at least 10 cells.

This snippet does run-length encoding for one row.

x = N.array([1,1,1,1,1,1,1,1,1,1,0,0,2,2,9,9,9,9,9,9,9,9,9,9])

pos, = N.where(N.diff(x) != 0)
pos = N.concatenate(([0],pos+1,[len(x)]))
rle = [(a,b,x[a]) for (a,b) in zip(pos[:-1],pos[1:])]

[(0, 10, 1), (10, 12, 0), (12, 14, 2), (14, 24, 9)]

Maybe you can use that as a starting point.
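
Filtering for the minimum run length you ask about is then one more
condition (a sketch):

runs = [(a, b, x[a]) for (a, b) in zip(pos[:-1], pos[1:]) if b - a >= 10]

which for the x above keeps only (0, 10, 1) and (14, 24, 9).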

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] indexing bug?

2007-10-03 Thread Stefan van der Walt
On Wed, Oct 03, 2007 at 11:12:07AM -0400, Perry Greenfield wrote:

 On Sep 28, 2007, at 4:23 PM, Stefan van der Walt wrote:

 On Fri, Sep 28, 2007 at 03:07:30PM -0400, Nadia Dencheva wrote:
 This should return an error and not silently truncate to int.

 Why do you say that?  The current behaviour is consistent and well
 defined:

 a[x] == a[int(x)]

 I disagree. It is neither consistent nor well defined.

It works for other objects too:

class MyIndex(object):
    def __int__(self):
        return 1

m = MyIndex()

x = N.array([1,2,3])

x[m] == x[1]

 It is not consistent with Python list indexing behavior.

Neither are most other forms of ndarray indexing:

x = [1,2,3]
x[[1,2]]

yields an error.

 It is not consistent with numpy behavior:

  >>> x = arange(10)
  >>> x[array(2.99)]

 raises an exception

That seems to be an exception to the rule.  I agree that both

x[2.99] and x[array(2.99)]

should behave the same way.

 We certainly can't change it now (just imagine all the code out there
 that will break); but I personally don't think it is a big problem.

 I disagree. If people are willing to change the Class API of numpy to be 
 consistent with Python, they certainly should be willing to change this. 
 This behavior is new with numpy, so there should not be a lot of code that 
 depends on it (and shame on those that do :-).

Let me rephrase: we cannot change the API until 1.1, unless this is
seen as a bug.  To which other API changes are you referring?  The
style changes is a different matter entirely.

My point was that the current behaviour is easy to predict, but I am
not especially partial on the matter.

Regards
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] indexing bug?

2007-10-03 Thread Stefan van der Walt
On Wed, Oct 03, 2007 at 01:50:01PM -0400, Perry Greenfield wrote:
  Let me rephrase: we cannot change the API until 1.1, unless this is
  seen as a bug.  To which other API changes are you referring?  The
  style changes is a different matter entirely.
 
 The recent numpy and scipy threads on adopting Python Style Guide  
 for classes. If this is what you mean by style changes and it ends  
 up changing array to Array and int32 to Int32, it's not just a style  
 change, it will break lots of code (Since it should effect the user  
 API...). These are API changes even if you consider case of the  
 class name just style.

Don't worry, nothing is being rushed through.  Only internal
representations are currently being changed (e.g. unit tests).  No
changes to the API will be made until 1.1, and then only after the
discussion is over.

I think your quote above refers to the e-mail Jarrod Millman wrote,
but that was simply a typo, and should have read since it shouldn't
affect the user API.

Regards
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion

