Re: [Numpy-discussion] Finding Unique Pixel Values

2010-07-23 Thread Jon Wright
Ian Mallett wrote:

 To the second, actually, I need to increment the number of times the 
 index is there.  For example, if b=[1,5,6,6,6,9], then a[6-1] would have 
 to be incremented by +3 = +1+1+1.  I tried simply a[b-1]+=1, but it 
 seems to increment values only once, even if there is more than one 
 occurrence of the index in b.  What to do about that?

Is this what you mean?

>>> numpy.bincount( [1,5,6,6,6,9] )
array([0, 1, 0, 0, 0, 1, 3, 0, 0, 1])
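
A minimal sketch of how those counts could feed back into your a[b-1]
incrementing (the size of a is made up here just for illustration):

import numpy as np

a = np.zeros(10)
b = np.array([1, 5, 6, 6, 6, 9])

# counts[i] is the number of times index i occurs in b - 1, so each
# slot of a is bumped once per occurrence rather than once overall
counts = np.bincount(b - 1)
a[:len(counts)] += counts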


HTH,

Jon




[Numpy-discussion] Control format for array of integers

2010-07-23 Thread John Reid
Hi,

My array of integers is printed like this by default in numpy:

array([[  4.7500e+02,   9.5000e+02,  -1.0000e+00],
       [  2.6090e+03,   9.5000e+02,  -7.0900e+02]])

Can I set an option so that numpy never uses this scientific notation, or
at least raise the threshold at which it kicks in?

Thanks,
John.



Re: [Numpy-discussion] Control format for array of integers

2010-07-23 Thread Ian Mallett
How about numpy.set_printoptions(suppress=True)?
See
http://docs.scipy.org/doc/numpy/reference/generated/numpy.set_printoptions.html
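
A quick sketch of the effect (the exact printed layout may differ a bit
between versions, and a true integer dtype would never be shown with
exponents in the first place):

import numpy as np

a = np.array([[475.0, 950.0, -1.0],
              [2609.0, 950.0, -709.0]])

print(a)                            # default settings may print 9.5e+02 style

np.set_printoptions(suppress=True)  # always use fixed-point for small floats
print(a)                            # e.g. [[  475.   950.    -1.] ...]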
HTH,
Ian


Re: [Numpy-discussion] Arrays of Python Values

2010-07-23 Thread Pauli Virtanen
Fri, 23 Jul 2010 01:16:41 -0700, Ian Mallett wrote:
[clip]
 Because I've never used arrays of Python objects (and Googling didn't
 turn up any examples), I'm stuck on how to sort the corresponding array
 in NumPy in the same way.

I doubt you will gain any speed by switching from Python lists to Numpy 
arrays containing Python objects.

 Of course, perhaps I'm just trying something that's absolutely
 impossible, or there's an obviously better way.  I get the feeling that
 having no Python objects in the NumPy array would speed things up even
 more, but I couldn't figure out how I'd handle the different attributes
 (or specifically, how to keep them together during a sort).
 
 What're my options?

One option could be to use structured arrays to store the data, instead 
of Python objects.
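
Roughly like this (a sketch only; the 'area' field below is invented for
the sake of the example, it is not from your code):

import numpy as np

# one record per patch; the compound dtype keeps the attributes together
patches = np.zeros(3, dtype=[('residual_radiance', np.float64),
                             ('area', np.float64)])
patches['residual_radiance'] = [0.2, 1.5, 0.7]
patches['area'] = [1.0, 2.0, 3.0]

# sorting by one field keeps every record intact;
# reverse the result for descending order
patches = np.sort(patches, order='residual_radiance')[::-1]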

-- 
Pauli Virtanen



Re: [Numpy-discussion] Arrays of Python Values

2010-07-23 Thread Hans Meine
On Friday 23 July 2010 10:16:41 Ian Mallett wrote:
 self.patches.sort( lambda x,y:cmp(x.residual_radiance,y.residual_radiance),
 reverse=True )

Using sort(key = lambda x: x.residual_radiance) should be faster.
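
I.e. something along these lines (a self-contained sketch; the Patch class
is only a stand-in for your real objects, and reverse=True keeps the
descending order of your original call):

class Patch(object):
    def __init__(self, residual_radiance):
        self.residual_radiance = residual_radiance

patches = [Patch(0.2), Patch(1.5), Patch(0.7)]

# key= computes the sort key once per element; a cmp= callback is
# invoked for every pairwise comparison, which is much slower
patches.sort(key=lambda p: p.residual_radiance, reverse=True)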

 Because I've never used arrays of Python objects (and Googling didn't turn
 up any examples), I'm stuck on how to sort the corresponding array in NumPy
 in the same way.
 
 Of course, perhaps I'm just trying something that's absolutely impossible,
 or there's an obviously better way.  I get the feeling that having no
 Python objects in the NumPy array would speed things up even more, but [...]

Exactly.

Maybe you can use record arrays?  (Have a look at complex dtypes, i.e. 
struct-like dtypes for arrays.)

Never tried sort on these, but I'd hope that's possible.
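
Alternatively, if each attribute lives in its own plain array, argsort on
the key attribute gives one index array that reorders all of them
consistently (again just a sketch with made-up data):

import numpy as np

residual_radiance = np.array([0.2, 1.5, 0.7])
area = np.array([1.0, 2.0, 3.0])

# indices that put residual_radiance in descending order
order = np.argsort(residual_radiance)[::-1]

# apply the same permutation to every per-patch array
residual_radiance = residual_radiance[order]
area = area[order]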

HTH,
  Hans


[Numpy-discussion] Audiolab 0.11.0

2010-07-23 Thread David Cournapeau
Hi,

I am pleased to announce the 0.11.0 release of the audiolab scikit, the
package for reading/writing audio file formats into numpy arrays. This
release has almost no changes compared to the 0.10.x series, but it finally
fixes some annoying Windows issues (which turned out to be mingw bugs).

The source tarball and a Python 2.6 Windows installer are available on PyPI.

cheers,

David


Re: [Numpy-discussion] subtract.reduce behavior

2010-07-23 Thread Alan G Isaac
On 7/22/2010 4:00 PM, Johann Hibschman wrote:
 I'm trying to understand numpy.subtract.reduce.  The documentation
 doesn't seem to match the behavior.  The documentation claims

For a one-dimensional array, reduce produces results equivalent to:

r = op.identity
for i in xrange(len(A)):
    r = op(r, A[i])
return r

 However, numpy.subtract.reduce([1,2,3]) gives me 1-2-3==-4, not
 0-1-2-3==-6.


The behavior does not quite match Python's reduce.
The rule seems to be:
return the *right identity* for empty arrays,
otherwise behave like Python's reduce.

 >>> import operator as o
 >>> reduce(o.sub, [1,2,3], 0)
 -6
 >>> reduce(o.sub, [1,2,3])
 -4
 >>> reduce(o.sub, [])
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
 TypeError: reduce() of empty sequence with no initial value
 >>> np.subtract.reduce([])
 0.0

Getting a right identity for an empty array is surprising.
Matching Python's behavior (raising a TypeError) seems desirable. (?)

Unfortunately Python's reduce does not make ``initializer`` a
keyword, but maybe NumPy could add this keyword anyway?
(Not sure that's a good idea.)

Alan Isaac



Re: [Numpy-discussion] subtract.reduce behavior

2010-07-23 Thread Pauli Virtanen
Fri, 23 Jul 2010 10:29:47 -0400, Alan G Isaac wrote:
[clip]
   np.subtract.reduce([])
  0.0
 
 Getting a right identity for an empty array is surprising. Matching
 Python's behavior (raising a TypeError) seems desirable. (?)

I don't think matching Python's behavior is a sufficient argument for a 
change. As far as I see, it'd mostly cause unnecessary breakage, with no 
significant gain.

Besides, it's rather common to define

sum_{z in Z} z = 0
prod_{z in Z} z = 1

if Z is an empty set -- this can then be extended to other reduction 
operations. Note that changing reduce behavior would require us to 
special-case the above two operations.
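
A quick check of that convention in NumPy itself (results as I see them;
the result dtype may vary):

>>> import numpy as np
>>> np.add.reduce([])        # empty sum -> additive identity
0.0
>>> np.multiply.reduce([])   # empty product -> multiplicative identity
1.0
>>> np.subtract.reduce([])   # same machinery reuses subtract's identity, 0
0.0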

-- 
Pauli Virtanen



[Numpy-discussion] Numpy installation problem

2010-07-23 Thread Jonathan Tu
Hi,

I am trying to install Numpy on a Linux cluster running RHEL4. I installed a
local copy of Python 2.7 because RHEL4 uses Python 2.3.4 for various internal
functionalities. I downloaded the Numpy source code using

svn co http://svn.scipy.org/svn/numpy/trunk numpy

and then I tried to build using

python setup.py build

This resulted in the following error:

gcc: numpy/linalg/lapack_litemodule.c
gcc: numpy/linalg/python_xerbla.c
/usr/bin/g77 -g -Wall -g -Wall -shared build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o build/temp.linux-x86_64-2.7/numpy/linalg/python_xerbla.o -L/usr/lib64/ATLAS -Lbuild/temp.linux-x86_64-2.7 -llapack -lptf77blas -lptcblas -latlas -lg2c -o build/lib.linux-x86_64-2.7/numpy/linalg/lapack_lite.so
/usr/bin/ld: /usr/lib64/ATLAS/liblapack.a(dgeev.o): relocation R_X86_64_32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC
/usr/lib64/ATLAS/liblapack.a: could not read symbols: Bad value
collect2: ld returned 1 exit status
/usr/bin/ld: /usr/lib64/ATLAS/liblapack.a(dgeev.o): relocation R_X86_64_32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC
/usr/lib64/ATLAS/liblapack.a: could not read symbols: Bad value
collect2: ld returned 1 exit status
error: Command "/usr/bin/g77 -g -Wall -g -Wall -shared build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o build/temp.linux-x86_64-2.7/numpy/linalg/python_xerbla.o -L/usr/lib64/ATLAS -Lbuild/temp.linux-x86_64-2.7 -llapack -lptf77blas -lptcblas -latlas -lg2c -o build/lib.linux-x86_64-2.7/numpy/linalg/lapack_lite.so" failed with exit status 1

Full details of the output are attached in stdout.txt and stderr.txt.  I
thought maybe it was a compiler error so I tried

python setup.py build -fcompiler=gnu

but this also resulted in errors (stdout_2.txt, stderr_2.txt).  I just noticed
that on both attempts, it is complaining that it can't find a Fortran 90
compiler.  I'm not sure if I have the right compiler available.  On this
cluster I have the following modules:

/usr/share/Modules/modulefiles:
  dot  module-cvs  module-info  modules  null  use.own

/usr/local/share/Modules/modulefiles:
  mpich/gcc/1.2.7p1/64            openmpi/gcc-ib/1.2.3/64
  mpich/intel/1.2.7dmcrp1/64      openmpi/gcc-ib/1.2.5/64
  mpich/intel/1.2.7p1/64          openmpi/intel/1.2.3/64
  mpich/pgi-7.1/1.2.7p1/64        openmpi/intel-11.0/1.2.8/64
  mpich-debug/gcc/1.2.7p1/64      openmpi/intel-9.1/1.2.8/64
  mpich-debug/intel/1.2.7p1/64    openmpi/intel-ib/1.1.5/64
  mpich-debug/pgi-7.1/1.2.7p1/64  openmpi/intel-ib/1.2.3/64
  mvapich/gcc/0.9.9/64            openmpi/intel-ib/1.2.5/64
  mvapich/pgi-7.1/0.9.9/64        openmpi/pgi-7.0/1.2.3/64
  openmpi/gcc/1.2.8/64            openmpi/pgi-7.1/1.2.5/64
  openmpi/gcc/1.3.0/64            openmpi/pgi-7.1/1.2.8/64
  openmpi/gcc-ib/1.1.5/64         openmpi/pgi-8.0/1.2.8/64

/opt/share/Modules/modulefiles:
  intel/10.0/64/C/10.0.026        intel/9.1/64/default
  intel/10.0/64/Fortran/10.0.026  intel-mkl/10/64
  intel/10.0/64/Iidb/10.0.026     intel-mkl/10.1/64
  intel/10.0/64/default           intel-mkl/9/32
  intel/11.1/64/11.1.038          intel-mkl/9/64
  intel/11.1/64/11.1.072          pgi/7.0/64
  intel/9.1/64/C/9.1.045          pgi/7.1/64
  intel/9.1/64/Fortran/9.1.040    pgi/8.0/64
  intel/9.1/64/Iidb/9.1.045

If anyone has any ideas, they would be greatly appreciated!  I am new to Linux
and am unsure how to fix this problem.

Jonathan Tu


[attached build output follows]

Running from numpy source directory.
error: Command /usr/bin/g77 -g -Wall -g -Wall -shared build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o build/temp.linux-x86_64-2.7/numpy/linalg/python_xerbla.o -L/usr/lib64/ATLAS -Lbuild/temp.linux-x86_64-2.7 -llapack -lptf77blas -lptcblas -latlas -lg2c -o build/lib.linux-x86_64-2.7/numpy/linalg/lapack_lite.so failed with exit status 1
non-existing path in 'numpy/distutils': 'site.cfg'
F2PY Version 2_8512
blas_opt_info:
blas_mkl_info:
  libraries mkl,vml,guide not found in /home/jhtu/local/lib
  libraries mkl,vml,guide not found in /usr/local/lib64
  libraries mkl,vml,guide not found in /usr/local/lib
  libraries mkl,vml,guide not found in /usr/lib64
  libraries mkl,vml,guide not found in /usr/lib
  NOT AVAILABLE

atlas_blas_threads_info:
Setting PTATLAS=ATLAS
  libraries ptf77blas,ptcblas,atlas not found in /home/jhtu/local/lib
  libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib64
  libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib
Setting PTATLAS=ATLAS
Setting PTATLAS=ATLAS
  FOUND:
libraries = ['ptf77blas', 'ptcblas', 'atlas']
library_dirs = ['/usr/lib64/ATLAS']
language = c
include_dirs = ['/usr/include/ATLAS']

customize GnuFCompiler
Found executable /usr/bin/g77
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler using config
compiling '_configtest.c':

[remainder of the attached build log truncated]

Re: [Numpy-discussion] subtract.reduce behavior

2010-07-23 Thread Alan G Isaac
 Fri, 23 Jul 2010 10:29:47 -0400, Alan G Isaac wrote:
 np.subtract.reduce([])
   0.0

 Getting a right identity for an empty array is surprising. Matching
 Python's behavior (raising a TypeError) seems desirable. (?)
  


On 7/23/2010 10:37 AM, Pauli Virtanen wrote:
 I don't think matching Python's behavior is a sufficient argument for a
 change. As far as I see, it'd mostly cause unnecessary breakage, with no
 significant gain.

 Besides, it's rather common to define

   sum_{z in Z} z = 0
   prod_{z in Z} z = 1

 if Z is an empty set -- this can then be extended to other reduction
 operations. Note that changing reduce behavior would require us to
 special-case the above two operations.


To reduce (pun intended) surprise is always a significant gain.

I don't understand the notion of extend you introduce here.
The natural extension is to take a start value,
as with Python's ``reduce``.  Providing a default start value
is natural for operators with an identity and is not for
those without, and correspondingly we end up with ``sum``
and ``prod`` functions (which match reduce with the obvious
default start value) but no equivalents for subtraction
and division.

I also do not understand why there would have to be any
special cases.

Returning a *right* identity for an operation that is
otherwise a *left* fold is very odd, no matter how you slice it.
That is what looks like special casing ...

Alan Isaac



Re: [Numpy-discussion] subtract.reduce behavior

2010-07-23 Thread Pauli Virtanen
Fri, 23 Jul 2010 11:17:56 -0400, Alan G Isaac wrote:
[clip]
 I also do not understand why there would have to be any special cases.

That's a technical issue: e.g. prod() is implemented via 
np.multiply.reduce, and it is not clear to me whether it is possible, in 
the ufunc machinery, to leave the identity undefined or whether it is 
needed in some code paths (as the right identity).

It's possible to define binary Ufuncs without an identity element (e.g. 
scipy.special.beta), so in principle the machinery to do the right thing 
is there.
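
For instance (behaviour from a quick check against a recent NumPy; the
exact error text differs between versions):

>>> import numpy as np
>>> np.minimum.identity is None     # minimum has no identity element
True
>>> np.minimum.reduce([])           # so reducing an empty array fails
Traceback (most recent call last):
  ...
ValueError: zero-size array to reduction operation minimum which has no identity
>>> np.add.reduce([])               # whereas add falls back on its identity
0.0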

 Returning a *right* identity for an operation that is otherwise a *left*
 fold is very odd, no matter how you slice it. That is what looks like
 special casing...

I think I see your point now.

-- 
Pauli Virtanen



[Numpy-discussion] reviewers needed for NumPy

2010-07-23 Thread Joe Harrington
Hi folks,

We are (finally) about to begin reviewing and proofing the NumPy
docstrings!  This is the final step in producing professional-level
docs for NumPy.  What we need now are people willing to review docs.

There are two types of reviewers:

Technical reviewers should be developers or *very* experienced NumPy
users.  Technical review entails checking the source code (it's
available with one click in the doc wiki) and reading the doc to ensure
that the signature and description are both correct and complete.

Presentation reviewers need to be modestly experienced with NumPy, and
should have some experience either in technical writing or as
educators.  Their job is to make sure the docstring is understandable
to the target audience (one level below the expected user of that
item), including appropriate examples and references.

Review entails reading each page, checking that it meets the review
standards, and either approving it or saying how it doesn't meet them.
All this takes place on the doc wiki, so the mechanics are easy.

Please post a message on scipy-dev if you are interested in becoming a
reviewer, or if you have questions about reviewing.  As a volunteer
reviewer, you can put as much or as little time into this as you like.

Thanks!

--jh-- for the SciPy Documentation Project team


Re: [Numpy-discussion] Numpy 1.4.1 fails to build on (Debian) alpha and powepc

2010-07-23 Thread Sandro Tosi
Hi David & others,

On Tue, Jul 20, 2010 at 19:09, David Cournapeau courn...@gmail.com wrote:
 On Tue, Jul 20, 2010 at 4:21 PM, Sandro Tosi mo...@debian.org wrote:
 Hi David,

 On Tue, Jul 20, 2010 at 10:34, David Cournapeau courn...@gmail.com wrote:
 yes, I see it at r8510

 I quickly adapted the code from the Sun math library for the linux ppc
 long double format. Let me know if it works (if possible, you should
 run the test suite).

 thanks for working on it :)

 I checked out numpy at r8511 and built it on powerpc, attached the
 buildlog (it crashed at doc generation, but the setup.py build were
 done before, and went fine).

 I've run python2.x setup.py install --prefix install/ and executed the
 tests from there with:
 ~/numpy/install$ PYTHONPATH=lib/python2.6/site-packages/ python2.6 -c
 "import numpy; print numpy.test()" > ../testlog_2.6
 ~/numpy/install$ PYTHONPATH=lib/python2.5/site-packages/ python2.5 -c
 "import numpy; print numpy.test()" > ../testlog_2.5

 attached the testlogs too: there are a couple of failures.

 The failures seem to be related to the long double not conforming to
 IEEE754 standard on linux ppc. I am not sure how to deal with them -
 maybe raising a warning if the user uses long double, as its usage
 will always be flaky on that platform anyway (numpy assumes IEEE
 754-like support)

yeah a warning might be nice.

Just to keep you informed, I tested 1.4.1+r8510+r8510 and it builds
fine on the porterbox, showing only this failure in test():

FAIL: test_umath.TestComplexFunctions.test_loss_of_precision(<type 'numpy.complex64'>,)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/pymodules/python2.6/nose/case.py", line 183, in runTest
    self.test(*self.arg)
  File "/usr/lib/python2.6/dist-packages/numpy/core/tests/test_umath.py", line 524, in check_loss_of_precision
    assert np.all(d < 1e-15)
AssertionError

----------------------------------------------------------------------
Ran 2016 tests in 52.670s

FAILED (KNOWNFAIL=2, failures=1)
<nose.result.TextTestResult run=2016 errors=0 failures=1>

Hence I decided to upload and the build went fine on the buildd machine [1].

[1] 
https://buildd.debian.org/fetch.cgi?pkg=python-numpy&arch=powerpc&ver=1%3A1.4.1-3&stamp=1279914539&file=log&as=raw

The other big show-stopper for numpy is the failure to build on alpha
that you're already aware of: if you need any kind of support, just ask me.

Thanks a lot for your help with the powerpc issue!

Cheers,
-- 
Sandro Tosi (aka morph, morpheus, matrixhasu)
My website: http://matrixhasu.altervista.org/
Me at Debian: http://wiki.debian.org/SandroTosi


[Numpy-discussion] Datarray bug tracker

2010-07-23 Thread Keith Goodman
Report datarray bugs here: http://github.com/fperez/datarray/issues

A datarray is a subclass of a Numpy array that adds the ability to
label the axes and to label the elements along each axis.