Re: [Numpy-discussion] Multi-distribution Linux wheels - please test

2016-02-09 Thread Nadav Horesh
I do not know what happened --- all tests passed, even when I removed openblas 
(Nathaniel was right).

Manylinux config:

python -c 'import numpy; print(numpy.__config__.show())'
blas_opt_info:
define_macros = [('HAVE_CBLAS', None)]
libraries = ['openblas']
language = c
library_dirs = ['/usr/local/lib']
lapack_opt_info:
define_macros = [('HAVE_CBLAS', None)]
libraries = ['openblas']
language = c
library_dirs = ['/usr/local/lib']
blas_mkl_info:
  NOT AVAILABLE
openblas_lapack_info:
define_macros = [('HAVE_CBLAS', None)]
libraries = ['openblas']
language = c
library_dirs = ['/usr/local/lib']
openblas_info:
define_macros = [('HAVE_CBLAS', None)]
libraries = ['openblas']
language = c
library_dirs = ['/usr/local/lib']
None


Source installation:

python -c 'import numpy; print(numpy.__config__.show())'
openblas_info:
library_dirs = ['/usr/local/lib']
libraries = ['openblas', 'openblas']
language = c
runtime_library_dirs = ['/usr/local/lib']
define_macros = [('HAVE_CBLAS', None)]
openblas_lapack_info:
library_dirs = ['/usr/local/lib']
libraries = ['openblas', 'openblas']
language = c
runtime_library_dirs = ['/usr/local/lib']
define_macros = [('HAVE_CBLAS', None)]
lapack_opt_info:
extra_compile_args = ['-g -ftree-vectorize -mtune=native -march=native -O3']
runtime_library_dirs = ['/usr/local/lib']
define_macros = [('HAVE_CBLAS', None)]
libraries = ['openblas', 'openblas', 'atlas', 'f77blas', 'cblas', 'blas']
language = c
library_dirs = ['/usr/local/lib', '/usr/lib']
blas_mkl_info:
  NOT AVAILABLE
blas_opt_info:
extra_compile_args = ['-g -ftree-vectorize -mtune=native -march=native -O3']
runtime_library_dirs = ['/usr/local/lib']
define_macros = [('HAVE_CBLAS', None)]
libraries = ['openblas', 'openblas', 'atlas', 'f77blas', 'cblas', 'blas']
language = c
library_dirs = ['/usr/local/lib', '/usr/lib']
None


From: NumPy-Discussion <numpy-discussion-boun...@scipy.org> on behalf of 
Matthew Brett <matthew.br...@gmail.com>
Sent: 08 February 2016 09:48
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Multi-distribution Linux wheels - please test

Hi Nadav,

On Sun, Feb 7, 2016 at 11:13 PM, Nathaniel Smith <n...@pobox.com> wrote:
> (This is not relevant to the main topic of the thread, but FYI I think the
> recarray issues are fixed in 1.10.4.)
>
> On Feb 7, 2016 11:10 PM, "Nadav Horesh" <nad...@visionsense.com> wrote:
>>
>> I have atlas-lapack-base installed via pacman (required by sagemath).
>> Since the numpy installation insisted on openblas on /usr/local, I got the
>> openblas source-code and installed it on /usr/local.
>> BTW, I use 1.11b rather than 1.10.x since the 1.10 is very slow in
>> handling recarrays. For the tests I am erasing the 1.11 installation, and
>> installing the 1.10.4 wheel. I do verify that I have the right version
>> before running the tests, but I am not sure if there are no unnoticed side
>> effects.
>>
>> Would it help if I put aside the openblas installation and rerun the
>> test?

Would you mind doing something like this, and posting the output?:

virtualenv test-manylinux
source test-manylinux/bin/activate
pip install -f https://nipy.bic.berkeley.edu/manylinux numpy==1.10.4 nose
python -c 'import numpy; numpy.test()'
python -c 'import numpy; print(numpy.__config__.show())'
deactivate

virtualenv test-from-source
source test-from-source/bin/activate
pip install numpy==1.10.4 nose
python -c 'import numpy; numpy.test()'
python -c 'import numpy; print(numpy.__config__.show())'
deactivate

I'm puzzled that the wheel gives a test error when the source install
does not, and my best guess was an openblas problem, but this just to
make sure we have the output from the exact same numpy version, at
least.

Thanks again,

Matthew


Re: [Numpy-discussion] Multi-distribution Linux wheels - please test

2016-02-07 Thread Nadav Horesh
The test results of numpy 1.10.4 installed from source:

OK (KNOWNFAIL=4, SKIP=6)


I think I use openblas, as it is installed instead of the normal blas/cblas.

  Nadav,

From: NumPy-Discussion <numpy-discussion-boun...@scipy.org> on behalf of Nadav 
Horesh <nad...@visionsense.com>
Sent: 07 February 2016 07:28
To: Discussion of Numerical Python; SciPy Developers List
Subject: Re: [Numpy-discussion] Multi-distribution Linux wheels - please test

Test platform: python 3.4.1 on archlinux x86_64

scipy test: OK

OK (KNOWNFAIL=97, SKIP=1626)


numpy tests: Failed on long double and int128 tests, and got one error:

Traceback (most recent call last):
  File "/usr/lib/python3.5/site-packages/nose/case.py", line 198, in runTest
self.test(*self.arg)
  File "/usr/lib/python3.5/site-packages/numpy/core/tests/test_longdouble.py", 
line 108, in test_fromstring_missing
np.array([1]))
  File "/usr/lib/python3.5/site-packages/numpy/testing/utils.py", line 296, in 
assert_equal
return assert_array_equal(actual, desired, err_msg, verbose)
  File "/usr/lib/python3.5/site-packages/numpy/testing/utils.py", line 787, in 
assert_array_equal
verbose=verbose, header='Arrays are not equal')
  File "/usr/lib/python3.5/site-packages/numpy/testing/utils.py", line 668, in 
assert_array_compare
raise AssertionError(msg)
AssertionError:
Arrays are not equal

(shapes (6,), (1,) mismatch)
 x: array([ 1., -1.,  3.,  4.,  5.,  6.])
 y: array([1])

----------------------------------------------------------------------
Ran 6019 tests in 28.029s

FAILED (KNOWNFAIL=13, SKIP=12, errors=1, failures=18)




From: NumPy-Discussion <numpy-discussion-boun...@scipy.org> on behalf of 
Matthew Brett <matthew.br...@gmail.com>
Sent: 06 February 2016 22:26
To: Discussion of Numerical Python; SciPy Developers List
Subject: [Numpy-discussion] Multi-distribution Linux wheels - please test

Hi,

As some of you may have seen, Robert McGibbon and Nathaniel have just
guided a PEP for multi-distribution Linux wheels past the approval
process over on distutils-sig:

https://www.python.org/dev/peps/pep-0513/

The PEP includes a docker image on which y'all can build wheels which
match the PEP:

https://quay.io/repository/manylinux/manylinux

Now we're at the stage where we need stress-testing of the built
wheels to find any problems we hadn't thought of.

I've built numpy and scipy wheels here:

https://nipy.bic.berkeley.edu/manylinux/

So, if you have a Linux distribution handy, we would love to hear from
you about the results of testing these guys, maybe on the lines of:

pip install -f https://nipy.bic.berkeley.edu/manylinux numpy scipy
python -c 'import numpy; numpy.test()'
python -c 'import scipy; scipy.test()'

These manylinux wheels should soon be available on pypi, and soon
after, installable with latest pip, so we would like to fix as many
problems as possible before going live.

Cheers,

Matthew


Re: [Numpy-discussion] Multi-distribution Linux wheels - please test

2016-02-07 Thread Nadav Horesh
I have atlas-lapack-base installed via pacman (required by sagemath). Since the 
numpy installation insisted on openblas on /usr/local, I got the openblas 
source-code and installed it on /usr/local.
BTW, I use 1.11b rather than 1.10.x since the 1.10 is very slow in handling 
recarrays. For the tests I am erasing the 1.11 installation, and installing the 
1.10.4 wheel. I do verify that I have the right version before running the 
tests, but I am not sure if there are no unnoticed side effects.

Would it help if I put aside the openblas installation and rerun the test?

  Nadav

From: NumPy-Discussion <numpy-discussion-boun...@scipy.org> on behalf of 
Matthew Brett <matthew.br...@gmail.com>
Sent: 08 February 2016 08:13
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Multi-distribution Linux wheels - please test

On Sun, Feb 7, 2016 at 10:09 PM, Nadav Horesh <nad...@visionsense.com> wrote:
> Thank you for reminding me, it is OK now:
> $ python -c 'import numpy; print(numpy.__config__.show())'
>
> lapack_opt_info:
> library_dirs = ['/usr/local/lib']
> language = c
> libraries = ['openblas']
> define_macros = [('HAVE_CBLAS', None)]
> blas_mkl_info:
>   NOT AVAILABLE
> openblas_info:
> library_dirs = ['/usr/local/lib']
> language = c
> libraries = ['openblas']
> define_macros = [('HAVE_CBLAS', None)]
> openblas_lapack_info:
> library_dirs = ['/usr/local/lib']
> language = c
> libraries = ['openblas']
> define_macros = [('HAVE_CBLAS', None)]
> blas_opt_info:
> library_dirs = ['/usr/local/lib']
> language = c
> libraries = ['openblas']
> define_macros = [('HAVE_CBLAS', None)]
> None
>
> I updated openblas to the latest version (0.2.15) and it passes the tests

Oh dear - now I'm confused.  So you installed the wheel, and tested
it, and it gave a test failure.  Then you updated openblas using
pacman, and then reran the tests against the wheel numpy, and they
passed?  That's a bit frightening - the wheel should only see its own
copy of openblas...

Thanks for persisting,

Matthew
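
A minimal sketch for checking that (Linux-only, and assuming numpy.core.multiarray 
is a compiled extension module in the install under test): list the shared 
libraries the extension actually resolves to.

import subprocess
import numpy.core.multiarray as m

# print the resolved shared-library dependencies of numpy's core module;
# the libopenblas line shows which copy the wheel really picked up
print(subprocess.check_output(['ldd', m.__file__]).decode())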


Re: [Numpy-discussion] Multi-distribution Linux wheels - please test

2016-02-07 Thread Nadav Horesh
Thank you for reminding me, it is OK now:
$ python -c 'import numpy; print(numpy.__config__.show())'

lapack_opt_info:
library_dirs = ['/usr/local/lib']
language = c
libraries = ['openblas']
define_macros = [('HAVE_CBLAS', None)]
blas_mkl_info:
  NOT AVAILABLE
openblas_info:
library_dirs = ['/usr/local/lib']
language = c
libraries = ['openblas']
define_macros = [('HAVE_CBLAS', None)]
openblas_lapack_info:
library_dirs = ['/usr/local/lib']
language = c
libraries = ['openblas']
define_macros = [('HAVE_CBLAS', None)]
blas_opt_info:
library_dirs = ['/usr/local/lib']
language = c
libraries = ['openblas']
define_macros = [('HAVE_CBLAS', None)]
None

I updated openblas to the latest version (0.2.15) and it passes the tests.

  Nadav.

From: NumPy-Discussion <numpy-discussion-boun...@scipy.org> on behalf of 
Matthew Brett <matthew.br...@gmail.com>
Sent: 08 February 2016 01:33
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Multi-distribution Linux wheels - please test

Hi,

On Sun, Feb 7, 2016 at 2:06 AM, Nadav Horesh <nad...@visionsense.com> wrote:
> The test results of numpy 1.10.4 installed from source:
>
> OK (KNOWNFAIL=4, SKIP=6)
>
>
> I think I use openblas, as it is installed instead of the normal blas/cblas.

Thanks again for the further tests.

What do you get for:

python -c 'import numpy; print(numpy.__config__.show())'

Matthew


Re: [Numpy-discussion] Multi-distribution Linux wheels - please test

2016-02-06 Thread Nadav Horesh
Test platform: python 3.4.1 on archlinux x86_64

scipy test: OK

OK (KNOWNFAIL=97, SKIP=1626)


numpy tests: Failed on long double and int128 tests, and got one error:

Traceback (most recent call last):
  File "/usr/lib/python3.5/site-packages/nose/case.py", line 198, in runTest
self.test(*self.arg)
  File "/usr/lib/python3.5/site-packages/numpy/core/tests/test_longdouble.py", 
line 108, in test_fromstring_missing
np.array([1]))
  File "/usr/lib/python3.5/site-packages/numpy/testing/utils.py", line 296, in 
assert_equal
return assert_array_equal(actual, desired, err_msg, verbose)
  File "/usr/lib/python3.5/site-packages/numpy/testing/utils.py", line 787, in 
assert_array_equal
verbose=verbose, header='Arrays are not equal')
  File "/usr/lib/python3.5/site-packages/numpy/testing/utils.py", line 668, in 
assert_array_compare
raise AssertionError(msg)
AssertionError: 
Arrays are not equal

(shapes (6,), (1,) mismatch)
 x: array([ 1., -1.,  3.,  4.,  5.,  6.])
 y: array([1])

----------------------------------------------------------------------
Ran 6019 tests in 28.029s

FAILED (KNOWNFAIL=13, SKIP=12, errors=1, failures=18)




From: NumPy-Discussion  on behalf of 
Matthew Brett 
Sent: 06 February 2016 22:26
To: Discussion of Numerical Python; SciPy Developers List
Subject: [Numpy-discussion] Multi-distribution Linux wheels - please test

Hi,

As some of you may have seen, Robert McGibbon and Nathaniel have just
guided a PEP for multi-distribution Linux wheels past the approval
process over on distutils-sig:

https://www.python.org/dev/peps/pep-0513/

The PEP includes a docker image on which y'all can build wheels which
match the PEP:

https://quay.io/repository/manylinux/manylinux

Now we're at the stage where we need stress-testing of the built
wheels to find any problems we hadn't thought of.

I've built numpy and scipy wheels here:

https://nipy.bic.berkeley.edu/manylinux/

So, if you have a Linux distribution handy, we would love to hear from
you about the results of testing these guys, maybe on the lines of:

pip install -f https://nipy.bic.berkeley.edu/manylinux numpy scipy
python -c 'import numpy; numpy.test()'
python -c 'import scipy; scipy.test()'

These manylinux wheels should soon be available on pypi, and soon
after, installable with latest pip, so we would like to fix as many
problems as possible before going live.

Cheers,

Matthew


Re: [Numpy-discussion] [OT] Interpolation of an unevenly sampled bandwidth limited signal

2016-02-04 Thread Nadav Horesh
Thank you, I'll try this.
Interpolation by the sinc function is equivalent to what you get if you 
synthesize a smooth function by summing its Fourier components obtained via an 
FFT of the data.

  Nadav.


From: NumPy-Discussion <numpy-discussion-boun...@scipy.org> on behalf of Evgeni 
Burovski <evgeny.burovs...@gmail.com>
Sent: 04 February 2016 11:42
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] [OT] Interpolation of an unevenly sampled 
bandwidth limited signal

On Thu, Feb 4, 2016 at 9:32 AM, Nadav Horesh <nad...@visionsense.com> wrote:
> I have several cases of hand-digitized spectra that I'd like to resample
> at even spacings. My problem is that cubic or RBF splines
> often result in unacceptable over-shooting. Is there a python module that
> provides something similar to sinc interpolation on an unevenly sampled
> signal?


There are PCHIP and Akima interpolators in scipy.interpolate, both are
designed to prevent overshooting at the expense of only being
C1-smooth. (No idea about sinc interpolation)
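
A minimal sketch of both, assuming a scipy recent enough to ship Akima1DInterpolator:

import numpy as np
from scipy.interpolate import PchipInterpolator, Akima1DInterpolator

x = np.sort(np.random.uniform(0, 10, 50))    # uneven sample locations
y = np.exp(-x / 5) * np.sin(3 * x)           # stand-in for digitized data
x_even = np.linspace(x[0], x[-1], 500)       # resample at even spacings

y_pchip = PchipInterpolator(x, y)(x_even)    # monotone, C1, no overshoot
y_akima = Akima1DInterpolator(x, y)(x_even)  # local, C1, little overshoot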


[Numpy-discussion] [OT] Interpolation of an unevenly sampled bandwidth limited signal

2016-02-04 Thread Nadav Horesh
I have several cases of hand-digitized spectra that I'd like to resample at 
even spacings. My problem is that cubic or RBF splines often result in 
unacceptable over-shooting. Is there a python module that provides something 
similar to sinc interpolation on an unevenly sampled signal?


   Nadav.


Re: [Numpy-discussion] [OT] Interpolation of an unevenly sampled bandwidth limited signal

2016-02-04 Thread Nadav Horesh
Excellent! I was looking for a nonuniform FFT as a component for the 
interpolation. I am thinking of combining nufft with czt (from scipy).


  Nadav




From: NumPy-Discussion <numpy-discussion-boun...@scipy.org> on behalf of 
Charles R Harris <charlesr.har...@gmail.com>
Sent: 04 February 2016 17:17
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] [OT] Interpolation of an unevenly sampled 
bandwidth limited signal



On Thu, Feb 4, 2016 at 4:34 AM, Nadav Horesh 
<nad...@visionsense.com> wrote:
Thank you, I'll try this.
Interpolation by the sinc function is equivalent to what you get if you 
synthesize a smooth function by summing its Fourier components obtained via an 
FFT of the data.

You might be interested in the NUFFT, see 
https://jakevdp.github.io/blog/2015/02/24/optimizing-python-with-numpy-and-numba/



Chuck


Re: [Numpy-discussion] Numpy 1.11.0b1 is out

2016-01-27 Thread Nadav Horesh
Why is the dot function/method slower than @ on Python 3.5.1? Tested with the 
latest 1.11 maintenance branch.



np.__version__
Out[39]: '1.11.0.dev0+Unknown'


%timeit A @ c
1 loops, best of 3: 185 µs per loop


%timeit A.dot(c)
1000 loops, best of 3: 526 µs per loop


%timeit np.dot(A,c)
1000 loops, best of 3: 527 µs per loop


A.dtype, A.shape, A.flags
Out[43]: 
(dtype('float32'), (100, 100, 3),   C_CONTIGUOUS : True
   F_CONTIGUOUS : False
   OWNDATA : True
   WRITEABLE : True
   ALIGNED : True
   UPDATEIFCOPY : False)


c.dtype, c.shape, c.flags
Out[44]: 
(dtype('float32'), (3, 3),   C_CONTIGUOUS : True
   F_CONTIGUOUS : False
   OWNDATA : True
   WRITEABLE : True
   ALIGNED : True
   UPDATEIFCOPY : False)
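
For what it is worth, a quick sketch (with made-up data of the same shapes) 
confirming that the three calls compute the same values here, so the gap is 
overhead in the dot code path rather than a different operation:

import numpy as np

A = np.random.rand(100, 100, 3).astype(np.float32)
c = np.random.rand(3, 3).astype(np.float32)

r1 = A @ c                           # matmul: a stack of (100, 3) @ (3, 3)
r2 = A.dot(c)                        # np.dot's N-d rule gives the same values
r3 = np.einsum('ijk,kl->ijl', A, c)  # the explicit index form
assert np.allclose(r1, r2) and np.allclose(r1, r3)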





From: NumPy-Discussion  on behalf of 
Charles R Harris 
Sent: 26 January 2016 22:49
To: numpy-discussion; SciPy Developers List; SciPy Users List
Subject: [Numpy-discussion] Numpy 1.11.0b1 is out
  



Hi All,

 I'm pleased to announce that Numpy 1.11.0b1 is now available on sourceforge. 
This is a source release as the mingw32 toolchain is broken. Please test it out 
and report any errors that you discover. Hopefully we can do better with 1.11.0 
than we did with 1.10.0 ;)

 Chuck



Re: [Numpy-discussion] Numpy 1.10.2rc2 released

2015-12-09 Thread Nadav Horesh
Is it possible that recarrays are slow again?


  Nadav



From: NumPy-Discussion  on behalf of 
Charles R Harris 
Sent: 08 December 2015 03:41
To: numpy-discussion; SciPy Developers List; SciPy Users List
Subject: [Numpy-discussion] Numpy 1.10.2rc2 released

Hi All,

I'm pleased to announce the release of Numpy 1.10.2rc2. After two months of 
stomping bugs I think the house is clean and we are almost ready to put it up 
for sale. However, bugs are persistent and may show up at anytime, so please 
inspect and test thoroughly.  Windows binaries and source releases can be found 
at the usual place on 
Sourceforge. If 
there are no reports of problems in the next week I plan to release the final. 
Further bug squashing will be left to the 1.11 release except possibly for 
regressions. The release notes give more detail on the changes.
bon appétit,

Chuck


[Numpy-discussion] dot product: large speed difference metween seemingly indentical operations

2015-10-17 Thread Nadav Horesh
The functions dot, matmul and tensordot perform the same on an MxN matrix 
multiplied by a length-N vector, but very differently if the matrix is replaced 
by a PxQxN array. Why?


In [3]: a = rand(1000000,3)

In [4]: a1 = a.reshape(1000,1000,3)

In [5]: w = rand(3)

In [6]: %timeit a.dot(w)
100 loops, best of 3: 3.47 ms per loop

In [7]: %timeit a1.dot(w)  # Very slow!
10 loops, best of 3: 25.5 ms per loop

In [8]: %timeit a@w
100 loops, best of 3: 3.45 ms per loop

In [9]: %timeit a1@w
100 loops, best of 3: 6.77 ms per loop

In [10]: %timeit tensordot(a,w,1)
100 loops, best of 3: 3.44 ms per loop

In [11]: %timeit tensordot(a1,w,1)
100 loops, best of 3: 3.41 ms per loop

BTW, this is not a corner case, since PxQx3 arrays represent RGB images.

  Nadav
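
A workaround sketch in the meantime: flattening the leading axes lets dot take 
the single fast (M, N) x (N,) path, matching tensordot.

import numpy as np
from numpy.random import rand

a1 = rand(1000, 1000, 3)
w = rand(3)

# one (1000000, 3) x (3,) product, then restore the leading axes
fast = a1.reshape(-1, 3).dot(w).reshape(a1.shape[:-1])
assert np.allclose(fast, np.tensordot(a1, w, 1))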



Re: [Numpy-discussion] A regression in numpy 1.10: VERY slow memory mapped file generation

2015-10-14 Thread Nadav Horesh
You're right, the delay is not in the memmap:
...
_data = N.memmap(filename, dtype=frame_type, mode=mode, offset=fh_size, shape=nframes)
data = _data['data']

The delay is in the 2nd line, which selects a field from a recarray.

I use a common drawing application, mypaint, that uses numpy, and I think it 
also suffers from that delay.

Thank you,
  Nadav

From: NumPy-Discussion <numpy-discussion-boun...@scipy.org> on behalf of Allan 
Haldane <allanhald...@gmail.com>
Sent: 14 October 2015 18:59
To: numpy-discussion@scipy.org
Subject: Re: [Numpy-discussion] A regression in numpy 1.10: VERY slow memory 
mapped file generation

On 10/14/2015 01:23 AM, Nadav Horesh wrote:
>
> I have binary files of sizes ranging from a few MB to 1GB, which I read and 
> process as memory-mapped files (via np.memmap). Until numpy 1.9 the creation 
> of a recarray on an existing file (without reading its content) was 
> instantaneous, and now it takes ~6 seconds (system: archlinux on sandy 
> bridge). The top of a profiling (using ipython %prun) list is:
>
>
>    ncalls  tottime  percall  cumtime  percall filename:lineno(function)
>        21    3.037    0.145    4.266    0.203 _internal.py:372(_check_field_overlap)
>   3713431    1.663    0.000    1.663    0.000 _internal.py:366(<genexpr>)
>   3713750    0.790    0.000    0.790    0.000 {range}
>   3713709    0.406    0.000    0.406    0.000 {method 'update' of 'set' objects}
>       322    0.320    0.001    1.984    0.006 {method 'extend' of 'list' objects}
>
> Nadav.

Hi Nadav,

The slowdown is due to a problem in a PR I introduced to add safety checks
to views of structured arrays (to prevent segfaults involving object
fields), which will hopefully be fixed quickly. It is being discussed here:

https://github.com/numpy/numpy/issues/6467

Also, I do not think the problem is with memmap - as far as I have
tested, memmap is still fast. Most likely what is slowing your script
down is subsequent access to the fields of the array, which is what has
regressed. Is that right?

Allan
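
A minimal sketch for separating the two costs (the dtype below is a made-up 
stand-in; a dtype with many fields shows the regression much more strongly):

import os, tempfile, time
import numpy as np

frame_type = np.dtype([('header', np.uint16, 16), ('data', np.uint16, 1024)])
path = os.path.join(tempfile.mkdtemp(), 'frames.bin')
np.zeros(1000, dtype=frame_type).tofile(path)

t0 = time.time()
mm = np.memmap(path, dtype=frame_type, mode='r')   # creating the memmap
t1 = time.time()
data = mm['data']                                  # selecting a field
t2 = time.time()
print('memmap: %.4fs  field access: %.4fs' % (t1 - t0, t2 - t1))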


[Numpy-discussion] A regression in numpy 1.10: VERY slow memory mapped file generation

2015-10-13 Thread Nadav Horesh

I have binary files of sizes ranging from a few MB to 1GB, which I read and 
process as memory-mapped files (via np.memmap). Until numpy 1.9 the creation of 
a recarray on an existing file (without reading its content) was instantaneous, 
and now it takes ~6 seconds (system: archlinux on sandy bridge). The top of a 
profiling (using ipython %prun) list is:


   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
       21    3.037    0.145    4.266    0.203 _internal.py:372(_check_field_overlap)
  3713431    1.663    0.000    1.663    0.000 _internal.py:366(<genexpr>)
  3713750    0.790    0.000    0.790    0.000 {range}
  3713709    0.406    0.000    0.406    0.000 {method 'update' of 'set' objects}
      322    0.320    0.001    1.984    0.006 {method 'extend' of 'list' objects}

Nadav.


Re: [Numpy-discussion] np.diag(np.dot(A, B))

2015-05-22 Thread Nadav Horesh
There was an idea on this list to provide a function to run multiple dots on 
several vectors/matrices. This seems to be a particular implementation of that 
proposed function.

  Nadav.

On 22 May 2015 11:58, David Cournapeau <courn...@gmail.com> wrote:


On Fri, May 22, 2015 at 5:39 PM, Mathieu Blondel 
<math...@mblondel.org> wrote:
Hi,

I often need to compute the equivalent of

np.diag(np.dot(A, B)).

Computing np.dot(A, B) is highly inefficient if you only need the diagonal 
entries. Two more efficient ways of computing the same thing are

np.sum(A * B.T, axis=1)

and

np.einsum('ij,ji->i', A, B).

The first can allocate quite a lot of temporary memory.
The second can be quite cryptic for someone not familiar with einsum.
I assume that einsum does not compute np.dot(A, B), but I haven't verified.

Since this is quite a recurrent pattern, I was wondering if it would be 
worth adding a dedicated function to NumPy and SciPy's sparse module. A 
possible name would be diagdot. The best performance would be obtained when A 
is C-style and B fortran-style.

Does your implementation use BLAS, or is it just a wrapper around einsum?

David
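
A sketch comparing the three formulations, with made-up shapes; the sum and 
einsum forms avoid materializing the full product:

import numpy as np

A = np.random.rand(200, 300)
B = np.random.rand(300, 200)

d1 = np.diag(np.dot(A, B))        # O(n^3) work plus a full (200, 200) temporary
d2 = np.sum(A * B.T, axis=1)      # elementwise product, one (200, 300) temporary
d3 = np.einsum('ij,ji->i', A, B)  # no large temporary
assert np.allclose(d1, d2) and np.allclose(d1, d3)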



Re: [Numpy-discussion] Changing the numpy array into required shape

2014-08-23 Thread Nadav Horesh
Replace

data = data.byteswap()

By

data = data.byteswap()[::-1]

  Nadav

On 23 Aug 2014 09:15, Cleo Drakos <cleo21dra...@gmail.com> wrote:

Hello numpy users:

I have a 2d numpy array of 480 rows and 1440 columns, named 'data' below:



The first element belongs to (49.875S,179.875W),
the second element belongs to (49.625S,179.625W),
and the last element belongs to (49.875N,179.875E).

import os, glob, gdal, numpy as np

fname = '3B42RT.2014010606.7.bin'

with open(fname, 'rb') as fi:
    fi.seek(2880, 0)
    data = np.fromfile(fi, dtype=np.uint16, count=480*1440)
data = data.byteswap()
data = data.reshape(1440, 480)

How can I convert this numpy array so that its first element belongs to 
(49.875N,179.625W), i.e., upper-left latitude and longitude respectively, and 
the last element belongs to (49.625S,179.875E), i.e., lower-right latitude and 
longitude respectively?

I tried to rotate it, but I do not think it is correct.



>>> data = np.rot90(data,1)

Have any of you experience with this type of problem? The binary file I am 
using is here: 
ftp://trmmopen.gsfc.nasa.gov/pub/merged/3B42RT/3B42RT.2014010606.7.bin.gz


cleo


[Numpy-discussion] numpy 1.9b1 bug in pad function?

2014-06-14 Thread Nadav Horesh



In [1]: import numpy as np
In [2]: a = np.arange(4)
In [3]: np.pad(a,2)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-3-f56fe53684b8> in <module>()
----> 1 np.pad(a,2)

/usr/lib64/python3.3/site-packages/numpy/lib/arraypad.py in pad(array, pad_width, mode, **kwargs)
   1331         elif mode is None:
   1332             raise ValueError('Keyword mode must be a function or one of %s.' %
-> 1333                              (list(allowedkwargs.keys()),))
   1334         else:
   1335             # Drop back to old, slower np.apply_along_axis mode for user-supplied

ValueError: Keyword mode must be a function or one of ['edge', 'constant', 
'wrap', 'reflect', 'median', 'maximum', 'minimum', 'symmetric', 'linear_ramp', 
'mean'].

In [4]: np.__version__
Out[4]: '1.9.0b1'

The documentation specifies that the mode parameter is optional.

I am getting the same for both python 2.7 and 3.3
OS: Gentoo linux

   Nadav

 


Re: [Numpy-discussion] numpy 1.9b1 bug in pad function?

2014-06-14 Thread Nadav Horesh
This is most likely a documentation error since:



In [7]: np.pad(a)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-7-7a0346d77134> in <module>()
----> 1 np.pad(a)

TypeError: pad() missing 1 required positional argument: 'pad_width'


Nadav

From: numpy-discussion-boun...@scipy.org <numpy-discussion-boun...@scipy.org> 
on behalf of Stéfan van der Walt <ste...@sun.ac.za>
Sent: 14 June 2014 13:39
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] numpy 1.9b1 bug in pad function?

Hi Nadav

On Sat, Jun 14, 2014 at 8:11 AM, Nadav Horesh <nad...@visionsense.com> wrote:
 In [4]: np.__version__
 Out[4]: '1.9.0b1'

 The documentation specifies that the mode parameter is optional

I don't see the optional specification in the docstring.  Perhaps
because mode=None in the signature?

The reason is that then, if you do not specify the signature (as in
your case), you get the following helpful message:

ValueError: Keyword mode must be a function or one of ['reflect',
'linear_ramp', 'edge', 'constant', 'minimum', 'wrap', 'symmetric',
'median', 'maximum', 'mean'].

Instead of

pad() takes exactly 3 arguments (2 given)

Regards
Stéfan


Re: [Numpy-discussion] Generalized inner?

2013-03-24 Thread Nadav Horesh
This is what APL's . operator does, and I found it useful from time to time 
(but I was much younger then).

  Nadav

Jaime Fernández del Río <jaime.f...@gmail.com> wrote:



The other day I found myself finding trailing edges in binary images doing 
something like this:

arr = np.random.randint(2, size=1000).astype(np.int8)
pattern = np.array([1, 1, 1, 1, 0, 0])
arr_match = 2*arr - 1
pat_match = 2*pattern - 1
from numpy.lib.stride_tricks import as_strided
arr_win = as_strided(arr_match, shape=arr.shape[:-1] + (arr.shape[-1]-len(pattern)+1, len(pattern)), strides=arr.strides+arr.strides[-1:])
matches = np.einsum('...i, i', arr_win, pat_match) == len(pattern)

While this works fine, this led me to thinking that all these functions (inner, 
dot, einsum, tensordot...) could be generalized to any other ufuncs apart from 
a pointwise np.multiply followed by an np.add reduction.

It would be great if there was a np.gen_inner that allowed something like:

np.gen_inner(arr_win, pattern, pointwise=np.equal, reduce=np.logical_and)

I would like to think that such a generalization would be useful in other 
settings (although I can't think of any right now), and that it could find its 
place in numpy, rather than in scipy.ndimage or the like. Does this make any 
sense? Is there any already existing way of doing this that I'm overlooking?

Jaime

--
(\__/)
( O.o)
(  ) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes de 
dominación mundial.
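
On the last question, a sketch of one already-existing way to get this 
particular pointwise/reduce pair: a comparison followed by a logical_and 
reduction over the window axis, with no +/-1 recoding needed.

import numpy as np
from numpy.lib.stride_tricks import as_strided

arr = np.random.randint(2, size=1000).astype(np.int8)
pattern = np.array([1, 1, 1, 1, 0, 0], dtype=np.int8)
n = len(pattern)
arr_win = as_strided(arr, shape=(arr.shape[-1] - n + 1, n),
                     strides=arr.strides + arr.strides[-1:])

# pointwise np.equal, then an np.logical_and reduction over the windows
matches = np.logical_and.reduce(arr_win == pattern, axis=-1)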


Re: [Numpy-discussion] Matrix Expontial for differenr t.

2013-01-28 Thread Nadav Horesh
I did not try it, but I assume that you can build a stack of diagonal matrices 
as an MxNxN array and use tensordot with the matrix v (and its inverse). The 
trivial way to accelerate the loop is to calculate the inverse of v before the 
loop.

   Nadav
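
A sketch of that idea: diagonalize once, apply the inverse once, then one 
vectorized step for all ts (note that eig may return complex values, exactly 
as in the original loop):

import numpy as np

def sol_matexp_vec(A, tlist, y0):
    w, v = np.linalg.eig(A)
    u = np.linalg.solve(v, y0)             # v^-1 y0, computed once
    phases = np.exp(-np.outer(tlist, w))   # row i holds exp(-w * t_i)
    return np.dot(phases * u, v.T)         # out[i] = v @ (exp(-w*t_i) * u)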

From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
on behalf of Till Stensitzki [mail.t...@gmx.de]
Sent: 28 January 2013 18:31
To: numpy-discussion@scipy.org
Subject: [Numpy-discussion] Matrix Expontial for differenr t.

Hi group,
is there a faster way to calculate the
matrix exponential for different t's
than this:

def sol_matexp(A, tlist, y0):
    w, v = np.linalg.eig(A)
    out = np.zeros((tlist.size, y0.size))
    for i, t in enumerate(tlist):
        sol_t = np.dot(v, np.diag(np.exp(-w*t))).dot(np.linalg.inv(v)).dot(y0)
        out[i, :] = sol_t
    return out

This calculates exp(-Kt).dot(y0) for a list of ts.

greetings
Till



Re: [Numpy-discussion] phase unwrapping (1d)

2013-01-13 Thread Nadav Horesh
There is an unwrap function in numpy. Doesn't it work for you?

   Nadav

From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
on behalf of Neal Becker [ndbeck...@gmail.com]
Sent: 11 January 2013 17:40
To: numpy-discussion@scipy.org
Subject: [Numpy-discussion] phase unwrapping (1d)

np.unwrap was too slow, so I rolled my own (in c++).

I wanted to be able to handle the case of

unwrap (arg (x1) + arg (x2))

Here, phase can change by more than 2pi.

I came up with the following algorithm, any thoughts?

In the following, y is normally set to pi.
o points to output
i points to input
nint1 finds nearest integer

  value_t prev_o = init;
  for (; i != e; ++i, ++o) {
    *o = cnt * 2 * y + *i;
    value_t delta = *o - prev_o;

    if (delta / y > 1 or delta / y < -1) {
      int i = nint1<int> (delta / (2*y));
      *o -= 2*y*i;
      cnt -= i;
    }

    prev_o = *o;
  }
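
For comparison, a NumPy sketch of the same algorithm (assuming cnt starts at 0, 
and using rint for the nearest integer):

import numpy as np

def unwrap_multi(p, y=np.pi, init=0.0):
    # per-step phase jumps, measured from the previous output
    d = np.diff(np.concatenate(([init], p)))
    k = np.rint(d / (2 * y))         # nearest-integer number of 2*y jumps
    return p - 2 * y * np.cumsum(k)  # accumulate corrections, as cnt does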




Re: [Numpy-discussion] Getting C-function pointers from Python to C

2012-04-12 Thread Nadav Horesh

 Example:

 lib = ctypes.CDLL('libm.dylib')
 address_as_integer = ctypes.cast(lib.sin, ctypes.c_void_p).value

 Excellent!

  Sorry for the hijack, thanks for the ride,

   Nadav.


Re: [Numpy-discussion] Getting C-function pointers from Python to C

2012-04-10 Thread Nadav Horesh
Sorry for being slow.
There is (I think) a related question I raised on the skimage list:
I have a cython function that calls a C callback function in a loop (one call 
for each pixel in an image). The C function is compiled in a different shared 
library (a simple C library, not a python module). I would like a python script 
to get the address of the C function and pass it on to the cython function as 
the pointer for the callback function.

As I understand, Travis's issue starts once the callback address is obtained; 
but is there a direct method to retrieve the address from the shared library?

   Nadav.
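
The ctypes cast mentioned earlier in the thread seems to apply directly; a 
sketch, where 'libmyfilter.so' and 'my_callback' are placeholder names:

import ctypes

lib = ctypes.CDLL('libmyfilter.so')   # hypothetical plain C shared library
addr = ctypes.cast(lib.my_callback, ctypes.c_void_p).value
# addr is an integer that the cython function can cast back to the
# callback's C function-pointer type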

From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
On Behalf Of Travis Oliphant [teoliph...@gmail.com]
Sent: 10 April 2012 03:11
To: Discussion of Numerical Python
Subject: [Numpy-discussion] Getting C-function pointers from Python to C

Hi all,

Some of you are aware of Numba.   Numba allows you to create the equivalent of 
C-functions dynamically from Python.   One purpose of this system is to allow 
NumPy to take these functions and use them in operations like ufuncs, 
generalized ufuncs, file-reading, fancy-indexing, and so forth.  There are 
actually many use-cases that one can imagine for such things.

One question is how do you pass this function pointer to the C-side.  On the 
Python side, Numba allows you to get the raw integer address of the equivalent 
C-function pointer that it just created out of the Python code.  One can 
think of this as a 32- or 64-bit integer that you can cast to a C-function 
pointer.

Now, how should this C-function pointer be passed from Python to NumPy?   One 
approach is just to pass it as an integer --- in other words have an API in C 
that accepts an integer as the first argument that the internal function 
interprets as a C-function pointer.

This is essentially what ctypes does when creating a ctypes function pointer 
out of:

  func = ctypes.CFUNCTYPE(restype, *argtypes)(integer)

Of course the problem with this is that you can easily hand it integers which 
don't make sense and which will cause a segfault when control is passed to this 
function.

We could also piggy-back on-top of Ctypes and assume that a ctypes 
function-pointer object is passed in.   This allows some error-checking at 
least and also has the benefit that one could use ctypes to access a c-function 
library where these functions were defined. I'm leaning towards this approach.

Now, the issue is how to get the C-function pointer (that npy_intp integer) 
back and hand it off internally.   Unfortunately, ctypes does not make it very 
easy to get this address (that I can see).There is no ctypes C-API, for 
example.There are two potential options:

1) Create an API for such Ctypes function pointers in NumPy and use the 
ctypes object structure.  If ctypes were to ever change it's object structure 
we would have to adapt this API.

Something like this is what is envisioned here:

 typedef struct {
PyObject_HEAD
char *b_ptr;
 } _cfuncptr_object;

then the function pointer is:

(*((void **)(((_sp_cfuncptr_object *)(obj))->b_ptr)))

which could be wrapped-up into a nice little NumPy C-API call like

void * Npy_ctypes_funcptr(obj)


2) Use the Python API of ctypes to do the same thing.   This has the 
advantage of not needing to mirror the simple _cfuncptr_object structure in 
NumPy but it is *much* slower to get the address.   It basically does the 
equivalent of

ctypes.cast(obj, ctypes.c_void_p).value
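
A sketch of the round trip this implies, with a made-up add function standing 
in for a Numba-generated one:

import ctypes

ADD_T = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int, ctypes.c_int)
cfunc = ADD_T(lambda a, b: a + b)                 # a C-callable wrapper
addr = ctypes.cast(cfunc, ctypes.c_void_p).value  # the raw integer address
rebuilt = ADD_T(addr)                             # integer -> function pointer
assert rebuilt(2, 3) == 5                         # cfunc must stay alive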


There is working code for this in the ctypes_callback branch of my 
scipy fork on github.


I would like to propose two things:

* creating a Npy_ctypes_funcptr(obj) function in the C-API of NumPy and
* implement it with the simple pointer dereference above (option #1)


Thoughts?

-Travis









Re: [Numpy-discussion] histogram help

2012-01-31 Thread Nadav Horesh
Do you want a histogram of z for each (x,y)?

   Nadav


From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
On Behalf Of Ruby Stevenson [ruby...@gmail.com]
Sent: 30 January 2012 21:27
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] histogram help

Sorry, I realize I didn't describe the problem completely clearly or correctly.

the (x,y) in this case is just many co-ordinates, and each coordinate
has a list of values (Z values) associated with it.  The bins are
allocated for the Z.

I hope this clarifies things a little. Thanks again.

Ruby




On Mon, Jan 30, 2012 at 2:21 PM, Ruby Stevenson ruby...@gmail.com wrote:
 hi, all

 I am trying to figure out how to do histogram with numpy

 I have a three-dimensional array A[x,y,z]; another array (bins) has
 been allocated along the Z dimension, z'

 how can I get the histogram H[ x, y, z' ]?

 thanks for your help.

 Ruby
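
A minimal sketch of one way to bin the Z values per (x,y) coordinate, assuming 
A has shape (nx, ny, nz) and 'edges' holds the allocated bin edges:

import numpy as np

A = np.random.rand(4, 5, 100)        # stand-in for the real data
edges = np.linspace(0.0, 1.0, 11)    # 10 bins along Z

H = np.apply_along_axis(lambda v: np.histogram(v, bins=edges)[0], 2, A)
# H.shape == (4, 5, 10): one histogram per (x, y)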


Re: [Numpy-discussion] Strange error raised by scipy.special.erf

2012-01-24 Thread Nadav Horesh
I filed a ticket (#1590).

 Thank you for the verification.

   Nadav.

From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
On Behalf Of Pierre Haessig [pierre.haes...@crans.org]
Sent: 24 January 2012 16:01
To: numpy-discussion@scipy.org
Subject: Re: [Numpy-discussion] Strange error raised by scipy.special.erf

On 22/01/2012 11:28, Nadav Horesh wrote:
 >>> special.erf(26.5)
 1.0
 >>> special.erf(26.6)
 Traceback (most recent call last):
   File "<pyshell#7>", line 1, in <module>
     special.erf(26.6)
 FloatingPointError: underflow encountered in erf
 >>> special.erf(26.7)
 1.0

I can confirm this same behaviour with numpy 1.5.1/scipy 0.9.0.
Indeed 26.5 and 26.7 work, while 26.6 raises the underflow... weird
enough!
--
Pierre


[Numpy-discussion] Strange error raised by scipy.special.erf

2012-01-22 Thread Nadav Horesh
With N.seterr(all='raise'):

>>> from scipy import special
>>> import scipy
>>> special.erf(26.6)
1.0
>>> scipy.__version__
'0.11.0.dev-81dc505'
>>> import numpy as N
>>> N.seterr(all='raise')
{'over': 'warn', 'divide': 'warn', 'invalid': 'warn', 'under': 'ignore'}
>>> special.erf(26.5)
1.0
>>> special.erf(26.6)
Traceback (most recent call last):
  File "<pyshell#7>", line 1, in <module>
    special.erf(26.6)
FloatingPointError: underflow encountered in erf
>>> special.erf(26.7)
1.0

What is so special about 26.6?
I get this error also with previous versions of scipy.

  Nadav.


Re: [Numpy-discussion] Counting the Colors of RGB-Image

2012-01-15 Thread Nadav Horesh
im_flat = im0[...,0]*65536 + im0[...,1]*256 + im0[...,2]
colours = np.unique(im_flat)
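
With a numpy recent enough to have return_counts (>= 1.9), a sketch that also 
gets the count per colour in the same pass:

import numpy as np

im0 = np.random.randint(0, 256, (500, 700, 3))   # stand-in image, int dtype
# for a real uint8 image, widen first so the packing does not overflow:
# im0 = im0.astype(np.int64)
im_flat = im0[...,0]*65536 + im0[...,1]*256 + im0[...,2]
colours, counts = np.unique(im_flat, return_counts=True)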

   Nadav


From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
On Behalf Of Tony Yu [tsy...@gmail.com]
Sent: 15 January 2012 18:03
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Counting the Colors of RGB-Image



On Sun, Jan 15, 2012 at 10:45 AM, <a...@pdauf.de> wrote:

Counting the colors of an RGB image,
name it im0 with im0.shape = (2500,3500,3),
with this code:

tab0 = zeros((256,256,256), dtype=int)
tt = im0.view()
tt.shape = -1,3
for r,g,b in tt:
    tab0[r,g,b] += 1

Question:

Is there a faster way in numpy to get this result?


MfG elodw

Assuming that your image is made up of integer values (which I guess they'd 
have to be if you're indexing into `tab0`), then you could write:

 rgb_unique = set(tuple(rgb) for rgb in tt)

I'm not sure if it's any faster than your loop, but I would assume it is.

-Tony


Re: [Numpy-discussion] ANN: Numexpr 2.0.1 released

2012-01-08 Thread Nadav Horesh
What about python3 support?

 Thanks

Nadav.


From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
On Behalf Of Francesc Alted [fal...@gmail.com]
Sent: 08 January 2012 12:49
To: Discussion of Numerical Python; numexpr
Subject: [Numpy-discussion] ANN: Numexpr 2.0.1 released

==
 Announcing Numexpr 2.0.1
==

Numexpr is a fast numerical expression evaluator for NumPy.  With it,
expressions that operate on arrays (like 3*a+4*b) are accelerated
and use less memory than doing the same calculation in Python.

It wears multi-threaded capabilities, as well as support for Intel's
VML library, which allows for squeezing the last drop of performance
out of your multi-core processors.

What's new
==

In this release, better docstrings for `evaluate` and reduction
methods (`sum`, `prod`) is in place.  Also, compatibility with Python
2.5 has been restored (2.4 is definitely not supported anymore).

In case you want to know more in detail what has changed in this
version, see:

http://code.google.com/p/numexpr/wiki/ReleaseNotes

or have a look at RELEASE_NOTES.txt in the tarball.

Where I can find Numexpr?
=

The project is hosted at Google code in:

http://code.google.com/p/numexpr/

You can get the packages from PyPI as well:

http://pypi.python.org/pypi/numexpr

Share your experience
=

Let us know of any bugs, suggestions, gripes, kudos, etc. you may
have.


Enjoy!

--
Francesc Alted


Re: [Numpy-discussion] Large numbers into float128

2011-10-30 Thread Nadav Horesh
A quick and dirty cython code is attached

Use:

>>> import Float128
>>> a = Float128.Float128('1E500')
>>> a
array([ 1e+500], dtype=float128)

or

>>> b = np.float128(1.34) * np.float128(10)**2500
>>> b
1.3400779e+2500


Maybe there is also a way to do it in pure python via ctypes?

   Nadav

From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
On Behalf Of Charles R Harris [charlesr.har...@gmail.com]
Sent: 30 October 2011 05:02
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Large numbers into float128



On Sat, Oct 29, 2011 at 8:49 PM, Matthew Brett 
<matthew.br...@gmail.com> wrote:
Hi,

On Sat, Oct 29, 2011 at 3:55 PM, Matthew Brett 
<matthew.br...@gmail.com> wrote:
 Hi,

 Can anyone think of a good way to set a float128 value to an
 arbitrarily large number?

 As in

 v = int_to_float128(some_value)

 ?

 I'm trying things like

 v = np.float128(2**64+2)

 but, because (in other threads) the float128 seems to be going through
 float64 on assignment, this loses precision, so although 2**64+2 is
 representable in float128, in fact I get:

 In [35]: np.float128(2**64+2)
 Out[35]: 18446744073709551616.0

 In [36]: 2**64+2
 Out[36]: 18446744073709551618L

 So - can anyone think of another way to assign values to float128 that
 will keep the precision?

To answer my own question - I found an unpleasant way of doing this.

Basically it is this:

def int_to_float128(val):
    f64 = np.float64(val)
    res = val - int(f64)
    return np.float128(f64) + np.float128(res)

Used in various places here:

https://github.com/matthew-brett/nibabel/blob/e18e94c5b0f54775c46b1c690491b8bd6f07eb49/nibabel/floating.py

Best,


It might be useful to look into mpmath. I didn't see any way to export mp 
values into long double, but they do offer a number of resources for working 
with arbitrary precision. We could maybe even borrow some of their stuff for 
parsing values from strings

Chuck
from cython import *
import numpy as np
cimport numpy as np


cdef extern from "stdlib.h":
    long double strtold(char* number, char** endptr)

def Float128(char *number):
    cdef long double num
    cdef np.ndarray[np.longdouble_t, ndim=1] output = np.empty(shape=1, dtype=np.float128)
    num = strtold(number, NULL)
    output[0] = num
    return output


Re: [Numpy-discussion] neighborhood iterator speed

2011-10-25 Thread Nadav Horesh
Finally managed to use PyArrayNeighborhoodIter_Next2D with numpy 1.5.0 (in 
numpy 1.6 it doesn't get along with halffloat). Benchmark results (not the same 
computer and parameters I used in the previous benchmark):
1. ...Next2D (zero padding, it doesn't accept mirror padding): 10 sec
2. ...Next (zero padding): 53 sec
3. ...Next (mirror padding): 128 sec

Remarks:
 1. I did not check the validity of the results
 2. Mirror padding is preferable for my specific case.

What does this mean for the potential of accelerating the neighbourhood iterator?

Nadav.


-Original Message-
From: numpy-discussion-boun...@scipy.org 
[mailto:numpy-discussion-boun...@scipy.org] On Behalf Of Nadav Horesh
Sent: Monday, October 24, 2011 9:02 PM
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] neighborhood iterator speed

I found the 2d iterator definition active in numpy 1.6.1. I'll test it.

  Nadav


From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
On Behalf Of David Cournapeau [courn...@gmail.com]
Sent: 24 October 2011 16:04
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] neighborhood iterator speed

On Mon, Oct 24, 2011 at 1:23 PM, Nadav Horesh <nad...@visionsense.com> wrote:
 * I'll try to implement the 2D iterator as far as my programming
 expertise goes. It might take a few days.

I am pretty sure the code is in the history, if you are patient enough
to look for it in git history. I can't remember why I removed it
(maybe because it was not faster ?).


 * There is a risk in providing a buffer pointer, and for my (and probably 
 most) use cases it is better for the iterator constructor to provide it. I 
 was thinking about the possibility to give the iterator a shared memory 
 pointer, to open a door for multiprocessing. Maybe it is better instead to 
 provide a contiguous ndarray object to enable a sanity check.

One could ask for an optional buffer (if NULL - auto-allocation). But
I would need a more detailed explanation about what you are trying to
do to warrant changing the API here.

cheers,

David






Re: [Numpy-discussion] neighborhood iterator speed

2011-10-24 Thread Nadav Horesh
* Iterator mode: Mirror. Does the mode make a huge difference?
* I cannot find any reference to PyArrayNeighborhoodIter_Next2d; where can I 
find it?
* I think that making a copy on reset is needed (maybe in addition to at 
creation time), since there is a reset for every change of the parent iterator, 
and only after this change can the neighborhood be determined.
* What do you think about the following idea?
* A neighbourhood iterator generator that also accepts a buffer to copy the 
neighbourhood into.
* A reset function that would refill the buffer after each parent iterator 
modification.

  Nadav


-Original Message-
From: numpy-discussion-boun...@scipy.org 
[mailto:numpy-discussion-boun...@scipy.org] On Behalf Of David Cournapeau
Sent: Monday, October 24, 2011 9:38 AM
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] neighborhood iterator speed

On Mon, Oct 24, 2011 at 6:57 AM, Nadav Horesh <nad...@visionsense.com> wrote:
 I am trying to replace old code (a bilateral filter) that relies on 
 ndimage.generic_filter with the neighborhood iterator. In the old code, the 
 generic_filter generates a contiguous copy of the neighborhood, thus the 
 (cython) code could use a C loop to iterate over the neighbourhood copy. In the 
 new code version the PyArrayNeighborhoodIter_Next must be called to retrieve 
 every neighbourhood item. The results of rough benchmarking to compare 
 bilateral filtering on a 1000x1000 array:
 Old code (ndimage.generic_filter):  16.5 sec
 New code (neighborhood iteration):  60.5 sec
 New code with PyArrayNeighborhoodIter_Next omitted: 1.5 sec

 * The last benchmark is not real since the omitted call is a must. It just 
 demonstrates the iterator overhead.
 * I assume the main overhead in the old code is the python function callback 
 process. There are instructions in the manual on how to wrap C code for a 
 faster callback, but I would rather use the neighbourhood iterator as I 
 consider it more generic.


I am afraid the cost is unavoidable: you are really trading cpu for
memory. When using PyArrayNeighborhood_Next, there is a loop with a
conditional within, and I don't think those can easily be avoided
without losing genericity. Which mode are you using when creating the
neighborhood iterator?

There used to be a PyArrayNeighborhoodIter_Next2d; I don't know why I
commented it out. You could try it and see if it is faster.

 If the PyArrayNeighborhoodIter_Reset could (optionally) copy the relevant 
 data (as the generic_filter does) it would provide a major speed up in many 
 cases.

Optionally copying may be an option, but it would make more sense to
do it at creation time than during reset, no? Something like a binary
AND with the current mode flag.

cheers,

David






Re: [Numpy-discussion] neighborhood iterator speed

2011-10-24 Thread Nadav Horesh
* I'll try to implement the 2D iterator as far as my programming 
expertise goes. It might take a few days.

* There is a risk in providing a buffer pointer, and for my (and probably most) 
use cases it is better for the iterator constructor to provide it. I was 
thinking about the possibility to give the iterator a shared memory pointer, to 
open a door for multiprocessing. Maybe it is better instead to provide a 
contiguous ndarray object to enable a sanity check.

   Nadav.


-Original Message-
From: numpy-discussion-boun...@scipy.org 
[mailto:numpy-discussion-boun...@scipy.org] On Behalf Of David Cournapeau
Sent: Monday, October 24, 2011 1:57 PM
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] neighborhood iterator speed

On Mon, Oct 24, 2011 at 10:48 AM, Nadav Horesh <nad...@visionsense.com> wrote:
 * Iterator mode: Mirror. Does the mode make a huge difference?

It could, at least in principle. The underlying translate function is
called often enough that a slight different can be significant.

 * I cannot find any reference to PyArrayNeighborhoodIter_Next2d; where can
 I find it?

I think it would look like:

static NPY_INLINE int
PyArrayNeighborhoodIter_Next2d(PyArrayNeighborhoodIterObject* iter)
{
    _PyArrayNeighborhoodIter_IncrCoord2d(iter);
    iter->dataptr = iter->translate((PyArrayIterObject*)iter, iter->coordinates);

    return 0;
}

The ...IncrCoord2d macro avoids one loop, which may be useful (or not).
The big issue here is the translate method call, which cannot be inlined
because of the polymorphism of the neighborhood iterator. But the only
way to avoid this would be to have many different iterators so that
the underlying translate function is known.

Copying the data makes the call to translate unnecessary (but adds the
penalty of one more conditional on every PyArrayNeighborhood_Next).

 * I think that making a copy on reset is needed (maybe in addition to at 
 creation time), since there is a reset for every change of the parent 
 iterator, and only after this change can the neighborhood be determined.

you're right of course, I forgot about the parent iterator.

 * What do you think about the following idea?
* A neighbourhood iterator generator that accepts also a buffer to copy in 
 the neighbourhood.
* A reset function that would refill the buffer after each parent iterator 
 modification

The issue with giving the buffer is that one needs to be careful
about the size and all. What's your use case for passing the buffer?

David






Re: [Numpy-discussion] neighborhood iterator speed

2011-10-24 Thread Nadav Horesh
My use case is a bilateral filter: a convolution-like filter used mainly 
in image processing, which may use relatively large convolution kernels (on the 
order of 50x50). I would like to run the inner loop (iteration over the 
neighbourhood) with direct indexing (in cython code) rather than using the 
slow iterator, in order to save time.
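
For reference, a plain numpy sketch of what that inner loop computes for a 
single pixel (sigma_s and sigma_r are the spatial and range widths):

import numpy as np

def bilateral_pixel(win, sigma_s, sigma_r):
    # win: (2r+1, 2r+1) neighbourhood centred on the pixel
    r = win.shape[0] // 2
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    w = np.exp(-(yy**2 + xx**2) / (2.0 * sigma_s**2))            # spatial
    w = w * np.exp(-(win - win[r, r])**2 / (2.0 * sigma_r**2))   # range
    return (w * win).sum() / w.sum()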

A separate issue is that cython's new parallel loop raises the need for 
GIL-free numpy iterators (I might be wrong though). Anyway, it is not urgent 
for me.

  Nadav

-Original Message-
From: numpy-discussion-boun...@scipy.org 
[mailto:numpy-discussion-boun...@scipy.org] On Behalf Of David Cournapeau
Sent: Monday, October 24, 2011 4:04 PM
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] neighborhood iterator speed

On Mon, Oct 24, 2011 at 1:23 PM, Nadav Horesh <nad...@visionsense.com> wrote:
 * I'll try to implement the 2D iterator as far as my programming
 expertise goes. It might take a few days.

I am pretty sure the code is in the history, if you are patient enough
to look for it in git history. I can't remember why I removed it
(maybe because it was not faster ?).


 * There is a risk in providing a buffer pointer, and for my (and probably 
 most) use cases it is better for the iterator constructor to provide it. I 
 was thinking about the possibility to give the iterator a shared memory 
 pointer, to open a door for multiprocessing. Maybe it is better instead to 
 provide a contiguous ndarray object to enable a sanity check.

One could ask for an optional buffer (if NULL -> auto-allocation). But
I would need a more detailed explanation about what you are trying to
do to warrant changing the API here.

cheers,

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion





Re: [Numpy-discussion] neighborhood iterator speed

2011-10-24 Thread Nadav Horesh
I found the 2d iterator definition active in numpy 1.6.1. I'll test it.

  Nadav


From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
On Behalf Of David Cournapeau [courn...@gmail.com]
Sent: 24 October 2011 16:04
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] neighborhood iterator speed

On Mon, Oct 24, 2011 at 1:23 PM, Nadav Horesh nad...@visionsense.com wrote:
 * I'll try to implement the 2D iterator as far as my programming
 expertise goes. It might take a few days.

I am pretty sure the code is in the history, if you are patient enough
to look for it in the git history. I can't remember why I removed it
(maybe because it was not faster?).


 * There is a risk in providing a buffer pointer, and for my (and probably 
 most) use cases it is better for the iterator constructor to provide it. I 
 was thinking about the possibility to give the iterator a shared memory 
 pointer, to open a door for multiprocessing. Maybe it is better instead to 
 provide a contiguous ndarray object to enable a sanity check.

One could ask for an optional buffer (if NULL -> auto-allocation). But
I would need a more detailed explanation about what you are trying to
do to warrant changing the API here.

cheers,

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] array iterators and cython prange

2011-10-23 Thread Nadav Horesh
I coded a bilateral filter class in cython based on numpy's neighborhood
iterator (thanks to T. J's code example). I tried to parallelize the code by
replacing the standard loop (commented line 150) with a prange loop (line 151).
The result is a series of compilation errors, mainly due to the use of
iterators. Is there an *easy* way to work with numpy iterators while the GIL
is released?

Platform: numpy 1.6.1 on python 2.7.2 and cython 0.15.1
System: gcc on linux

   Nadav

# A Cython + Neighbourhood based bilateral filter
# Nadav Horesh
###
#
#  Convert double buffer to short
#  After a comparison between the parallel and the serial versions
#


cimport numpy as np
import numpy as np
import cython
from cython.parallel import prange

cdef extern from "math.h":
    double exp(double x)

cdef extern from "math.h":
    float expf(float x)
    float fabsf(float x)

cdef extern:
    int abs(int x)


##### T J UNMODIFIED CODE #####

cdef extern from "numpy/arrayobject.h":

    ctypedef extern class numpy.flatiter [object PyArrayIterObject]:
        cdef int nd_m1
        cdef np.npy_intp index, size
        cdef np.ndarray ao
        cdef char *dataptr

    # This isn't exposed to the Python API.
    # So we can't use the same approach we used to define flatiter
    ctypedef struct PyArrayNeighborhoodIterObject:
        int nd_m1
        np.npy_intp index, size
        np.PyArrayObject *ao  # note the change from np.ndarray
        char *dataptr

    object PyArray_NeighborhoodIterNew(flatiter it, np.npy_intp* bounds,
                                       int mode, np.ndarray fill_value)
    int PyArrayNeighborhoodIter_Next(PyArrayNeighborhoodIterObject *it)
    int PyArrayNeighborhoodIter_Reset(PyArrayNeighborhoodIterObject *it)

    object PyArray_IterNew(object arr)
    void PyArray_ITER_NEXT(flatiter it)
    np.npy_intp PyArray_SIZE(np.ndarray arr)

    cdef enum:
        NPY_NEIGHBORHOOD_ITER_ZERO_PADDING,
        NPY_NEIGHBORHOOD_ITER_ONE_PADDING,
        NPY_NEIGHBORHOOD_ITER_CONSTANT_PADDING,
        NPY_NEIGHBORHOOD_ITER_CIRCULAR_PADDING,
        NPY_NEIGHBORHOOD_ITER_MIRROR_PADDING

np.import_array()

##### END OF T J UNMODIFIED CODE #####

cdef int GAUSS_SAMP = 32
cdef int GAUSS_IDX_MAX = GAUSS_SAMP - 1


class Cbilateral_filter(object):
    '''
    A fully cythonic simple bilateral filtering.

    The class provides the bilateral filter function to be called by
    generic_filter.
    initialization parameters:
      spat_sig:    The sigma of the spatial Gaussian filter
      inten_sig:   The sigma of the gray-levels Gaussian filter
      filter_size: (int) The size of the spatial convolution kernel. If
                   not set, it is set to ~ 4*spat_sig.
    '''
    def __init__(self, float spat_sig, float inten_sig, filter_size=None):
        if filter_size is not None and filter_size >= 2:
            self.xy_size = int(filter_size)
        else:
            self.xy_size = int(round(spat_sig*4))
        # Make filter size odd
        self.xy_size += 1 - self.xy_size % 2
        x = np.arange(self.xy_size, dtype=np.float32)
        x = (x - x.mean())**2
        # xy_ker: Spatial convolution kernel
        self.xy_ker = np.exp(-np.add.outer(x, x)/(2*spat_sig**2)).ravel()
        self.xy_ker /= self.xy_ker.sum()
        self.inten_sig = 2 * inten_sig**2
        # self.index is the coordinate of the middle point
        self.index = (self.xy_size + 1) * (self.xy_size // 2)

        ## An initialization for a LUT instead of a Gaussian function call
        ## (for the fc_filter method)

        x = np.linspace(0, 3.0, GAUSS_SAMP)
        self.gauss_lut = np.exp(-x**2/2)
        self.x_quant = 3*inten_sig / GAUSS_IDX_MAX
        self.gauss_lut_float32 = np.exp(-x**2/2).astype(np.float32)


    #@cython.boundscheck(False)
    #@cython.wraparound(False)
    @cython.cdivision(True)
    def filterf(self, np.ndarray[dtype=np.float32_t, ndim=2] data not None):

        # Define an iterator over the input array (image)
        cdef arr_iter = PyArray_IterNew(<object>data)
        # xsize, ysize: Input array dimensions
        cdef unsigned int ysize = data.shape[0], xsize = data.shape[1]

        # Define the output array and a C pointer to iterate over it
        cdef np.ndarray[np.float32_t, ndim=2] output = np.empty_like(data)
        cdef float *out_ptr = <float *>output.data

        # Get the already initialized spatial and z Gaussian kernels
        cdef np.ndarray[dtype=np.float32_t, ndim=1] kernel = self.xy_ker
        cdef np.ndarray[dtype=np.float32_t, ndim=1] gauss_lut_arr = self.gauss_lut_float32

        # Iterators over the spatial kernel and input data
        cdef float *pdata = <float *>data.data, *pker = <float *>kernel.data
        cdef float *gauss_lut = <float *>gauss_lut_arr.data  # C pointer to iterate over z

        # Misc temporary data for the convolution
        cdef float sigma

[Numpy-discussion] neighborhood iterator speed

2011-10-23 Thread Nadav Horesh
I am trying to replace an old code (bilateral filter) that relies on
ndimage.generic_filter with the neighborhood iterator. In the old code, the
generic_filter generates a contiguous copy of the neighborhood, so the
(cython) code could use a C loop to iterate over the neighbourhood copy. In
the new code version, PyArrayNeighborhoodIter_Next must be called to retrieve
every neighbourhood item. The results of rough benchmarking to compare
bilateral filtering on a 1000x1000 array:
Old code (ndimage.generic_filter):  16.5 sec
New code (neighborhood iteration):  60.5 sec
New code with PyArrayNeighborhoodIter_Next omitted: 1.5 sec

* The last benchmark is not real since the omitted call is a must; it just
demonstrates the iterator overhead.
* I assume the main overhead in the old code is the python function callback
process. There are instructions in the manual on how to wrap a C code for a
faster callback, but I'd rather use the neighbourhood iterator as I consider
it more generic.

If PyArrayNeighborhoodIter_Reset could (optionally) copy the relevant data
(as generic_filter does), it would provide a major speed-up in many cases.

  Nadav
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Example Usage of Neighborhood Iterator in Cython

2011-10-18 Thread Nadav Horesh
Just in time! I was just working on a cythonic replacement for
ndimage.generic_filter (well, I took a short two-year break in the middle).

 thank you very much,

Nadav.

From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
On Behalf Of T J [tjhn...@gmail.com]
Sent: 17 October 2011 23:16
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Example Usage of Neighborhood Iterator in   
Cython

On Mon, Oct 17, 2011 at 12:45 PM, eat e.antero.ta...@gmail.com wrote:

 Just wondering what are the main benefits, of your approach, comparing to
 simple:

As I hinted, my goal was not to construct a practical example, but
rather to demonstrate how to use the neighborhood iterator in Cython.
Roll and mod are quite nice. :)  Now imagine working with higher-dimensional
arrays with more exotic neighborhoods (like the letter X).
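
For reference, a sketch of the roll-based alternative mentioned above -- a
4-neighbour mean with wrap-around boundaries (not T J's code):

import numpy as np

a = np.arange(12.0).reshape(3, 4)
mean4 = (np.roll(a, 1, axis=0) + np.roll(a, -1, axis=0) +
         np.roll(a, 1, axis=1) + np.roll(a, -1, axis=1)) / 4.0
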
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Wrong treatment of byte-order.

2011-08-30 Thread Nadav Horesh
Hi,

 This is my second post on this problem I found in numpy 1.6.1, and recently
it came up in the latest git version (2.0.0.dev-f3e70d9). The problem is that
numpy treats the native byte order ('<') as illegal while the wrong one ('>')
is treated as the right one. The output of the attached script (built for
python 2.6+) is given below (my system is a 64-bit linux on a core i7, 64-bit
python 2.7.2/3.2, numpy uses ATLAS):

$ python test_byte_order.py
 a =
 [[ 0.28596132  0.31658824  0.34929676]
 [ 0.48739246  0.68020533  0.39616588]
 [ 0.29310406  0.9584545   0.8120068 ]]

 a1 =
 [[ 0.28596132  0.31658824  0.34929676]
 [ 0.48739246  0.68020533  0.39616588]
 [ 0.29310406  0.9584545   0.8120068 ]]

(Wrong byte order on Intel CPUs):
 a2 =
 [[  8.97948198e-017   1.73406416e-025  -4.25909057e+014]
 [  4.59443694e+090   7.91693101e-029   5.26959329e-135]
 [  2.93240450e+060  -2.25898860e-051  -2.06126917e+302]]

Invert a:
OK
 Invert a2 (Wrong byte order!):
OK
 invert a1:
Traceback (most recent call last):
  File "test_byte_order.py", line 20, in <module>
    b1 = N.linalg.inv(a1)
  File "/usr/lib64/python2.7/site-packages/numpy/linalg/linalg.py", line 445, in inv
    return wrap(solve(a, identity(a.shape[0], dtype=a.dtype)))
  File "/usr/lib64/python2.7/site-packages/numpy/linalg/linalg.py", line 326, in solve
    results = lapack_routine(n_eq, n_rhs, a, n_eq, pivots, b, n_eq, 0)
lapack_lite.LapackError: Parameter a has non-native byte order in
lapack_lite.dgesv
from __future__ import print_function
import numpy as N

a = N.random.rand(3,3)
a1 = a.newbyteorder('<')
a2 = a.newbyteorder('>')
print(' a = \n', a)
print('\n\n a1 = \n', a1)
print('\n\n(Wrong byte order on Intel CPUs):\n a2 =\n', a2)

print('\n\n Invert a:')
b = N.linalg.inv(a)
print('OK')

print('\n Invert a2 (Wrong byte order!):')
b2 = N.linalg.inv(a2)
print('OK')

print('\n invert a1:')
b1 = N.linalg.inv(a1)
print('OK')
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Wrong treatment of byte order?

2011-08-23 Thread Nadav Horesh
My system is a 64 bit gentoo linux on core i7 machine. Numpy version 1.6.1 and 
pyton(s) 2.7.2 and 3.2.1

Problem summary:
 I tried to invert a matrix with an explicit little-endian byte order and got
an error. The inversion runs properly with native byte order, and I get a
wrong answer with no error message when the matrix is set to big-endian.
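
A workaround sketch while the bug stands (variable names are illustrative):
convert the array to native byte order before handing it to LAPACK:

import numpy as np

mat_le = np.random.rand(3, 3).newbyteorder('<')   # explicit little-endian
native = np.ascontiguousarray(mat_le,
                              dtype=mat_le.dtype.newbyteorder('='))
np.linalg.inv(native)   # lapack_lite now sees a native-byte-order array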

mat is a 3x3 float64 array

>>> import numpy as N

>>> mat.dtype.byteorder
'<'
>>> N.linalg.inv(mat)  # Refuses to invert
Traceback (most recent call last):
  File "<pyshell#107>", line 1, in <module>
    N.linalg.inv(mat)
  File "/usr/lib64/python2.7/site-packages/numpy/linalg/linalg.py", line 445, in inv
    return wrap(solve(a, identity(a.shape[0], dtype=a.dtype)))
  File "/usr/lib64/python2.7/site-packages/numpy/linalg/linalg.py", line 326, in solve
    results = lapack_routine(n_eq, n_rhs, a, n_eq, pivots, b, n_eq, 0)
LapackError: Parameter a has non-native byte order in lapack_lite.dgesv

>>> N.linalg.inv(mat.newbyteorder('='))   # OK
array([[ 0.09234453,  0.46163744,  0.2713108 ],
       [ 0.48886135,  0.51230859,  0.2277598 ],
       [ 0.48303131,  0.82571266,  0.17551993]])

>>> N.linalg.inv(mat.newbyteorder('>'))   # WRONG !!!
array([[  2.39051169e-159,  -7.70643158e-157,   5.34087235e-160],
       [  2.11823992e+305,   2.37224043e+307,  -4.31607382e+304],
       [ -1.26608299e+304,  -1.43225563e+306,   7.22233688e+303]])

  Nadav
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] SVD does not converge on clean matrix

2011-08-12 Thread Nadav Horesh
I tested all three result matrices with alltrue(isfinite(mat)) and got a True
answer for all of them.

   Nadav


From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
On Behalf Of Warren Weckesser [warren.weckes...@enthought.com]
Sent: 12 August 2011 16:33
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] SVD does not converge on clean matrix



On Fri, Aug 12, 2011 at 4:03 AM, Charanpal Dhanjal
dhan...@telecom-paristech.fr wrote:
Thanks Nadav for testing out the matrix. I wonder if you had a chance to
check whether the resulting decomposition contained NaN or Inf values?

As far as I understood, numpy.linalg.svd uses routines in LAPACK and ATLAS
(if available) to compute the corresponding SVD. I did some
complementary tests on Debian Squeeze on an Intel Xeon W3550 CPU and the
call to numpy.linalg.svd results in the LinAlgError "SVD did not
converge", however the test leading to results containing NaN values ran
on Debian Lenny on an Intel Core 2 Quad. In both of these situations we
use Python 2.7.1 and numpy 1.5.1 (without ATLAS), and so the reasons for
the differences seem to be OS or processor dependent. Any ideas?

Charanpal

Date: Thu, 11 Aug 2011 07:21:09 -0700
 From: Nadav Horesh nad...@visionsense.com
Subject: Re: [Numpy-discussion] SVD does not converge on clean matrix
To: Discussion of Numerical Python numpy-discussion@scipy.org
Message-ID:
26FC23E7C398A64083C980D16001012D246DFC5F90@VA3DIAXVS361.RED001.local
Content-Type: text/plain; charset=us-ascii


 Had no problem on a gentoo 64 bit machine using atlas 3.8.0 (Core I7,
 python 2.7.2, numpy versions 1.6.0 and 1.6.1)


Another data point: on Mac OS X, with Python 2.7.2 and numpy 1.6.0 (using EPD
7.1), I get the error:

$ ipython --pylab
Enthought Python Distribution -- www.enthought.com

Python 2.7.2 |EPD 7.1-1 (32-bit)| (default, Jul  3 2011, 15:40:35)
Type "copyright", "credits" or "license" for more information.

IPython 0.11.rc1 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

Welcome to pylab, a matplotlib-based Python environment [backend: WXAgg].
For more information, type 'help(pylab)'.

In [1]: numpy.__version__
Out[1]: '1.6.0'

In [2]: arr = load('matrix_leading_to_bad_SVD.npz')['arr_0']

In [3]: np.linalg.svd(arr)
---------------------------------------------------------------------------
LinAlgError                               Traceback (most recent call last)
/Users/warren/tmp/<ipython-input-3-e475bd6de739> in <module>()
----> 1 np.linalg.svd(arr)

/Library/Frameworks/Python.framework/Versions/7.1/lib/python2.7/site-packages/numpy/linalg/linalg.py
 in svd(a, full_matrices, compute_uv)
   1319                                  work, lwork, iwork, 0)
   1320         if results['info'] > 0:
-> 1321             raise LinAlgError, 'SVD did not converge'
   1322         s = s.astype(_realType(result_t))
   1323         if compute_uv:

LinAlgError: SVD did not converge



Warren




  Nadav

On Thu, 11 Aug 2011 15:23:22 +0200, dhan...@telecom-paristech.fr
 wrote:
 wrote:
 Hi all,

  I get an error message "numpy.linalg.linalg.LinAlgError: SVD did not
  converge" when calling numpy.linalg.svd on a clean matrix of size
  (1952, 895). The matrix is clean in the sense that it contains no NaN
  or Inf values. The corresponding npz file is available here:

  https://docs.google.com/leaf?id=0Bw0NXKxxc40jMWEyNTljMWUtMzBmNS00NGZmLThhZWUtY2I2MWU2MGZiNDgx&hl=fr

 Here is some information about my setup: I use Python 2.7.1 on
 Ubuntu
 11.04 with numpy 1.6.1. Furthermore, I thought the problem might be
 solved
 by recompiling numpy with my local ATLAS library (version 3.8.3),
 and this
 didn't seem to help. On another machine with Python 2.7.1 and numpy
 1.5.1
 the SVD does converge however it contains 1 NaN singular value and 3
 negative singular values of the order -10^-1 (singular values should
 always be non-negative).

 I also tried computing the SVD of the matrix using Octave 3.2.4 and
 Matlab
 7.10.0.499 (R2010a) 64-bit (glnxa64) and there were no problems. Any
 help
 is greatly appreciated.

 Thanks in advance,
 Charanpal

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] SVD does not converge on clean matrix

2011-08-11 Thread Nadav Horesh

Had no problem on a gentoo 64 bit machine using atlas 3.8.0 (Core I7, python
2.7.2, numpy versions 1.6.0 and 1.6.1)

  Nadav

From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
On Behalf Of dhan...@telecom-paristech.fr [dhan...@telecom-paristech.fr]
Sent: 11 August 2011 16:23
To: numpy-discussion@scipy.org
Subject: [Numpy-discussion] SVD does not converge on clean matrix

Hi all,

I get an error message "numpy.linalg.linalg.LinAlgError: SVD did not
converge" when calling numpy.linalg.svd on a clean matrix of size (1952,
895). The matrix is clean in the sense that it contains no NaN or Inf
values. The corresponding npz file is available here:
https://docs.google.com/leaf?id=0Bw0NXKxxc40jMWEyNTljMWUtMzBmNS00NGZmLThhZWUtY2I2MWU2MGZiNDgx&hl=fr

Here is some information about my setup: I use Python 2.7.1 on Ubuntu
11.04 with numpy 1.6.1. Furthermore, I thought the problem might be solved
by recompiling numpy with my local ATLAS library (version 3.8.3), and this
didn't seem to help. On another machine with Python 2.7.1 and numpy 1.5.1
the SVD does converge; however, it contains 1 NaN singular value and 3
negative singular values on the order of -10^-1 (singular values should
always be non-negative).
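
A cross-check sketch, assuming the matrix from the npz file above: the
singular values are the square roots of the eigenvalues of A^T A, which goes
through a different LAPACK routine than the SVD driver, so it can indicate
whether the matrix itself is problematic:

import numpy as np

a = np.load('matrix_leading_to_bad_SVD.npz')['arr_0']
w = np.linalg.eigvalsh(np.dot(a.T, a))    # symmetric eigensolver
s = np.sqrt(np.clip(w, 0, None))[::-1]    # singular values, descending
print(s[:5])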

I also tried computing the SVD of the matrix using Octave 3.2.4 and Matlab
7.10.0.499 (R2010a) 64-bit (glnxa64) and there were no problems. Any help
is greatly appreciated.

Thanks in advance,
Charanpal



___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] matrix inversion

2011-08-10 Thread Nadav Horesh
The matrix is singular, so you cannot expect a stable inverse.

   Nadav.
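
A quick way to check this in numpy itself (a sketch; for the matrix quoted
below, the determinant comes out around 1e-8, i.e. singular to within the
precision of the data):

import numpy as np

a = np.array([[ 0.01643777, -0.13539939,  0.11946689],
              [ 0.12479926,  0.01210898, -0.09217618],
              [-0.13050087,  0.07575163,  0.01144993]])
print(np.linalg.det(a))    # ~1e-8: effectively singular
print(np.linalg.cond(a))   # huge condition number, so any "inverse" is noise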


From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
On Behalf Of jp d [yo...@yahoo.com]
Sent: 11 August 2011 03:50
To: numpy-discussion@scipy.org
Subject: [Numpy-discussion] matrix inversion

Hi,
I am trying to invert matrices like this:
[[ 0.01643777 -0.13539939  0.11946689]
 [ 0.12479926  0.01210898 -0.09217618]
 [-0.13050087  0.07575163  0.01144993]]

in perl using Math::MatrixReal;
and in various online calculators I get
[  2.472715991745  3.680743681735 -3.831392002314 ]
[ -4.673105249083 -5.348238625096 -5.703193038649 ]
[  2.733966489601 -6.567940452290 -5.936617926811 ]

using python, numpy and linalg.inv (or linalg.pinv) I get a divergent answer
[[  6.79611151e+07   1.01163031e+08   1.05303510e+08]
 [  1.01163057e+08   1.50585545e+08   1.56748838e+08]
 [  1.05303548e+08   1.56748831e+08   1.63164381e+08]]

any suggestions?

thanks
jpd
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] lazy loading ndarrays

2011-07-26 Thread Nadav Horesh
For lazy data loading I use a memory-mapped array (numpy.memmap): I use it to
process multi-image files that are much larger than the available RAM.
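
A minimal sketch of the pattern (the file name, dtype and shape are
hypothetical; they must match the actual file):

import numpy as np

# Frames are paged in from disk on demand instead of loaded up front.
stack = np.memmap('frames.raw', dtype=np.uint16, mode='r',
                  shape=(10000, 512, 512))
m = stack[42].mean()   # only frame 42 is actually read from disk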

   Nadav.


From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
On Behalf Of Craig Yoshioka [crai...@me.com]
Sent: 27 July 2011 05:41
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] lazy loading ndarrays

OK, that was an alternative strategy I was going to try... but not my
favorite, as I'd have to explicitly perform all operations on the data
portion of the object; given numpy's mechanics, assignment would also have to
be explicit, and creating new image objects implicitly would be trickier:

image3 = Image(image1)
image3.data = ( image1.data + 19.0 ) * image2.data

vs.

image3 = ( image1 + 19 ) * image2

I suppose option A isn't that bad though, and getting lazy loading would be
very straightforward.

--

On a side note, I prefer this construct for lazy operations... curious to see
what people's reactions are, i.e.: that's horrible!

class lazy_property(object):
'''
meant to be used for lazy evaluation of object attributes.
should represent non-mutable return value, as whatever is returned replaces 
itself permanently.
'''

def __init__(self,fget):
self.fget = fget


def __get__(self,obj,cls):
value = self.fget(obj)
setattr(obj,self.fget.func_name,value)
return value


class DataFormat(object):
def __init__(self,loader):
self.loadData = loader
@lazy_property
def data(self):
return self.loadData()
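
A usage sketch of the above (the loader is hypothetical). Because
lazy_property defines only __get__, it is a non-data descriptor, so the
setattr call makes a plain instance attribute shadow it on every later
access:

import numpy as np

fmt = DataFormat(lambda: np.load('big.npy'))
d1 = fmt.data   # first access: runs the loader via the descriptor
d2 = fmt.data   # later accesses: plain attribute lookup, no loader call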



On Jul 26, 2011, at 5:45 PM, Joe Kington wrote:

Similar to what Matthew said, I often find that it's cleaner to make a
separate class with a data (or somesuch) property that lazily loads the
numpy array.

For example, something like:

class DataFormat(object):
def __init__(self, filename):
self.filename = filename
for key, value in self._read_header().iteritems():
setattr(self, key, value)

@property
def data(self):
try:
return self._data
except AttributeError:
self._data = self._read_data()
return self._data

Hope that helps,
-Joe

On Tue, Jul 26, 2011 at 4:15 PM, Matthew Brett matthew.br...@gmail.com
wrote:
Hi,

On Tue, Jul 26, 2011 at 5:11 PM, Craig Yoshioka crai...@me.com wrote:
 I want to subclass ndarray to create a class for image and volume data, and 
 when referencing a file I'd like to have it load the data only when accessed. 
  That way the class can be used to quickly set and manipulate header values, 
 and won't load data unless necessary.  What is the best way to do this?  Are 
 there any hooks I can use to load the data when an array's values are first 
 accessed or manipulated?  I tried some trickery with __array_interface__ but 
 couldn't get it to work very well.  Should I just use a memmapped array, and 
 give up on a purely 'lazy' approach?

What kind of images are you loading?   We do lazy loading in nibabel,
for medical image type formats:

http://nipy.sourceforge.net/nibabel/

- but our images _have_ arrays and headers, rather than (appearing to
be) arrays.  Thus something like:

import nibabel as nib

img = nib.load('my_image.img')
# data not loaded at this point
data = img.get_data()
# data loaded now.  Maybe memmapped if the format allows

If you think you might have similar needs, I'd be very happy to help
you get going in nibabel...

Best,

Matthew
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Array vectorization in numpy

2011-07-19 Thread Nadav Horesh
For such expressions you should try the numexpr package: it allows the same
type of optimisation as Matlab does: run a single loop over the matrix
elements instead of repeated loops and intermediate-object creation.
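
A sketch of the numexpr version of the expressions quoted below:

import numpy as np
import numexpr as ne

m = np.random.rand(2000, 2000)
# One fused loop over the elements; no intermediate temporaries:
k = ne.evaluate("(m - 0.5) * 0.3 * 0.2 * 0.1")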

  Nadav

 Besides the matlab/numpy comparison, I think that there is an inherent
 problem with how expressions are handled, in terms of efficiency.
 For instance, k = (m - 0.5)*0.3 takes 52 msec average here (2000x2000 array),
 while k = (m - 0.5)*0.3*0.2 takes 0.079 s, and k = (m - 0.5)*0.3*0.2*0.1
 takes 101 msec.
 Placing parentheses around the scalar multipliers shows that it seems to have
 to do with how expressions are handled. Is there something that can be done
 about this so that numpy can deal with expressions rather than single
 operations chained by python itself? Maybe I am missing the point as well.

--
Carlos Becker
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Can not compile documentation with python3.2

2011-07-14 Thread Nadav Horesh
I installed numpy-1.6.1-rc3 on python3.2 and used the python3 sphinx port
(version 1.1pre) to compile the documentation, and got this error:



nadav@nadav /dev/shm/numpy-1.6.1rc3/doc $ make latex
mkdir -p build
touch build/generate-stamp
mkdir -p build/latex build/doctrees
LANG=C sphinx-build -b latex -d build/doctrees   source build/latex
Running Sphinx v1.1pre
1.6rc3 1.6.1rc3

Exception occurred:
  File 
/usr/lib64/python3.2/site-packages/Sphinx-1.1predev_20110713-py3.2.egg/sphinx/application.py,
 line 247, in setup_extension
mod = __import__(extension, None, None, ['setup'])
  File /dev/shm/numpy-1.6.1rc3/doc/sphinxext/numpydoc.py, line 37
title_re = re.compile(ur'^\s*[#*=]{4,}\n[a-z0-9 -]+\n[#*=]{4,}\s*',
 ^
SyntaxError: invalid syntax





Platform: 64bit gentoo linux

   Nadav.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Re: Recommendations for an easy GUI

2011-06-28 Thread Nadav Horesh
I tried Root's advice, and with the get_data method and GTK (without Agg) I
got decent speed -- 30 fps (display speed, without the calculation overhead).
The combination of matplotlib and glumpy resulted in 88 fps.

 I think I'll have a solution, if glumpy's lack of documentation does not get
in the way.

 Thank you all for the useful advice,

Nadav.


From: numpy-discussion-boun...@scipy.org
[mailto:numpy-discussion-boun...@scipy.org] On Behalf Of Nicolas Rougier
Sent: Tuesday, June 28, 2011 09:47
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Recommendations for an easy GUI



Have a look at glumpy: http://code.google.com/p/glumpy/
It's quite simple and very fast for images (it's based on OpenGL/shaders).

Nicolas


On Jun 28, 2011, at 6:38 AM, Nadav Horesh wrote:


I have an application which generates and displays RGB images at a rate of
several frames per second (5-15). Currently I use Tkinter+PIL, but I have a
problem that it slows down the rate significantly. I am looking for a fast
and easy alternative.

Platform: Linux
I prefer tools that would work also with python3

I looked at the following:
1. matplotlib's imshow: Easy but too slow.
2. pyqwt: Easy, fast, does not support rgb images (only grayscale)
3. pygame: Nice but lacking widgets like buttons etc.
4. PyQT: Can do, but I would like something simpler (I'd rather focus on
computations, not GUI)

   Nadav.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Recommendations for an easy GUI

2011-06-27 Thread Nadav Horesh
I have an application which generates and displays RGB images at a rate of
several frames per second (5-15). Currently I use Tkinter+PIL, but I have a
problem that it slows down the rate significantly. I am looking for a fast
and easy alternative.

Platform: Linux
I prefer tools that would work also with python3

I looked at the following:
1. matplotlib's imshow: Easy but too slow.
2. pyqwt: Easy, fast, does not support rgb images (only grayscale)
3. pygame: Nice but lacking widgets like buttons etc.
4. PyQT: Can do, but I would like something simpler (I'd rather focus on
computations, not GUI)

   Nadav.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Re: faster in1d() for monotonic case?

2011-06-21 Thread Nadav Horesh
Did you try searchsorted?

  Nadav


From: numpy-discussion-boun...@scipy.org
[mailto:numpy-discussion-boun...@scipy.org] On Behalf Of Michael Katz
Sent: Tuesday, June 21, 2011 10:06
To: Discussion of Numerical Python
Subject: [Numpy-discussion] faster in1d() for monotonic case?

The following call is a bottleneck for me:

np.in1d( large_array.field_of_interest, values_of_interest )

I'm not sure how in1d() is implemented, but this call seems to be slower than
O(n) and faster than O(n**2), so perhaps it sorts values_of_interest and does
a binary search for each element of large_array?

In any case, in my situation I actually know that field_of_interest increases
monotonically across the large_array. So if I were writing this in C, I could
do a simple O(n) loop by sorting values_of_interest and then just checking
each value of large_array against values_of_interest[i] and
values_of_interest[i + 1], incrementing i any time it matched
values_of_interest[i + 1].

Is there some way to achieve that same efficiency in numpy, taking advantage of 
the monotonic nature of field_of_interest?
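
A sketch of the searchsorted approach Nadav suggests above -- O(n log m) via
binary search rather than the O(n+m) merge described, but fully vectorized
(the function name is illustrative):

import numpy as np

def in1d_sorted(large_sorted, values_of_interest):
    # large_sorted must be monotonically increasing.
    values = np.unique(values_of_interest)      # sorted, unique
    idx = np.searchsorted(values, large_sorted)
    idx[idx == len(values)] = 0                 # clamp out-of-range hits
    return values[idx] == large_sorted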

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy.load truncates read from network file on XP

2011-04-28 Thread Nadav Horesh
Several times I encountered problems transferring large files between XP
stations on a wireless network. It could be a result of the unsafe UDP
protocol used by the Microsoft network protocol (I do not have Vista/7
machines to test it).

  Nadav


From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
On Behalf Of Bruce Southey [bsout...@gmail.com]
Sent: 29 April 2011 04:56
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] numpy.load truncates read from network file 
on XP

On Thu, Apr 28, 2011 at 4:22 PM, Dan Halbert halb...@halwitz.org wrote:
 I'm having trouble loading a large remote .npy file on Windows XP. This is on 
 numpy-1.3.0 on Windows XP SP3:

numpy.load(r'\\myserver\mydir\big.npy')

 will fail with this sort of error being printed:
14328000 items requested but only 54 read
 and then I get this with a backtrace:
ValueError: total size of new array must be unchanged (due to the 
 truncated array)

 The file big.npy is a big 2d array, about 112MB.

 The same file when stored locally gives no error when read. I can also read 
 it into an editor, or copy it, and I get the whole thing.

 More strangely, the same file when read from the same UNC path on Windows 7 
 64-bit (with the same 32-bit versions of all Python-related software) does 
 not give an error either.

 The fileserver in question is a NetApp box. Any clues? My websearching hasn't 
 netted any leads.

 Thanks a lot,
 Dan



 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

I don't know what a 'NetApp box' is, what OS it runs, or what filesystem
types are being used. We really need more information, especially what the
file is and how it was created, as well as the different OSes involved with
the exact Python and numpy versions on each system -- numpy 1.3 is getting a
little old now, with 1.6 due soon.

A previous source of errors has been the different line endings between OSes
when transferring between Linux and Windows. That should show up with a
smaller version of the file, so at least try to find the smallest file that
gives you an error.

The other issue may be connection-related, in that Python is not getting the
complete file, so you might want to read the file directly from Python first
(a minimal check is sketched below) or change some of XP's virtual memory
settings.
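
A minimal check along those lines (using the UNC path from the report):

with open(r'\\myserver\mydir\big.npy', 'rb') as f:
    raw = f.read()
print(len(raw))   # compare against the size of the local copy
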
There have been significant changes between XP and Win 7 in that
regard:
http://en.wikipedia.org/wiki/Comparison_of_Windows_Vista_and_Windows_XP
"File copy operations proved to be one area where Vista performs
better than XP. A 1.25 GB file was copied from a network share to each
desktop. For XP, it took 2 minutes and 54 seconds, for Vista with SP1
it took 2 minutes and 29 seconds. This test was done by CRN Test
Center, but it omitted the fact that a machine running Vista takes
circa one extra minute to boot, if compared to a similar one operating
XP. However, the Vista implementation of the file copy is arguably
more complete and correct as the file does not register as being
transferred until it has completely transferred. In Windows XP, the
file completed dialogue box is displayed prior to the file actually
finishing its copy or transfer, with the file completing after the
dialogue is displayed. This can cause an issue if the storage device
is ejected prior to the file being successfully transferred or copied
in windows XP due to the dialogue box's premature prompt.


Bruce
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] netcdf to grib format

2011-04-08 Thread Nadav Horesh
Wikipedia has this link

http://www.pyngl.ucar.edu/Nio.shtml

 Does it do the job?

  Nadav


From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
On Behalf Of dileep kunjaai [dileepkunj...@gmail.com]
Sent: 08 April 2011 15:21
To: Discussion of Numerical Python
Subject: [Numpy-discussion] netcdf to grib format

Dear sir,
 Is there any tool for changing the 'netcdf' file format to 'grib' format in
python or cdat?

--
DILEEPKUMAR. R
J R F, IIT DELHI

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Partial least squares

2011-03-24 Thread Nadav Horesh
I am looking for partial least squares code for factoring two (X, Y)
matrices. I found the following, but they do not work for me:
1. MDP: Factors only one matrix (am I wrong?)
2. pychem: Windows-only code (I use Linux)
3. chemometrics from Koders: I get a singularity error.
4. pca_module (by Risvik): same problem as MDP

Any suggestion?

   Nadav.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Partial least squares

2011-03-24 Thread Nadav Horesh
Yes, he is. I'll work on it early next week, and post any comments I have.
Documentation quality: the code docs refer to an excellent reference [Wegelin
et al. 2000], so no real problem here. If the reference is critical I would
suggest one of the following:
1. Put a link to the document.
2. Is it possible to copy the paper to the project site? As the paper becomes
old (on the internet time scale), it becomes volatile, and may vanish while
the project is still alive.

  Nadav.


From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
On Behalf Of Gael Varoquaux [gael.varoqu...@normalesup.org]
Sent: 24 March 2011 22:04
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Partial least squares

On Thu, Mar 24, 2011 at 08:15:12PM +0100, Olivier Grisel wrote:
 2011/3/24 Nadav Horesh nad...@visionsense.com:
  I am looking for partial least squares code for factoring two (X, Y)
  matrices. I found the following, but they do not work for me:
  1. MDP: Factors only one matrix (am I wrong?)
  2. pychem: Windows only code (I use Linux)
  3. chemometrics from Koders: I get a singularity error.
  4. pca_module (By Risvik): same problem as MDP

  Any suggestion?

 There is one in the master of scikits.learn:

   
 https://github.com/scikit-learn/scikit-learn/blob/master/scikits/learn/pls.py
   
 https://github.com/scikit-learn/scikit-learn/blob/master/examples/plot_pls.py

Olivier shoots faster than I do :)

G
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [OT] any image io module that works with python3?

2011-03-15 Thread Nadav Horesh
Just downloaded it and got the same problems. I'll try to debug and provide a
decent analysis, but it may take some days as I am busy with other things.

  Thank you,
  Nadav.


From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
On Behalf Of Christoph Gohlke [cgoh...@uci.edu]
Sent: 14 March 2011 21:55
To: numpy-discussion@scipy.org
Subject: Re: [Numpy-discussion] [OT] any image io module that works with
python3?

On 3/12/2011 9:56 PM, Nadav Horesh wrote:
 This led to another error, probably due to line 68 in map.h. As much as I
 could trace it, ob_type is a member of PyObject, not of PyTypeObject. I have
 no clue how to resolve this.

I just tried PIL-1.1.7-py3 on an Ubuntu 64 bit system: after one change
to map.c it builds OK (without sane support) and passes all tests but
one numpy int64 bit related test. Please download again from
http://www.lfd.uci.edu/~gohlke/pythonlibs/#pil to make sure you have
the same sources and try build again.

Christoph


  Nadav.
 
 From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
 On Behalf Of Christoph Gohlke [cgoh...@uci.edu]
 Sent: 13 March 2011 00:37
 To: numpy-discussion@scipy.org
Subject: Re: [Numpy-discussion] [OT] any image io module that works with
python3?

 On 3/12/2011 12:47 PM, Nadav Horesh wrote:
 After the  replacement of ö with o, the installation went without errors, 
 but:

 nadav@nadav_home ~ $ python3
 Python 3.1.3 (r313:86834, Feb 25 2011, 11:08:33)
 [GCC 4.4.4] on linux2
 Type "help", "copyright", "credits" or "license" for more information.
 >>> import _imaging
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
 ImportError: /usr/lib64/python3.1/site-packages/PIL/_imaging.so: undefined
 symbol: Py_FindMethod

 Py_FindMethod should be excluded by `#ifndef PY3` or similar
 preprocessor statements. There is a typo in map.c line 65: change
 `#ifdef PY3` to `#ifndef PY3` and clean your build directory before
 rebuilding.

 Christoph


Thank you,

   Nadav.
 
 From: numpy-discussion-boun...@scipy.org 
 [numpy-discussion-boun...@scipy.org] On Behalf Of Christoph Gohlke 
 [cgoh...@uci.edu]
 Sent: 12 March 2011 21:49
 To: numpy-discussion@scipy.org
 Subject: Re: [Numpy-discussion] [OT] any image io module that   works   with 
python3?

 On 3/12/2011 8:45 AM, Nadav Horesh wrote:
 I forgot to mention that I work on linux (gentoo x86-64). Here are my
 achievements so far:

 1. PythonMagick: Needs boost, which I do not have available on python3
Boost works on Python 3.1. You might need to compile it.
 2. Pygame: I have the stable version (1.9.1); should it work?
You need the developer version from svn.
 3. FreeImage: I installed FreeImagePy on python3, but it doesn't work yet.
FreeImagePy is unmaintained, does not work on Python 3, and has problems
on 64 bit platforms. Just wrap the functions you need in ctypes.
 4. PIL: I patched setup.py and map.c so "python3 setup.py build" works,
 but:
Try replacing "Hans Häggström" with "Hans Haggstrom" in PIL/WalImageFile.py

 Christoph


 nadav@nadav_home /dev/shm/PIL-1.1.7-py3 $ sudo python3.1  setup.py install
 /usr/lib64/python3.1/distutils/dist.py:259: UserWarning: Unknown 
 distribution option: 'ext_comp_args'
  warnings.warn(msg)
 running install
 running build
 running build_py
 running build_ext
 
 PIL 1.1.7 SETUP SUMMARY
 
 version   1.1.7
 platform  linux2 3.1.3 (r313:86834, Feb 25 2011, 11:08:33)
  [GCC 4.4.4]
 
 --- TKINTER support available
 --- JPEG support available
 --- ZLIB (PNG/ZIP) support available
 --- FREETYPE2 support available
 --- LITTLECMS support available

 .
 .
 .

 byte-compiling /usr/lib64/python3.1/site-packages/PIL/WalImageFile.py to 
 WalImageFile.pyc
 Traceback (most recent call last):
   File "setup.py", line 520, in <module>
     setup(*(), **configuration)  # old school :-)
   File "/usr/lib64/python3.1/distutils/core.py", line 149, in setup
     dist.run_commands()
   File "/usr/lib64/python3.1/distutils/dist.py", line 919, in
 run_commands
     self.run_command(cmd)
   File "/usr/lib64/python3.1/distutils/dist.py", line 938, in run_command
     cmd_obj.run()
   File "/usr/lib64/python3.1/distutils/command/install.py", line 592, in
 run
     self.run_command(cmd_name)
   File "/usr/lib64/python3.1/distutils/cmd.py", line 315, in run_command
     self.distribution.run_command(command)
   File "/usr/lib64/python3.1/distutils/dist.py", line 938, in run_command
     cmd_obj.run()
   File "/usr/lib64/python3.1/distutils/command/install_lib.py", line 98,
 in run

[Numpy-discussion] תשובה: [OT] any image io module that works with python3?

2011-03-14 Thread Nadav Horesh
The installation is OK. The problem is that on my work PC I do not have PIL
installed. So:
In [6]: import scikits.image.io as io
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
/home/nadav/<ipython-input-6-62f17e91233f> in <module>()
----> 1 import scikits.image.io as io

/usr/lib64/python3.1/site-packages/scikits.image-0.3dev-py3.1-linux-x86_64.egg/scikits/image/io/__init__.py
 in module()
 11 # Add this plugin so that we can read images by default

 12 use_plugin('null')
---> 13 use_plugin('pil')
 14 
 15 from .sift import *

/usr/lib64/python3.1/site-packages/scikits.image-0.3dev-py3.1-linux-x86_64.egg/scikits/image/io/_plugins/plugin.py
 in use(name, kind)
122 
123 if not name in available(loaded=True):
--> 124 _load(name)
125 
126 for k in kind:

/usr/lib64/python3.1/site-packages/scikits.image-0.3dev-py3.1-linux-x86_64.egg/scikits/image/io/_plugins/plugin.py
 in _load(plugin)
    178 modname = plugin_module_name[plugin]
    179 plugin_module = __import__('scikits.image.io._plugins.' + modname,
--> 180                            fromlist=[modname])
181 
182 provides = plugin_provides[plugin]

/usr/lib64/python3.1/site-packages/scikits.image-0.3dev-py3.1-linux-x86_64.egg/scikits/image/io/_plugins/pil_plugin.py
 in <module>()
      6 from PIL import Image
      7 except ImportError:
----> 8     raise ImportError("The Python Image Library could not be found. "
      9                       "Please refer to http://pypi.python.org/pypi/PIL/ "
     10                       "for further instructions.")

ImportError: The Python Image Library could not be found. Please refer to 
http://pypi.python.org/pypi/PIL/ for further instructions.

Shouldn't it skip quietly on missing plugins?
(It is easy to bypass with a patch, but I am sure you have some design
considerations here.)

  Nadav.


-----Original Message-----
From: numpy-discussion-boun...@scipy.org
[mailto:numpy-discussion-boun...@scipy.org] On Behalf Of Stéfan van der Walt
Sent: Monday, March 14, 2011 00:16
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] [OT] any image io module that works with python3?

Hi Nadav

On Sun, Mar 13, 2011 at 8:20 PM, Nadav Horesh nad...@visionsense.com wrote:
 Just tested the installation (after git clone ...). I had to correct the
 following lines in _build.py to pass installation:
 lines 72 and 75 should be:
    f0 = open(f0,'br')
    f1 = open(f1,'br')

Thanks so much for testing and for the patch; I've pushed your changes:

https://github.com/stefanv/scikits.image/commit/b47ae98ffb92e2de33d9e530201e402e04d865d3

Are you able to load images now?

Cheers
Stéfan
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Re: [OT] any image io module that works with python3?

2011-03-14 Thread Nadav Horesh
After the following corrections, it works on both the 2.7 and 3.1 python
versions. My limited test included:
python2.7: imread and imshow using PIL, freeimage and qt
python3.1: imread via freeimage and imshow via qt

 Thank you very much,
   Nadav

(The file _build.patch is a patch for _build.py)



From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
On Behalf Of Stéfan van der Walt [ste...@sun.ac.za]
Sent: 14 March 2011 17:09
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Re: [OT] any image io module that works with
python3?

On Mon, Mar 14, 2011 at 9:55 AM, Nadav Horesh nad...@visionsense.com wrote:
 The instillation is OK. The problem is that on my wok PC I do not have PIL 
 installed. So:

Thanks, you are right of course: no plugin should be required upon
import.  I now put the use_plugin statement inside a try: except:
block.

Cheers
Stéfan
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion



freeimage_plugin.py
Description: freeimage_plugin.py


_build.patch
Description: _build.patch
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [OT] any image io module that works with python3?

2011-03-13 Thread Nadav Horesh
Just tested the installation (after git clone ...). I had to correct the
following lines in _build.py to pass installation:
lines 72 and 75 should be:
f0 = open(f0,'br')
f1 = open(f1,'br')

  Nadav.

From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
On Behalf Of Stéfan van der Walt [ste...@sun.ac.za]
Sent: 13 March 2011 17:19
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] [OT] any image io module that works with
python3?

On Sat, Mar 12, 2011 at 2:35 PM, Zachary Pincus zachary.pin...@yale.edu wrote:
 Here's a ctypes interface to FreeImage that I wrote a while back and
 was since cleaned up (and maintained) by the scikits.image folk:

https://github.com/stefanv/scikits.image/blob/master/scikits/image/io/_plugins/freeimage_plugin.py

 If it doesn't work out of the box on python 3, then it should be
 pretty simple to fix.

I now fixed the scikits.image build process under Python 3, so this
should be easy to try out.

Regards
Stéfan
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion




[Numpy-discussion] [OT] any image io module that works with python3?

2011-03-12 Thread Nadav Horesh
Having numpy, scipy, and matplotlib working reasonably with python3, a major
piece of code I miss for a major python3 migration is image IO. I found that
pylab's imread works fine for png images, but I need to read all the other
image formats, as well as write png and jpeg output.

 Any hints (including advice on how to easily construct my own module) are
appreciated.

   Nadav.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [OT] any image io module that works with python3?

2011-03-12 Thread Nadav Horesh
I forgot to mention that I work on linux (gentoo x86-64). Here are my
achievements so far:

1. PythonMagick: Needs boost, which I do not have available on python3
2. Pygame: I have the stable version (1.9.1); should it work?
3. FreeImage: I installed FreeImagePy on python3, but it doesn't work yet.
4. PIL: I patched setup.py and map.c so "python3 setup.py build" works,
but:

nadav@nadav_home /dev/shm/PIL-1.1.7-py3 $ sudo python3.1  setup.py install
/usr/lib64/python3.1/distutils/dist.py:259: UserWarning: Unknown distribution 
option: 'ext_comp_args'
  warnings.warn(msg)
running install
running build
running build_py
running build_ext

PIL 1.1.7 SETUP SUMMARY

version   1.1.7
platform  linux2 3.1.3 (r313:86834, Feb 25 2011, 11:08:33)
  [GCC 4.4.4]

--- TKINTER support available
--- JPEG support available
--- ZLIB (PNG/ZIP) support available
--- FREETYPE2 support available
--- LITTLECMS support available

.
.
.

byte-compiling /usr/lib64/python3.1/site-packages/PIL/WalImageFile.py to 
WalImageFile.pyc
Traceback (most recent call last):
  File "setup.py", line 520, in <module>
    setup(*(), **configuration)  # old school :-)
  File "/usr/lib64/python3.1/distutils/core.py", line 149, in setup
    dist.run_commands()
  File "/usr/lib64/python3.1/distutils/dist.py", line 919, in run_commands
    self.run_command(cmd)
  File "/usr/lib64/python3.1/distutils/dist.py", line 938, in run_command
    cmd_obj.run()
  File "/usr/lib64/python3.1/distutils/command/install.py", line 592, in run
    self.run_command(cmd_name)
  File "/usr/lib64/python3.1/distutils/cmd.py", line 315, in run_command
    self.distribution.run_command(command)
  File "/usr/lib64/python3.1/distutils/dist.py", line 938, in run_command
    cmd_obj.run()
  File "/usr/lib64/python3.1/distutils/command/install_lib.py", line 98, in run
    self.byte_compile(outfiles)
  File "/usr/lib64/python3.1/distutils/command/install_lib.py", line 135, in byte_compile
    dry_run=self.dry_run)
  File "/usr/lib64/python3.1/distutils/util.py", line 560, in byte_compile
    compile(file, cfile, dfile)
  File "/usr/lib64/python3.1/py_compile.py", line 137, in compile
    codestring = f.read()
  File "/usr/lib64/python3.1/codecs.py", line 300, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xe4 in position 1909:
invalid continuation byte



Any idea on how to correct it? Any elegant way to avoid byte compiling?

  Nadav



From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
On Behalf Of Zachary Pincus [zachary.pin...@yale.edu]
Sent: 12 March 2011 14:35
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] [OT] any image io module that works with
python3?

Here's a ctypes interface to FreeImage that I wrote a while back and
was since cleaned up (and maintained) by the scikits.image folk:

https://github.com/stefanv/scikits.image/blob/master/scikits/image/io/_plugins/freeimage_plugin.py

If it doesn't work out of the box on python 3, then it should be
pretty simple to fix.

Zach



On Mar 12, 2011, at 4:40 AM, Christoph Gohlke wrote:



 On 3/12/2011 1:08 AM, Nadav Horesh wrote:
 Having numpy, scipy, and matplotlib working reasonably with
 python3, a
 major piece of code I miss for a major python3 migration is an
 image IO.
 I found that pylab's imread works fine for png image, but I need to
 read
 all the other image format as well as png and jpeg output.
 Any hints (including advice on how to easily construct my own module) are
 appreciated.
 Nadav.


 On Windows, PIL (private port at
 http://www.lfd.uci.edu/~gohlke/pythonlibs/#pil), PythonMagick
 http://www.imagemagick.org/download/python/, and pygame 1.9.2pre
 http://www.pygame.org are working reasonably well for image IO. Also
 the FreeImage library http://freeimage.sourceforge.net/ is easy to
 use
 with ctypes http://docs.python.org/py3k/library/ctypes.html.

 Christoph
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion



Re: [Numpy-discussion] [OT] any image io module that works with python3?

2011-03-12 Thread Nadav Horesh
It started to work after processing it with 2to3 and omitting the conversion
of file names with the str function (I supply the file names as bytes).
Issues:
  1. It refuses to save in jpeg format.
  2. There is a warning of a possible segfault on 64-bit machines (which is
the target platform).

I'll keep on testing it.

 Thank you
Nadav.


From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
On Behalf Of Zachary Pincus [zachary.pin...@yale.edu]
Sent: 12 March 2011 14:35
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] [OT] any image io module that works with
python3?

Here's a ctypes interface to FreeImage that I wrote a while back and
was since cleaned up (and maintained) by the scikits.image folk:

https://github.com/stefanv/scikits.image/blob/master/scikits/image/io/_plugins/freeimage_plugin.py

If it doesn't work out of the box on python 3, then it should be
pretty simple to fix.

Zach



On Mar 12, 2011, at 4:40 AM, Christoph Gohlke wrote:



 On 3/12/2011 1:08 AM, Nadav Horesh wrote:
 Having numpy, scipy, and matplotlib working reasonably with
 python3, a
 major piece of code I miss for a major python3 migration is an
 image IO.
 I found that pylab's imread works fine for png image, but I need to
 read
 all the other image format as well as png and jpeg output.
 Any hints (including advice on how to easily construct my own module) are
 appreciated.
 Nadav.


 On Windows, PIL (private port at
 http://www.lfd.uci.edu/~gohlke/pythonlibs/#pil), PythonMagick
 http://www.imagemagick.org/download/python/, and pygame 1.9.2pre
 http://www.pygame.org are working reasonably well for image IO. Also
 the FreeImage library http://freeimage.sourceforge.net/ is easy to
 use
 with ctypes http://docs.python.org/py3k/library/ctypes.html.

 Christoph
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [OT] any image io module that works with python3?

2011-03-12 Thread Nadav Horesh
After the replacement of ö with o, the installation went without errors, but:

nadav@nadav_home ~ $ python3
Python 3.1.3 (r313:86834, Feb 25 2011, 11:08:33) 
[GCC 4.4.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import _imaging
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: /usr/lib64/python3.1/site-packages/PIL/_imaging.so: undefined
symbol: Py_FindMethod

 Thank you,

Nadav.

From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
On Behalf Of Christoph Gohlke [cgoh...@uci.edu]
Sent: 12 March 2011 21:49
To: numpy-discussion@scipy.org
Subject: Re: [Numpy-discussion] [OT] any image io module that works with
python3?

On 3/12/2011 8:45 AM, Nadav Horesh wrote:
 I forgot to mention that I work on linux (gentoo x86-64). Here are my
 achievements so far:

 1. PythonMagick: Needs boost, which I do not have available on python3
Boost works on Python 3.1. You might need to compile it.
 2. Pygame: I have the stable version (1.9.1); should it work?
You need the developer version from svn.
 3. FreeImage: I installed FreeImagePy on python3, but it doesn't work yet.
FreeImagePy is unmaintained, does not work on Python 3, and has problems
on 64 bit platforms. Just wrap the functions you need in ctypes.
 4. PIL: I patched setup.py and map.c so "python3 setup.py build" works,
 but:
Try replacing "Hans Häggström" with "Hans Haggstrom" in PIL/WalImageFile.py

Christoph


 nadav@nadav_home /dev/shm/PIL-1.1.7-py3 $ sudo python3.1  setup.py install
 /usr/lib64/python3.1/distutils/dist.py:259: UserWarning: Unknown distribution 
 option: 'ext_comp_args'
warnings.warn(msg)
 running install
 running build
 running build_py
 running build_ext
 
 PIL 1.1.7 SETUP SUMMARY
 
 version   1.1.7
 platform  linux2 3.1.3 (r313:86834, Feb 25 2011, 11:08:33)
[GCC 4.4.4]
 
 --- TKINTER support available
 --- JPEG support available
 --- ZLIB (PNG/ZIP) support available
 --- FREETYPE2 support available
 --- LITTLECMS support available

 .
 .
 .

 byte-compiling /usr/lib64/python3.1/site-packages/PIL/WalImageFile.py to
 WalImageFile.pyc
 Traceback (most recent call last):
   File "setup.py", line 520, in <module>
     setup(*(), **configuration)  # old school :-)
   File "/usr/lib64/python3.1/distutils/core.py", line 149, in setup
     dist.run_commands()
   File "/usr/lib64/python3.1/distutils/dist.py", line 919, in run_commands
     self.run_command(cmd)
   File "/usr/lib64/python3.1/distutils/dist.py", line 938, in run_command
     cmd_obj.run()
   File "/usr/lib64/python3.1/distutils/command/install.py", line 592, in run
     self.run_command(cmd_name)
   File "/usr/lib64/python3.1/distutils/cmd.py", line 315, in run_command
     self.distribution.run_command(command)
   File "/usr/lib64/python3.1/distutils/dist.py", line 938, in run_command
     cmd_obj.run()
   File "/usr/lib64/python3.1/distutils/command/install_lib.py", line 98, in
 run
     self.byte_compile(outfiles)
   File "/usr/lib64/python3.1/distutils/command/install_lib.py", line 135, in
 byte_compile
     dry_run=self.dry_run)
   File "/usr/lib64/python3.1/distutils/util.py", line 560, in byte_compile
     compile(file, cfile, dfile)
   File "/usr/lib64/python3.1/py_compile.py", line 137, in compile
     codestring = f.read()
   File "/usr/lib64/python3.1/codecs.py", line 300, in decode
     (result, consumed) = self._buffer_decode(data, self.errors, final)
 UnicodeDecodeError: 'utf8' codec can't decode byte 0xe4 in position 1909:
 invalid continuation byte



 Any idea on how to correct it? Any elegant way to avoid byte compiling?

Nadav


 
 From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
 On Behalf Of Zachary Pincus [zachary.pin...@yale.edu]
 Sent: 12 March 2011 14:35
 To: Discussion of Numerical Python
 Subject: Re: [Numpy-discussion] [OT] any image io module that works with  
   python3?

 Here's a ctypes interface to FreeImage that I wrote a while back and
 was since cleaned up (and maintained) by the scikits.image folk:

 https://github.com/stefanv/scikits.image/blob/master/scikits/image/io/_plugins/freeimage_plugin.py

 If it doesn't work out of the box on python 3, then it should be
 pretty simple to fix.

 Zach



 On Mar 12, 2011, at 4:40 AM, Christoph Gohlke wrote:



 On 3/12/2011 1:08 AM, Nadav Horesh wrote:
 Having numpy, scipy, and matplotlib working reasonably with
 python3, a
 major piece of code I miss for a major python3 migration is an
 image IO.
 I found that pylab's imread works fine for png image, but I need to
 read
 all the other image formats as well as png and jpeg output.
 Any hints (including advices how

Re: [Numpy-discussion] [OT] any image io module that works with python3?

2011-03-12 Thread Nadav Horesh
This led to another error, probably due to line 68 in map.h. As much as I could 
trace it, ob_type is a member of PyObject, not of PyTypeObject. I have no clue 
how to resolve this.

Nadav.

From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
On Behalf Of Christoph Gohlke [cgoh...@uci.edu]
Sent: 13 March 2011 00:37
To: numpy-discussion@scipy.org
Subject: Re: [Numpy-discussion] [OT] any image io   module  thatworks   
withpython3?

On 3/12/2011 12:47 PM, Nadav Horesh wrote:
 After the  replacement of ö with o, the installation went without errors, but:

 nadav@nadav_home ~ $ python3
 Python 3.1.3 (r313:86834, Feb 25 2011, 11:08:33)
 [GCC 4.4.4] on linux2
 Type "help", "copyright", "credits" or "license" for more information.
 >>> import _imaging
 Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
 ImportError: /usr/lib64/python3.1/site-packages/PIL/_imaging.so: undefined 
 symbol: Py_FindMethod

Py_FindMethod should be excluded by `#ifndef PY3` or similar
preprocessor statements. There is a typo in map.c line 65: change
`#ifdef PY3` to `#ifndef PY3` and clean your build directory before
rebuilding.

Christoph


   Thank you,

  Nadav.
 
 From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
 On Behalf Of Christoph Gohlke [cgoh...@uci.edu]
 Sent: 12 March 2011 21:49
 To: numpy-discussion@scipy.org
 Subject: Re: [Numpy-discussion] [OT] any image io module that   works   with  
   python3?

 On 3/12/2011 8:45 AM, Nadav Horesh wrote:
 I forgot to mention that I work on linux (gentoo x86-64).Here are my 
 achievements till now:

 1. PythonMagick: Needs Boost, which I do not have available on python3
 Boost works on Python 3.1. You might need to compile it.
 2. Pygame: I have the stable version (1.9.1); should it work?
 You need the developer version from svn.
 3. FreeImage: I installed FreeImagePy on python3, but it doesn't work yet.
 FreeImagePy is unmaintained, does not work on Python 3, and has problems
 on 64 bit platforms. Just wrap the functions you need in ctypes.
 4. PIL: I patched setup.py and map.c so python3 setup.py build is working, 
 but:
 Try replacing "Hans Häggström" with "Hans Haggstrom" in PIL/WalImageFile.py

 Christoph


 nadav@nadav_home /dev/shm/PIL-1.1.7-py3 $ sudo python3.1  setup.py install
 /usr/lib64/python3.1/distutils/dist.py:259: UserWarning: Unknown 
 distribution option: 'ext_comp_args'
 warnings.warn(msg)
 running install
 running build
 running build_py
 running build_ext
 
 PIL 1.1.7 SETUP SUMMARY
 
 version   1.1.7
 platform  linux2 3.1.3 (r313:86834, Feb 25 2011, 11:08:33)
 [GCC 4.4.4]
 
 --- TKINTER support available
 --- JPEG support available
 --- ZLIB (PNG/ZIP) support available
 --- FREETYPE2 support available
 --- LITTLECMS support available

 .
 .
 .

 byte-compiling /usr/lib64/python3.1/site-packages/PIL/WalImageFile.py to 
 WalImageFile.pyc
 Traceback (most recent call last):
 File "setup.py", line 520, in <module>
   setup(*(), **configuration)  # old school :-)
 File "/usr/lib64/python3.1/distutils/core.py", line 149, in setup
   dist.run_commands()
 File "/usr/lib64/python3.1/distutils/dist.py", line 919, in run_commands
   self.run_command(cmd)
 File "/usr/lib64/python3.1/distutils/dist.py", line 938, in run_command
   cmd_obj.run()
 File "/usr/lib64/python3.1/distutils/command/install.py", line 592, in run
   self.run_command(cmd_name)
 File "/usr/lib64/python3.1/distutils/cmd.py", line 315, in run_command
   self.distribution.run_command(command)
 File "/usr/lib64/python3.1/distutils/dist.py", line 938, in run_command
   cmd_obj.run()
 File "/usr/lib64/python3.1/distutils/command/install_lib.py", line 98, in run
   self.byte_compile(outfiles)
 File "/usr/lib64/python3.1/distutils/command/install_lib.py", line 135, in byte_compile
   dry_run=self.dry_run)
 File "/usr/lib64/python3.1/distutils/util.py", line 560, in byte_compile
   compile(file, cfile, dfile)
 File "/usr/lib64/python3.1/py_compile.py", line 137, in compile
   codestring = f.read()
 File "/usr/lib64/python3.1/codecs.py", line 300, in decode
   (result, consumed) = self._buffer_decode(data, self.errors, final)
 UnicodeDecodeError: 'utf8' codec can't decode byte 0xe4 in position 1909: 
 invalid continuation byte



 Any idea on how to correct it? Any elegant way to avoid byte compiling?

 Nadav


 
 From: numpy-discussion-boun...@scipy.org 
 [numpy-discussion-boun...@scipy.org] On Behalf Of Zachary Pincus 
 [zachary.pin...@yale.edu]
 Sent: 12 March 2011 14:35
 To: Discussion of Numerical Python
 Subject

Re: [Numpy-discussion] Error in tanh for large complex argument

2011-01-28 Thread Nadav Horesh
A brief history:
I wrote the asinh and acosh functions for the math (or was it cmath?) module for 
python 2.0. It fixed some problems of GvR's implementation, but it was still far 
from perfect, and was replaced shortly after. My 1/4 cent tip: Do not rush --- 
find good code.

  Nadav


From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
On Behalf Of Mark Bakker [mark...@gmail.com]
Sent: 28 January 2011 12:45
To: numpy-discussion@scipy.org
Subject: Re: [Numpy-discussion] Error in tanh for large complex argument


Good point, so we need a better solution that fixes all cases

 I'll file a ticket.

 Incidentally, if tanh(z) is simply programmed as

 (1.0 - exp(-2.0*z)) / (1.0 + exp(-2.0*z))

This will overflow as z -> -\infty. The solution is probably to use a
different expression for Re(z) < 0, and to check how other libraries do
this in case the above still misses something.

 Pauli
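
For illustration, here is a minimal numpy sketch of the "different
expression for Re(z) < 0" idea (my own sketch of how such a fix could
look, not NumPy's actual implementation):

import numpy as np

def stable_tanh(z):
    # pick the exponential that decays in each half-plane, so nothing overflows
    z = np.asarray(z, dtype=complex)
    out = np.empty_like(z)
    pos = z.real >= 0
    e = np.exp(-2.0 * z[pos])          # |e| <= 1 when Re(z) >= 0
    out[pos] = (1.0 - e) / (1.0 + e)
    e = np.exp(2.0 * z[~pos])          # |e| < 1 when Re(z) < 0
    out[~pos] = (e - 1.0) / (e + 1.0)
    return out

print stable_tanh(np.array([1000+0j, -1000+0j]))   # [ 1.+0.j -1.+0.j]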
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Error in tanh for large complex argument

2011-01-27 Thread Nadav Horesh
The C code returns the right result with glibc 2.12.2 (linux 64 + gcc 4.5.2). 
However I get the same nan+nan*j with python.

  Nadav

From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
On Behalf Of Pauli Virtanen [p...@iki.fi]
Sent: 27 January 2011 13:11
To: numpy-discussion@scipy.org
Subject: Re: [Numpy-discussion] Error in tanh for large complex argument

Thu, 27 Jan 2011 11:40:00 +0100, Mark Bakker wrote:
[clip]
 Not for large complex values:

 In [85]: tanh(1000+0j)
 Out[85]: (nan+nan*j)

Yep, it's a bug. Care to file a ticket?

The implementation is just sinh/cosh, which overflows.
The fix is to provide an asymptotic expansion (sgn Re z),
although around the imaginary axis the switch is perhaps
somewhat messy to handle.

OTOH, the glibc-provided C99 function doesn't fare too well either:

#include <math.h>
#include <complex.h>
#include <stdio.h>

int main()
{
    complex double z = 1000;
    double x, y;
    z = ctanh(z); x = creal(z); y = cimag(z);
    printf("%g %g\n", x, y);
    return 0;
}

### <- Prints "0 0" on glibc 2.12.1

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] How to import input data to make ndarray for batch processing?

2010-11-18 Thread Nadav Horesh
Do you want to save the files to disk as 100x100 matrices, or just to read them 
into memory?
Are the files in ascii or binary format?

  Nadav
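
If the files are ascii, here is a minimal sketch of the kind of loop I have 
in mind (file names 0..500 follow your description; the output naming is 
just illustrative):

import numpy as np

for name in range(501):
    data = np.loadtxt(str(name))        # one column of 10,000 numbers
    matrix = data.reshape(100, 100)     # view it as a 100x100 array
    np.savetxt('%d_100x100.txt' % name, matrix)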

From: numpy-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] 
On Behalf Of Venkat [dvr...@gmail.com]
Sent: 18 November 2010 16:49
To: Discussion of Numerical Python
Subject: [Numpy-discussion] How to import input data to make ndarray for
batch processing?

Hi All,
I am new to Numpy (also Scipy).

I am trying to reshape my text data which is in one single column (10,000 rows).
I want the data to be in 100x100 array form.

I have many files to convert like this. All of them have file names like 
0, 1, 2, ..., 500, without any extension.
Actually, I renamed the actual files so that I can import them in Matlab for batch 
processing.
Since Matlab is also new to me, I thought I would try Numpy first.

Can anybody help me write the script to do this kind of batch 
processing?

Thanks in advance,
Venkat
--
***
D.Venkat
Research Scholar
Dept of Physics
IISc, Bangalore
India-560 012

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Precision difference between dot and sum

2010-11-02 Thread Nadav Horesh
... Also, IIRC, 1.0 cannot be represented exactly as a float,

 Not true

   Nadav
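
For reference, a quick check of both points (sizes illustrative):

import numpy as np

print float(1.0).hex()            # 0x1.0000000000000p+0 -- 1.0 is exact
a = np.ones(10**6) * 0.1
s1 = a.sum()                      # summation in one order
s2 = np.dot(a, np.ones_like(a))   # BLAS dot, possibly another order
print s1, s2, s1 - s2             # may differ in the last bits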


-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Matthieu Brucher
Sent: Tue 02-Nov-10 11:05
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Precision difference between dot and sum
 
 It would be great if someone could let me know why this happens.

 They don't use the same implementation, so such tiny differences are
 expected - having exactly the same solution would have been surprising,
 actually. You may be surprised about the difference for such a trivial
 operation, but keep in mind that dot is implemented with highly
 optimized CPU instructions (that is if you use ATLAS or similar library).

Also, IIRC, 1.0 cannot be represented exactly as a float, so the dot
way may be more wrong than the sum way.

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [numpy-discussion] Transform 3d data

2010-10-19 Thread Nadav Horesh
You can use mgrid, roughly as follows (I may have mistakes, but the 
direction should be clear):

def transform_3d_data_(field,lwrbnd,uprbnd):
  shape = field.shape
  XYZ = np.mgrid[lwrbnd[0]:uprbnd[0]:shape[0], lwrbnd[1]:uprbnd[1]:shape[1], 
lwrbnd[2]:uprbnd[2]:shape[2]]
  vectors = fields.reshape(-1,3)
  np.savetxt(np.hstack((XYZ.reshape(3,-1).T, vectors)))


  Nadav
  
 

-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Thomas Königstein
Sent: Tue 19-Oct-10 12:05
To: numpy-discussion@scipy.org
Subject: [Numpy-discussion] [numpy-discussion] Transform 3d data
 
Hello everyone,

I have the following problem:

I acquire a (evenly spaced) 3d field of 3d vectors from a HDF5 data file:

 >>> import tables
 >>> field=tables.openFile("test.h5").root.YeeMagField.read()

now, the data is organized in nested arrays... so, when I have, say, 300
data points on the x-axis, 200 data points on the y-axis and 100 data points
on the z-axis, I get an array with the shape

 >>> field.shape
 (300, 200, 100, 3)

When I now want to see a 3D arrow-plot of this field, I use:

 >>> from enthought.mayavi import mlab as m
 >>> x,y,z=field.transpose()
 >>> m.quiver3d(x,y,z)

and this works just fine. Here, the arrays (x and y and z) *each* contain
one field component (i,e. into one spatial direction) at 300x200x100 points
in a 3D array.

Now, I would like to have this data in another format, so I can for example
save it to a textfile with pylab.savetxt. What I would like are six arrays,
each 1d, three for the coordinates and three for the field components. Since
I didn't know any better, I wrote the following procedure:

def transform_3d_data_(field,lowerBounds,upperBounds): #field is the same as
above, lowerBounds and upperBounds each contain three values for x,y,z
min/max
import pylab as p
xx,yy,zz,ex,ey,ez=list(),list(),list(),list(),list(),list()   #xx,yy,zz
will become the spatial coordinates, ex,ey,ez will become the field
components
for xi in range(field.shape[0]): #for each x coordinate...
for yi in range(field.shape[1]): #for each y coordinate...
for zi in range(field.shape[2]): #for each z coordinate...

 
xx.append(lowerBounds[0]+xi*(upperBounds[0]-lowerBounds[0])/float(field.shape[0]))
#append this

 
yy.append(lowerBounds[1]+yi*(upperBounds[1]-lowerBounds[1])/float(field.shape[1]))
#x, y, z coordinate

 
zz.append(lowerBounds[2]+zi*(upperBounds[2]-lowerBounds[2])/float(field.shape[2]))
#to xx, yy, zz 
ex.append(field[xi][yi][zi][0]) #and also
ey.append(field[xi][yi][zi][1]) #add this field composition
ez.append(field[xi][yi][zi][2]) #to ex, ey, ez.
xx,yy,zz,ex,ey,ez=[p.array(_) for _ in [xx,yy,zz,ex,ey,ez]]
return xx,yy,zz,ex,ey,ez

, so I get the desired six 1D-arrays xx,yy,zz for the coordinates and
ex,ey,ez for the field components. It works.

Now my question: there has to be a better way to get this re-organization,
right? This one here is much too slow, obviously. Is there maybe a single
command for pylab that does this?

Thanks in advance, cheers

Thomas

PS. I'm new to this messaging board, and I was wondering if there is a
normal forum as well? I can't even search through the archives at
http://mail.scipy.org/pipermail/numpy-discussion/ :( have there ever been
discussions/initiatives about porting the mailing list archives for example
to a phpBB based forum?

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [numpy-discussion] Transform 3d data

2010-10-19 Thread Nadav Horesh
Of course there is an (at least one) error:
the line should be:

XYZ =
np.mgrid[lwrbnd[0]:uprbnd[0]:shape[0]*1j,lwrbnd[1]:uprbnd[1]:shape[1]*1j, 
lwrbnd[2]:uprbnd[2]:shape[2]*1j]
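
Putting the correction into the original function, a sketch of the whole 
thing (the output file name is illustrative):

import numpy as np

def transform_3d_data_(field, lwrbnd, uprbnd):
    nx, ny, nz = field.shape[:3]
    # complex steps make mgrid produce nx/ny/nz evenly spaced samples
    XYZ = np.mgrid[lwrbnd[0]:uprbnd[0]:nx*1j,
                   lwrbnd[1]:uprbnd[1]:ny*1j,
                   lwrbnd[2]:uprbnd[2]:nz*1j]
    vectors = field.reshape(-1, 3)
    # six columns: x, y, z, ex, ey, ez
    np.savetxt('field.txt', np.hstack((XYZ.reshape(3, -1).T, vectors)))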


On Tue, 2010-10-19 at 14:10 +0200, Nadav Horesh wrote:
 You can use mgrid, roughly as follows (I may have mistakes, but the
 direction should be clear):
 
 def transform_3d_data_(field,lwrbnd,uprbnd): 
   shape = field.shape 
   XYZ = np.mgrid[lwrbnd[0]:uprbnd[0]:shape[0],
 lwrbnd[1]:uprbnd[1]:shape[1], lwrbnd[2]:uprbnd[2]:shape[2]] 
   vectors = fields.reshape(-1,3) 
   np.savetxt(np.hstack((XYZ.reshape(3,-1).T, vectors)))
 
 
   Nadav 
   
  
 
 -Original Message- 
 From: numpy-discussion-boun...@scipy.org on behalf of Thomas
 Königstein 
 Sent: Tue 19-Oct-10 12:05 
 To: numpy-discussion@scipy.org 
 Subject: [Numpy-discussion] [numpy-discussion] Transform 3d data 
   
 Hello everyone,
 
 I have the following problem:
 
 I acquire a (evenly spaced) 3d field of 3d vectors from a HDF5 data
 file:
 
  >>> import tables 
  >>> field=tables.openFile("test.h5").root.YeeMagField.read()
 
 now, the data is organized in nested arrays... so, when I have, say,
 300 
 data points on the x-axis, 200 data points on the y-axis and 100 data
 points 
 on the z-axis, I get an array with the shape
 
  >>> field.shape 
  (300, 200, 100, 3)
 
 When I now want to see a 3D arrow-plot of this field, I use:
 
  >>> from enthought.mayavi import mlab as m 
  >>> x,y,z=field.transpose() 
  >>> m.quiver3d(x,y,z)
 
 and this works just fine. Here, the arrays (x and y and z) *each*
 contain 
 one field component (i,e. into one spatial direction) at 300x200x100
 points 
 in a 3D array.
 
 Now, I would like to have this data in another format, so I can for
 example 
 save it to a textfile with pylab.savetxt. What I would like are six
 arrays, 
 each 1d, three for the coordinates and three for the field components.
 Since 
 I didn't know any better, I wrote the following procedure:
 
 def transform_3d_data_(field,lowerBounds,upperBounds): #field is the
 same as 
 above, lowerBounds and upperBounds each contain three values for
 x,y,z 
 min/max 
 import pylab as p 
 xx,yy,zz,ex,ey,ez=list(),list(),list(),list(),list(),list()
 #xx,yy,zz 
 will become the spatial coordinates, ex,ey,ez will become the field 
 components 
 for xi in range(field.shape[0]): #for each x coordinate... 
 for yi in range(field.shape[1]): #for each y coordinate... 
 for zi in range(field.shape[2]): #for each z coordinate...
 
  
 xx.append(lowerBounds[0]+xi*(upperBounds[0]-lowerBounds[0])/float(field.shape[0]))
  
 #append this
 
  
 yy.append(lowerBounds[1]+yi*(upperBounds[1]-lowerBounds[1])/float(field.shape[1]))
  
 #x, y, z coordinate
 
  
 zz.append(lowerBounds[2]+zi*(upperBounds[2]-lowerBounds[2])/float(field.shape[2]))
  
 #to xx, yy, zz  
 ex.append(field[xi][yi][zi][0]) #and also 
 ey.append(field[xi][yi][zi][1]) #add this field
 composition 
 ez.append(field[xi][yi][zi][2]) #to ex, ey, ez. 
 xx,yy,zz,ex,ey,ez=[p.array(_) for _ in [xx,yy,zz,ex,ey,ez]] 
 return xx,yy,zz,ex,ey,ez
 
 , so I get the desired six 1D-arrays xx,yy,zz for the coordinates and 
 ex,ey,ez for the field components. It works.
 
 Now my question: there has to be a better way to get this
 re-organization, 
 right? This one here is much too slow, obviously. Is there maybe a
 single 
 command for pylab that does this?
 
 Thanks in advance, cheers
 
 Thomas
 
 PS. I'm new to this messaging board, and I was wondering if there is
 a 
 normal forum as well? I can't even search through the archives at 
 http://mail.scipy.org/pipermail/numpy-discussion/ :( have there ever
 been 
 discussions/initiatives about porting the mailing list archives for
 example 
 to a phpBB based forum?
 
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Meshgrid with Huge Arrays

2010-10-07 Thread Nadav Horesh
You should avoid meshgrid, as follows:

...

#3D ARRAY
XArray = np.arange(0, NrHorPixels, 1./sqrt(NrCellsPerPixel))
YArray = np.arange(0, NrVerPixels, 1./sqrt(NrCellsPerPixel))
Z = Amplitude*exp(-(((XArray-GaussianCenterX)**2/(2*SigmaX**2))
                  + ((YArray[:,None]-GaussianCenterY)**2/(2*SigmaY**2))))
#                           ^ note the [:,None]
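
For reference, a quick check (with small illustrative sizes) that the 
broadcast expression matches the meshgrid version:

import numpy as np

x = np.arange(4.)
y = np.arange(3.)
X, Y = np.meshgrid(x, y)                    # two full (3, 4) arrays
full = (X - 1)**2 + (Y - 2)**2
bcast = (x - 1)**2 + (y[:, None] - 2)**2    # no intermediate grids
print np.allclose(full, bcast)              # True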

   Nadav


-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of sicre
Sent: Thu 07-Oct-10 06:19
To: numpy-discussion@scipy.org
Subject: [Numpy-discussion]  Meshgrid with Huge Arrays
 

I do not have good programming skills. I am trying to create a 2048x2048 3D
pixel array where each pixel has 100 cells.
I am using meshgrid in order to create a 3D array, then adding a gaussian
function to the entire array centered at (1024,1024). I am having three
types of errors:

#FIRST ERROR:
  File "/home/sicre/PHASES/Examples/test.py", line 18, in <module>
    XArray, YArray = np.meshgrid(XArray, YArray)
  File "/usr/lib/python2.6/dist-packages/numpy/lib/function_base.py", line
2931, in meshgrid
    X = x.repeat(numRows, axis=0)
ValueError: dimensions too large.

#SECOND ERROR (choosing instead NrCellsPerPixel=36)
Traceback (most recent call last):
  File "/home/sicre/PHASES/Examples/test.py", line 19, in <module>
    Z = np.zeros([len(XArray),len(YArray)])
MemoryError

from pylab import *
import numpy as np

#VARIABLES
NrHorPixels=2048
NrVerPixels=2048
NrCellsPerPixel=100
GaussianCenterX=1024
GaussianCenterY=1024
SigmaX=1
SigmaY=1
Amplitude = 150


#Z = rand(len(XArray),len(YArray))

#Plot
#pcolormesh(Z)
#colorbar()

For sure there are better solutions for what I am trying to do. Can anyone
figure it out? I would appreciate it very much; I've been searching for a
solution/answer for days.
-- 
View this message in context: 
http://old.nabble.com/Meshgrid-with-Huge-Arrays-tp29902859p29902859.html
Sent from the Numpy-discussion mailing list archive at Nabble.com.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Viewer for 2D Numpy arrays (GUI)

2010-09-17 Thread Nadav Horesh
View 2D arrays:
 Most convenient: Matplotlib (imshow)
 As surface plot: Mayavi (Matplotlib has surface plots, but it is slow for 
large arrays)

Modify files:
 I think the IDE spyder could help (and you can use mayavi/matplotlib within)

  Nadav
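
For the imshow route, a minimal sketch (the .npy file name is illustrative):

import numpy as np
import matplotlib.pyplot as plt

a = np.load('data.npy')                 # a 2D array saved with np.save
plt.imshow(a, interpolation='nearest')  # one colored cell per element
plt.colorbar()
plt.show()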


-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Mayank P Jain
Sent: Fri 17-Sep-10 08:16
To: numpy-discussion
Subject: [Numpy-discussion] Viewer for 2D Numpy arrays (GUI)
 
 Currently I am exporting them to csv files, but I wonder if there is a
viewer that can be used with native numpy array files to view and preferably
modify the 2D arrays.
Any help would be appreciated.


Regards
Mayank P Jain

V R TechNiche
Transportation Modeler/Planner
Phone: +91 901 356 0583

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] A bug in boolean indexing?

2010-07-29 Thread Nadav Horesh
The following does not raise an error:

a = np.arange(5)
a[a>0] = a

although a.shape == (5,) while a[a>0].shape == (4,)

I get it on python2.6.5, numpy 1.4.1 on win32, and python 2.6.5, numpy 
2.0.0.dev8469 on linux64.

  Nadav.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] A bug in boolean indexing?

2010-07-29 Thread Nadav Horesh
I was not aware that

a[b] = c

where b is an integer or boolean indexing array, is legal even if

a[b].shape != c.shape.

  Nadav.


-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Alan G Isaac
Sent: Thu 29-Jul-10 14:57
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] A bug in boolean indexing?
 
On 7/29/2010 4:04 AM, Nadav Horesh wrote:
 a = np.arange(5)
 a[a>0] = a


This has nothing to do with reusing ``a``::

  >>> b = np.arange(50)
  >>> a[a>0] = b
  >>> a
  array([0, 0, 1, 2, 3])

Note however that reusing ``a`` is unsafe.
(You will get all zeros.)

fwiw,
Alan Isaac

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy 1.5 or 2.0

2010-07-19 Thread Nadav Horesh
Till now I have seen that numpy 2 plays well with PIL, Matplotlib, scipy and 
maybe some other packages. Should I expect that it might break?


  Nadav.


-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Pauli Virtanen
Sent: Mon 19-Jul-10 10:54
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] numpy 1.5 or 2.0
 
 What is the difference between these two versions? I usually check out
 the svn version (now 2.0) and it compiles well with python 2.6, 2.7 and
 3.1.

Binary compatibility with previous versions.
Moreover, 2.0 will likely contain a refactored core.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] numpy 1.5 or 2.0

2010-07-18 Thread Nadav Horesh

What is the difference between these two versions? I usually check out the svn 
version (now 2.0) and it compiles well with python 2.6, 2.7 and 3.1.


   Nadav.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Possible bug: uint64 + int gives float64

2010-06-13 Thread Nadav Horesh
A Python int can be larger than numpy.int64, therefore it should be coerced to 
float64 (or float96/float128)

  Nadav
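
A minimal illustration of the point (sketch):

import numpy as np

print 2**64 > np.iinfo(np.int64).max   # True: a Python int can exceed int64
print type(np.uint64(3) + 1)           # numpy.float64, as reported below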

-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Pearu Peterson
Sent: Sun 13-Jun-10 12:08
To: Discussion of Numerical Python
Subject: [Numpy-discussion] Possible bug: uint64 + int gives float64
 
Hi,
I just noticed some weird behavior in operations with uint64 and int,
heres an example:

>>> numpy.uint64(3)+1
4.0
>>> type(numpy.uint64(3)+1)
<type 'numpy.float64'>

Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Reading 12bits numbers ?

2010-06-08 Thread Nadav Horesh
You can. If each number occupies 2 bytes (16 bits) it is straightforward. If 
it is a continuous 12-bit stream you have to unpack it yourself:
data = np.fromstring(str12bits, dtype=np.uint8)
data1 = data.astype(np.uint16)
data1[::3] = data1[::3]*256 + data1[1::3] // 16
data1[1::3] = (data[1::3] & 0x0f)*16 + data[2::3]

If you have an even number of 12-bit values you can continue as follows:

result = np.ravel(data1.reshape(-1,3)[:,:2])

I might have mistakes, but I hope you grasped the idea.
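
For the record, a corrected sketch of the same unpacking, assuming 
big-endian packing (bytes aaaaaaaa aaaabbbb bbbbbbbb per pair of values):

import numpy as np

def unpack12(raw):
    b = np.fromstring(raw, dtype=np.uint8).astype(np.uint16)
    b = b.reshape(-1, 3)                        # 3 bytes -> 2 values
    first = (b[:, 0] << 4) | (b[:, 1] >> 4)
    second = ((b[:, 1] & 0x0F) << 8) | b[:, 2]
    return np.ravel(np.column_stack((first, second)))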

  Nadav



-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Martin Raspaud
Sent: Tue 08-Jun-10 15:04
To: Discussion of Numerical Python
Subject: [Numpy-discussion] Reading 12bits numbers ?
 
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi,

Is it possible to read an array of 12bit encoded numbers from file (or string)
using numpy ?



Thanks,
Martin
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (GNU/Linux)
Comment: Using GnuPG with Fedora - http://enigmail.mozdev.org/

iQEcBAEBAgAGBQJMDjG6AAoJEBdvyODiyJI4ksQH/01OMIm59V3XDpcWv6oYTSBw
zFZ/Q7mtyvHhTC9LQAgBWsIrdVze2qZP8Azsv73VjHx8QggTI8Z++U7v1HuHNyhs
CAT7DsSLYKcNC4sZ2tCkMNfTQZ8Xm0hTxObylr+V98LcPO+CSjRyERZSA0S3+X6A
xPZlRKLNErIGqMWiyr25r7wjuYPTK8iICqYdzZI33w7eZPcMtvP40GNDUaG7aOno
mcMwSzPHnKHCuPlfj3p2rCkDs5OEhmEP9fobVIhR0Y7LxusrewPuTlwL1M+e/tqe
Uf0Drjymo9i3d0VqCKAKBwd9d0kJPzVCbbwQnynu87cOj9CjwhiZ4lFufc+S+m4=
=CrwA
-END PGP SIGNATURE-

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Can not compile numpy with python2.7 onllinux

2010-05-24 Thread Nadav Horesh

That's it, you just have to add the missing #endif after the
   m = Py_InitModule line.


 Thank you,

   Nadav.

-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Charles R Harris
Sent: Sun 23-May-10 20:48
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Can not compile numpy with python2.7 onllinux
 
On Sun, May 23, 2010 at 1:40 AM, Nadav Horesh nad...@visionsense.comwrote:


 I think that line 3405 in _capi.c (svn 8386)
 should be:

 #if PY_VERSION_HEX >= 0x0301


 (At least it looks reasonable considering line 3375, and it solves my
 problem)


Does the following work? PyCObject is deprecated in 2.7.

#if PY_VERSION_HEX >= 0x0301
m = PyModule_Create(&moduledef);
#else
m = Py_InitModule("_capi", _libnumarrayMethods);

#if PY_VERSION_HEX >= 0x0207
c_api_object = PyCapsule_New((void *)libnumarray_API, NULL, NULL);
if (c_api_object == NULL) {
PyErr_Clear();
}
#else
c_api_object = PyCObject_FromVoidPtr((void *)libnumarray_API, NULL);
#endif

Chuck

  Nadav
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] calling C function from Python via f2py

2010-05-24 Thread Nadav Horesh

Sorry, I cannot figure it out. If you don't get an answer on this list, maybe 
you should ask on the swig list. Personally I use cython for this purpose.

   Nadav

-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Matt Fearon
Sent: Mon 24-May-10 15:43
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] calling C function from Python via f2py
 
Nadav,

Thank you. I believe it is working now, as the pos(2) error is gone.
However, though the error is gone, my return variable from the C
function is not being updated, as if the C code is not executing.
The syntax to call the C function from Python is the following:

FFMCcalc.FFMCcalc(T,H,W,ro,Fo)

Should this execute the C code?

thanks,
Matt


On Sun, May 23, 2010 at 1:44 AM, Nadav Horesh nad...@visionsense.com wrote:

 in test.py change to

 print FFMCcalc.FFMCcalc(T,H,W,ro,Fo)

 As implied from the line

 print FFMCcalc.FFMCcalc.__doc__

  Nadav

 -Original Message-
 From: numpy-discussion-boun...@scipy.org on behalf of Matt Fearon
 Sent: Fri 21-May-10 21:55
 To: numpy-discussion@scipy.org
 Subject: [Numpy-discussion] calling C function from Python via f2py

 Hello,

 I am trying to use f2py to generate a wrapped C function that I can
 call from Python (passing arguments to and from). I have this almost
 working, but I receive trouble with exp and pow related to C and
 some pos (2) error with one of my passed variables. My f2py syntax
 is:

 f2py -c -lm FFMCcalc.pyf FFMCcalc.c

 Also, my 3 scripts are short and attached.

 1. FFMCcalc.c, C function
 2. FFMCcalc.pyf, wrapper file
 3. test.py, short python code that calls C function

 Any advice would greatly appreciated to get this working.
 thanks,
 Matt


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] loadtxt raises an exception on empty file

2010-05-24 Thread Nadav Horesh
You can just catch the exception and decide what to do with it:

try:
    data = np.loadtxt('foo.txt')
except IOError:
    data = 0  # Or something similar

  Nadav

-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Maria Liukis
Sent: Tue 25-May-10 01:14
To: numpy-discussion@scipy.org
Subject: [Numpy-discussion] loadtxt raises an exception on empty file
 
Hello everybody,

I'm using numpy V1.3.0 and ran into a case when numpy.loadtxt('foo.txt') raised 
an exception:

>>> import numpy as np
>>> np.loadtxt('foo.txt')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/lib/io.py", line 456, in loadtxt
    raise IOError('End-of-file reached before encountering data.')
IOError: End-of-file reached before encountering data.


if provided file 'foo.txt' is empty.

Would anybody happen to know if it's a feature or a bug? I would expect it to 
return an empty array. 

numpy.fromfile() handles empty text files:

>>> np.fromfile('foo.txt', sep='\t\n ')
array([], dtype=float64)


Would anybody suggest a graceful way of handling empty files with 
numpy.loadtxt() (except for catching an IOError exception)?

Many thanks,
Masha

liu...@usc.edu



___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] calling C function from Python via f2py

2010-05-23 Thread Nadav Horesh

in test.py change to

print FFMCcalc.FFMCcalc(T,H,W,ro,Fo)

As implied from the line

print FFMCcalc.FFMCcalc.__doc__

 Nadav

-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Matt Fearon
Sent: Fri 21-May-10 21:55
To: numpy-discussion@scipy.org
Subject: [Numpy-discussion] calling C function from Python via f2py
 
Hello,

I am trying to use f2py to generate a wrapped C function that I can
call from Python (passing arguments to and from). I have this almost
working, but I receive trouble with exp and pow related to C and
some pos (2) error with one of my passed variables. My f2py syntax
is:

f2py -c -lm FFMCcalc.pyf FFMCcalc.c

Also, my 3 scripts are short and attached.

1. FFMCcalc.c, C function
2. FFMCcalc.pyf, wrapper file
3. test.py, short python code that calls C function

Any advice would greatly appreciated to get this working.
thanks,
Matt

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Saving an array on disk to free memory - Pickling

2010-05-17 Thread Nadav Horesh

Is a memory mapped file is a viable solution to your problem?

   Nadav

-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Jean-Baptiste Rudant
Sent: Mon 17-May-10 14:03
To: Numpy Discussion
Subject: [Numpy-discussion] Saving an array on disk to free memory - Pickling
 
Hello,

I tried to create an object:
- which behaves just like a numpy array;
- which can be saved on disk in an efficient way (numpy.save in my example but 
with pytables in my real program);
- which can be unloaded (if it is saved) to free memory: it can exist as an 
empty shell which knows how to retrieve the real values; it will be loaded only 
when we need to work with it;
- which unloads itself before being pickled (values are already saved and don't 
have to be pickled).

It can't, at least I think so, inherit from ndarray because sometimes (for 
example just after being unpickled and before being used) it is just an empty 
shell.
I don't think memmap can be helpful (I want to use pytables to save it on disk 
and I want it to be flexible: if I use it in a temporary way, I just need it 
in memory and I will never save it on disk).

My problems are:
- this code is ugly;
- I have to define explicitly all the special methods (__add__, __mul__...) of 
ndarrays because:
 * __getattr__ doesn't retrieve them;
 * even if it did, I have to define explicitly the type of the return value 
(if I understand well, when it inherits from ndarray, __array_wrap__ does all 
the stuff).

Thank you for the help.

Regards.

import numpy

class PersistentArray(object):
    def __init__(self, values):
        '''
        values is a numpy array
        '''
        self.values = values
        self.filename = None
        self.is_loaded = True
        self.is_saved = False

    def save(self, filename):
        self.filename = filename
        numpy.save(self.filename, self.values)
        self.is_saved = True

    def load(self):
        self.values = numpy.load(self.filename)
        self.is_loaded = True

    def unload(self):
        if not self.is_saved:
            raise Exception, "PersistentArray must be saved before being unloaded"
        del self.values
        self.is_loaded = False

    def __getitem__(self, index):
        return self.values[index]

    def __getattr__(self, key):
        if key == 'values':
            if not self.is_loaded:
                self.load()
            return self.values
        elif key == '__array_interface__':
            # I can't remember why I wrote this code, but I think it's
            # necessary to make pickling work properly
            raise AttributeError, key
        else:
            try:
                # to emulate ndarray inheritance
                return self.values.__getattribute__(key)
            except AttributeError:
                raise AttributeError, key

    def __setstate__(self, dict):
        self.__dict__.update(dict)
        if self.is_loaded and self.is_saved:
            self.load()

    def __getstate__(self):
        if not self.is_saved:
            raise Exception, "persistent array must be saved before being pickled"
        odict = self.__dict__.copy()
        if self.is_saved:
            if self.is_loaded:
                odict['is_loaded'] = False
                del odict['values']
        return odict

filename = 'persistent_test.npy'

a = PersistentArray(numpy.arange(10e6))
a.save(filename)
a.sum()
a.unload() # a still exists, knows how to retrieve values if needed, but doesn't
           # use space in memory


  

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] savetxt not working with python3.1

2010-05-13 Thread Nadav Horesh

In module npyio.py, lines 794 and 796, 'file' should be replaced by '_file'

 Nadav
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Need faster equivalent to digitize

2010-04-15 Thread Nadav Horesh

import numpy as N
N.repeat(N.arange(len(a)), a)

  Nadav
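
For reference, a quick check that this produces the requested index array:

>>> import numpy as N
>>> a = N.array((4, 3, 3))
>>> N.repeat(N.arange(len(a)), a)
array([0, 0, 0, 0, 1, 1, 1, 2, 2, 2])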

-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Peter Shinners
Sent: Thu 15-Apr-10 08:30
To: Discussion of Numerical Python
Subject: [Numpy-discussion] Need faster equivalent to digitize
 
I am using digitize to create a list of indices. This is giving me 
exactly what I want, but it's terribly slow. Digitize is obviously not 
the tool I want for this case, but what numpy alternative do I have?

I have an array like np.array((4, 3, 3)). I need to create an index 
array with each index repeated by the its value: np.array((0, 0, 0, 0, 
1, 1, 1, 2, 2, 2)).

>>> a = np.array((4, 3, 3))
>>> b = np.arange(np.sum(a))
>>> c = np.digitize(b, a)
>>> print c
[0 0 0 0 1 1 1 2 2 2]

On an array where a.size==65536 and sum(a)==65536 this is taking over 6 
seconds to compute. As a comparison, using a Python list solution runs 
in 0.08 seconds. That is plenty fast, but I would guess there is a 
faster Numpy solution that does not require a dynamically growing 
container of PyObjects ?

>>> a = np.array((4, 3, 3))
>>> c = []
>>> for i, v in enumerate(a):
... c.extend([i] * v)


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] look for value, depending to y position

2010-04-14 Thread Nadav Horesh
I assume that you forgot to specify the range between 300 and 400. But anyway 
this piece of code may give you a direction:

--
import numpy as np

ythreshold = np.repeat(np.arange(4,-1,-1), 100) * 20 + 190
bin_image = image > ythreshold[:,None]
--

Anyway I advise you to look at the image morphology operations in scipy.ndimage

  Nadav


-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of ioannis syntychakis
Sent: Wed 14-Apr-10 10:11
To: Discussion of Numerical Python
Subject: [Numpy-discussion] look for value, depending to y position
 
Hallo everybody

maybe somebody can help with the following:

I'm using numpy and PIL to find objects in a grayscale image. I make an
array of the image and then I look for pixels with a value above 230.
Then I convert the array to an image and I see my objects.

What I want is to make the grayscale threshold depend on the place in the image.

The image is 500 by 500 pixels,

and for example I want the pixel value the program is looking for to
decrease in the y direction:

on position y = 0 to 100 the program is looking for pixel values above
250
 on position y = 100 to 200 the program is looking for pixel values above
230
 on position y = 200 to 300 the program is looking for pixel values above
210
 on position y = 400 to 500 the program is looking for pixel values above
190

Is this possible?

thanks in advance.

greetings, Jannis

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Name of the file associated with a memmap

2010-04-12 Thread Nadav Horesh

Is there a way to get the file-name given a memmap array object?


   Nadav
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] rc2 for NumPy 1.4.1 and Scipy 0.7.2

2010-04-12 Thread Nadav Horesh

Tried to install numpy-1.4.1-rc2 on python-2.7b1 and got an error:

(64 bit linux on core2, gcc4.4.3)


compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core 
-Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath 
-Inumpy/core/include -I/usr/local/include/python2.7 -c'
gcc: _configtest.c
_configtest.c:1: warning: conflicting types for built-in function ‘exp’
gcc -pthread _configtest.o -o _configtest
_configtest.o: In function `main':
/dev/shm/numpy-1.4.1rc2/_configtest.c:6: undefined reference to `exp'
collect2: ld returned 1 exit status
_configtest.o: In function `main':
/dev/shm/numpy-1.4.1rc2/_configtest.c:6: undefined reference to `exp'
collect2: ld returned 1 exit status
Traceback (most recent call last):
  File "setup.py", line 187, in <module>
    setup_package()
  File "setup.py", line 180, in setup_package
    configuration=configuration )
  File "/dev/shm/numpy-1.4.1rc2/numpy/distutils/core.py", line 186, in setup
    return old_setup(**new_attr)
  File "/usr/local/lib/python2.7/distutils/core.py", line 152, in setup
    dist.run_commands()
  File "/usr/local/lib/python2.7/distutils/dist.py", line 953, in run_commands
    self.run_command(cmd)
  File "/usr/local/lib/python2.7/distutils/dist.py", line 972, in run_command
    cmd_obj.run()
  File "/dev/shm/numpy-1.4.1rc2/numpy/distutils/command/build.py", line 37, in run
    old_build.run(self)
  File "/usr/local/lib/python2.7/distutils/command/build.py", line 127, in run
    self.run_command(cmd_name)
  File "/usr/local/lib/python2.7/distutils/cmd.py", line 326, in run_command
    self.distribution.run_command(command)
  File "/usr/local/lib/python2.7/distutils/dist.py", line 972, in run_command
    cmd_obj.run()
  File "/dev/shm/numpy-1.4.1rc2/numpy/distutils/command/build_src.py", line 152, in run
    self.build_sources()
  File "/dev/shm/numpy-1.4.1rc2/numpy/distutils/command/build_src.py", line 163, in build_sources
    self.build_library_sources(*libname_info)
  File "/dev/shm/numpy-1.4.1rc2/numpy/distutils/command/build_src.py", line 298, in build_library_sources
    sources = self.generate_sources(sources, (lib_name, build_info))
  File "/dev/shm/numpy-1.4.1rc2/numpy/distutils/command/build_src.py", line 385, in generate_sources
    source = func(extension, build_dir)
  File "numpy/core/setup.py", line 658, in get_mathlib_info
    mlibs = check_mathlib(config_cmd)
  File "numpy/core/setup.py", line 328, in check_mathlib
    if config_cmd.check_func("exp", libraries=libs, decl=True, call=True):
  File "/dev/shm/numpy-1.4.1rc2/numpy/distutils/command/config.py", line 310, in check_func
    libraries, library_dirs)
  File "/usr/local/lib/python2.7/distutils/command/config.py", line 251, in try_link
    libraries, library_dirs, lang)
  File "/dev/shm/numpy-1.4.1rc2/numpy/distutils/command/config.py", line 146, in _link
    libraries, library_dirs, lang))
  File "/dev/shm/numpy-1.4.1rc2/numpy/distutils/command/config.py", line 87, in _wrap_method
    ret = mth(*((self,)+args))
  File "/usr/local/lib/python2.7/distutils/command/config.py", line 148, in _link
    target_lang=lang)
  File "/usr/local/lib/python2.7/distutils/ccompiler.py", line 750, in link_executable
    debug, extra_preargs, extra_postargs, None, target_lang)
  File "/usr/local/lib/python2.7/distutils/unixccompiler.py", line 256, in link
    self.spawn(linker + ld_args)
  File "/dev/shm/numpy-1.4.1rc2/numpy/distutils/ccompiler.py", line 64, in CCompiler_spawn
    raise DistutilsExecError,\


  Nadav
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Annoyance of memap rraywithmultiprocessing.Pool.applay_async

2010-04-05 Thread Nadav Horesh
Is there a way to use memory mapped files as if they were shared memory? I made 
an application in which some (very often non-contiguous) parts of a memmap 
array are processed by different processors. However I might use a shared memory 
array instead. I wonder, since both types share common properties, whether there 
is a way to interchange them transparently.


  Nadav

-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Robert Kern
Sent: Sun 04-Apr-10 18:45
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Annoyance of memap 
rraywithmultiprocessing.Pool.applay_async
 
On Sat, Apr 3, 2010 at 22:35, Nadav Horesh nad...@visionsense.com wrote:
 Got it, thank you.
 But why, nevertheless, the results are correct although the pickling is 
 impossible?

Rather, I meant that they don't pickle correctly. They use ndarray's
pickling, which will copy the data, and then reconstruct an ndarray on
the other side and just change the type to memmap without actually
memory-mapping the file. Thus you have a __del__ method referring to
attributes that haven't been set up.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Annoyance of memap rray with multiprocessing.Pool.applay_async

2010-04-03 Thread Nadav Horesh

The following script generate the following error on every loop iteration in 
the function average:

Exception AttributeError: AttributeError("'NoneType' object has no attribute 
'tell'",) in <bound method memmap.__del__ of memmap(xx)> ignored

where xx is a scalar (the array sum).

I get this error with numpy1.4 on a linux64 (dual core) machine. A 
winXP/Pentium4 with 2GB Ram could not run it since it explode the memory.

 Any idea what is the origin of the error (my interest is the linux box)?

BTW if in the function average the 2nd line is commented and the 3rd line is 
uncommented I get no error on linux, but the win32 problem pertains.


import numpy as N
import multiprocessing as MP
import sys

try:
count = int(sys.argv[1])
except:
count = 4
filename = '%dx100x100_int32.dat' % count

def average(cube):
    return [plane.mean() for plane in cube]
#   return [N.asarray(plane).mean() for plane in cube]


data = N.memmap(filename, dtype=N.int32, shape=(count,100,100))

pool = MP.Pool(processes=1)

job = pool.apply_async(average, [data,])
print job.get()


   Nadav
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Annoyance of memap rray withmultiprocessing.Pool.applay_async

2010-04-03 Thread Nadav Horesh
Got it, thank you.
But why, nevertheless, are the results correct although the pickling is 
impossible?

   Nadav.


-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Robert Kern
Sent: Sat 03-Apr-10 23:47
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Annoyance of memap rray 
withmultiprocessing.Pool.applay_async
 
On Sat, Apr 3, 2010 at 14:29, Nadav Horesh nad...@visionsense.com wrote:

 The following script generate the following error on every loop iteration in 
 the function average:

 Exception AttributeError: AttributeError('NoneType' object has no attribute 
 'tell',) in bound method memmap.__del__ of memmap(xx) ignored

 where xx is a scalar (the array sum).

 I get this error with numpy1.4 on a linux64 (dual core) machine. A 
 winXP/Pentium4 with 2GB Ram could not run it since it explode the memory.

  Any idea what is the origin on the error (my interset is the linux box)?


memmap instances don't pickle. Don't pass them as arguments to
apply_async() or related functions. Instead, pass the filename and
other arguments to memmap() and reconstruct the arrays in each
process.
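
A sketch of that reconstruct-in-the-worker pattern, reusing the file name 
and shape from the script above:

import numpy as N
import multiprocessing as MP

def average(args):
    # rebuild the memmap inside the worker instead of pickling it
    filename, dtype, shape = args
    data = N.memmap(filename, dtype=dtype, shape=shape)
    return [plane.mean() for plane in data]

if __name__ == '__main__':
    pool = MP.Pool(processes=1)
    job = pool.apply_async(average,
                           [('4x100x100_int32.dat', N.int32, (4, 100, 100))])
    print job.get()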

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is this odd?

2010-04-02 Thread Nadav Horesh
In python, empty sequences are always equivalent to False and non-empty ones to 
True. You can use this property, or:

if len(b) > 0:
    ...

  Nadav 
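
A quick illustration of the difference between truth value and emptiness:

>>> import numpy as np
>>> b = np.nonzero(np.arange(10) < 1)
>>> b[0]                 # non-empty, but its single element is 0
array([0])
>>> bool(b[0])           # truth of a 1-element array is its element's truth
False
>>> len(b[0]) > 0        # emptiness test
True
>>> b[0].size            # another emptiness test
1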


-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Shailendra
Sent: Fri 02-Apr-10 06:07
To: numpy-discussion@scipy.org
Subject: [Numpy-discussion] Is this odd?
 
Hi All,
Below is some array behaviour which i think is odd
>>> a=arange(10)
>>> a
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> b=nonzero(a<0)
>>> b
(array([], dtype=int32),)
>>> if not b[0]:
... print 'b[0] is false'
...
b[0] is false

In the above case b[0] is empty, so it is fine that it is considered false.

>>> b=nonzero(a<1)
>>> b
(array([0]),)
>>> if not b[0]:
... print 'b[0] is false'
...
b[0] is false

In the above case b[0] is a non-empty array. Why should this be considered false?

>>> b=nonzero(a>8)
>>> b
(array([9]),)
>>> if not b[0]:
... print 'b[0] is false'
...

In the above case b[0] is non-empty and should be considered true, which it is.

I don't understand why a non-empty array should not be considered true
irrespective of what value it has.
Also, please suggest the best way to differentiate between an empty
array and a non-empty array (irrespective of what is inside the array).

Thanks,
Shailendra
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Applying formula to all in an array which hasvalue from previous

2010-03-29 Thread Nadav Horesh
The general guideline:

Suppose the function definition is:

def func(x, y):
    # x and y are scalars
    bla bla bla ...
    return z  # a scalar

So,

import numpy as np

vecfun = np.vectorize(func)

vecfun.ufunc.accumulate(np.array((0,1,2,3,4,5,6,7,8,9)))


   Nadav.
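
If vecfun.ufunc turns out not to be populated, np.frompyfunc gives the same 
accumulate directly; a sketch with a toy scalar function (illustrative, not 
Vishal's actual formula):

import numpy as np

def func(x, y):
    return (x + y) / 2.0     # toy recurrence standing in for the real one

ufunc = np.frompyfunc(func, 2, 1)
# dtype=object is needed because frompyfunc builds an object ufunc
print ufunc.accumulate(np.arange(10.0), dtype=object)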


-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Vishal Rana
Sent: Sun 28-Mar-10 21:19
To: Discussion of Numerical Python
Subject: [Numpy-discussion] Applying formula to all in an array which hasvalue 
from previous
 
Hi,

For a numpy array:

array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])

I do some calculation with 0, 1... and get a value = 2.5; now I use this value
to repeat the same calculation with the next element, for example...
2.5, 2 and get a value = 3.1
3.1, 3 and get a value = 4.2
4.2, 4 and get a value = 5.1

 and get a value = 8.5
8.5, 9 and get a value = 9.8

So I should be getting a new array like array([0, 2.5, 3.1, 4.2, 5.1, .
8.5,9.8])

Is it where numpy or scipy can help?

Thanks
Vishal

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] StringIO test failure with Python3.1.2

2010-03-24 Thread Nadav Horesh
Any idea why

  from .io import StringIO

and not

  from io import StringIO

???

(Why is the extra . before io)

  Nadav


-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Bruce Southey
Sent: Wed 24-Mar-10 16:17
To: Discussion of Numerical Python
Subject: [Numpy-discussion] StringIO test failure with Python3.1.2
 
Hi,
Wow, this is really impressive!
I installed the svn numpy version '2.0.0.dev8300' with the latest Python 
3.1.2 and it works!

All the tests pass except:
test_utils.test_lookfor

I am guessing that it is this line as the other io imports do not have 
the period.
from .io import StringIO

==
ERROR: test_utils.test_lookfor
--
Traceback (most recent call last):
  File "/usr/local/lib/python3.1/site-packages/nose/case.py", line 177, in runTest
    self.test(*self.arg)
  File "/usr/local/lib/python3.1/site-packages/numpy/lib/tests/test_utils.py", line 10, in test_lookfor
    import_modules=False)
  File "/usr/local/lib/python3.1/site-packages/numpy/lib/utils.py", line 751, in lookfor
    cache = _lookfor_generate_cache(module, import_modules, regenerate)
  File "/usr/local/lib/python3.1/site-packages/numpy/lib/utils.py", line 852, in _lookfor_generate_cache
    from .io import StringIO
ImportError: cannot import name StringIO

--
Ran 2898 tests in 24.646s

FAILED (KNOWNFAIL=5, errors=1)
<nose.result.TextTestResult run=2898 errors=1 failures=0>

Bruce
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] multiprocessing shared arrays and numpy

2010-03-11 Thread Nadav Horesh
Here is a strange thing I am getting with multiprocessing and a memory mapped 
array:

The script below generates the following error message 30 times (once for every slice access):

Exception AttributeError: AttributeError("'NoneType' object has no attribute 
'tell'",) in <bound method memmap.__del__ of memmap(2949995000.0)> ignored


Although I get the correct answer eventually.
--
import numpy as N
import multiprocessing as MP

def average(cube):
return [plane.mean() for plane in cube]

N.arange(30*100*100, dtype=N.int32).tofile(open('30x100x100_int32.dat','w'))

data = N.memmap('30x100x100_int32.dat', dtype=N.int32, shape=(30,100,100))

pool = MP.Pool(processes=1)

job = pool.apply_async(average, [data,])
print job.get()

--

I use python 2.6.4 and numpy 1.4.0 on 64 bit linux (amd64)

  Nadav


-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Gael Varoquaux
Sent: Thu 11-Mar-10 11:36
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] multiprocessing shared arrays and numpy
 
On Thu, Mar 11, 2010 at 10:04:36AM +0100, Francesc Alted wrote:
 As far as I know, memmap files (or better, the underlying OS) *use* all 
 available RAM for loading data until RAM is exhausted and then start to use 
 SWAP, so the memory pressure is still there.  But I may be wrong...

I believe that your above assertion is 'half' right. First I think that
it is not SWAP that the memmapped file uses, but the original disk space,
thus you avoid running out of SWAP. Second, if you open the same data
several times without memmapping, I believe that it will be duplicated in
memory. On the other hand, when you memmap, it is not duplicated, thus
if you are running several processing jobs on the same data, you save
memory. I am very much in this case.

Gaël
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] multiprocessing shared arrays and numpy

2010-03-06 Thread Nadav Horesh
I did some optimization, and the results are very instructive, although not 
surprising:
As I wrote before, I processed stereoscopic movie recordings by making each a 
memory mapped file and processing it in several steps. This way I produced an 
extra GB of transient data. Running as one process took 45 seconds, and as dual 
parallel processes ~40 seconds.

After rewriting the application to process the recording frame by frame, the 
code became shorter and the new scores are: one process --- 16 seconds, and 
dual process --- 9 seconds.

What I learned:
 * Design for multi-processing from the start, not as an afterthought.
 * Shared memory works, but at the expense of code elegance (much like common
blocks in Fortran).
 * Memory-mapped files can be used much like shared memory. The strange thing
is that I got an ignored AttributeError on every frame access to the
memory-mapped file from the child process.
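A minimal sketch of the frame-by-frame restructuring, assuming a two-channel
recording laid out frame after frame in one raw file (the file name, dtype,
and shapes are illustrative):
--
import numpy as N
import multiprocessing as MP

FILENAME = 'recording.dat'     # illustrative raw file of uint16 frames
FRAME_SHAPE = (2, 480, 640)    # two channels per frame (assumed layout)
NFRAMES = 1000
FRAME_BYTES = 2 * 480 * 640 * 2    # elements per frame times 2 bytes each

def process_frame(i):
    # Map only the i-th frame via the offset argument, so each worker
    # touches a small window of the file instead of the whole recording.
    frame = N.memmap(FILENAME, dtype=N.uint16, mode='r',
                     shape=FRAME_SHAPE, offset=i * FRAME_BYTES)
    left, right = frame[0], frame[1]
    return left.mean() - right.mean()   # placeholder per-frame computation

pool = MP.Pool(processes=2)
results = pool.map(process_frame, range(NFRAMES))
--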

Nadav

-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Brian Granger
Sent: Fri 05-Mar-10 21:29
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] multiprocessing shared arrays and numpy
 
Francesc,

 Yeah, 10% of improvement by using multi-cores is an expected figure for
 memory-bound problems.  This is something people must know: if their
 computations are memory bound (and this is much more common than one may
 initially think), then they should not expect significant speed-ups on
 their parallel codes.


+1

Thanks for emphasizing this.  This is definitely a big issue with multicore.

Cheers,

Brian



 Thanks for sharing your experience anyway,
 Francesc

  A Thursday 04 March 2010 18:54:09 Nadav Horesh escrigué:
   I cannot give a reliable answer yet, since I have some more improvements
   to make. The application is an analysis of a stereoscopic-movie raw-data
   recording (both channels are recorded in the same file). I treat the data
   as a huge memory-mapped file. The idea was to process each channel (left
   and right) on a different core. Right now the application is I/O bound
   since I do classical numpy operations, so each channel (which is handled
   as one array) is scanned several times. The improvement now over a single
   process is 10%, but I hope to achieve 10% more after trivial
   optimizations.

   I used this application as an excuse to dive into multi-processing. I
   hope that the code I posted here would help someone.

    Nadav.
 
 
  -Original Message-
  From: numpy-discussion-boun...@scipy.org on behalf of Francesc Alted
  Sent: Thu 04-Mar-10 15:12
  To: Discussion of Numerical Python
  Subject: Re: [Numpy-discussion] multiprocessing shared arrays and numpy
 
  What kind of calculations are you doing with this module?  Can you please
   send some examples and the speed-ups you are getting?
 
  Thanks,
  Francesc
 
  A Thursday 04 March 2010 14:06:34 Nadav Horesh escrigué:
   Extended module that I used for some useful work.
   Comments:
     1. Sturla's module is better designed, but did not work with very
        large (although sub-GB) arrays.
     2. Tested on 64-bit linux (amd64) + python-2.6.4 + numpy-1.4.0.

    Nadav.
  
  
   -Original Message-
   From: numpy-discussion-boun...@scipy.org on behalf of Nadav Horesh
   Sent: Thu 04-Mar-10 11:55
   To: Discussion of Numerical Python
   Subject: RE: [Numpy-discussion] multiprocessing shared arrays and numpy
  
   Maybe the attached file can help. Adapted and tested on amd64 Linux.
  
 Nadav
  
  
   -Original Message-
   From: numpy-discussion-boun...@scipy.org on behalf of Nadav Horesh
   Sent: Thu 04-Mar-10 10:54
   To: Discussion of Numerical Python
   Subject: Re: [Numpy-discussion] multiprocessing shared arrays and numpy
  
   There is work by Sturla Molden: look for multiprocessing-tutorial.pdf
   and sharedmem-feb13-2009.zip. The tutorial includes what was dropped
   from the cookbook page. I am looking into the same issue and going to
   test it today.
  
 Nadav
  
   On Wed, 2010-03-03 at 15:31 +0100, Jesper Larsen wrote:
Hi people,
   
I was wondering about the status of using the standard library
multiprocessing module with numpy. I found a cookbook example last
updated one year ago which states that:
   
This page was obsolete as multiprocessing's internals have changed.
More information will come shortly; a link to this page will then be
added back to the Cookbook.
   
http://www.scipy.org/Cookbook/multiprocessing
   
    I also found the code that used to be on this page in the cookbook but
    it does not work any more. So my question is:
   
Is it possible to use numpy arrays as shared arrays in an application
using multiprocessing and how do you do it?
   
Best regards,
Jesper
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion

Re: [Numpy-discussion] multiprocessing shared arrays and numpy

2010-03-04 Thread Nadav Horesh
Maybe the attached file can help. Adapted and tested on amd64 Linux.

  Nadav


-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Nadav Horesh
Sent: Thu 04-Mar-10 10:54
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] multiprocessing shared arrays and numpy
 
There is work by Sturla Molden: look for multiprocessing-tutorial.pdf
and sharedmem-feb13-2009.zip. The tutorial includes what was dropped from
the cookbook page. I am looking into the same issue and going to test it
today.

  Nadav


On Wed, 2010-03-03 at 15:31 +0100, Jesper Larsen wrote:
 Hi people,
 
 I was wondering about the status of using the standard library
 multiprocessing module with numpy. I found a cookbook example last
 updated one year ago which states that:
 
 This page was obsolete as multiprocessing's internals have changed.
 More information will come shortly; a link to this page will then be
 added back to the Cookbook.
 
 http://www.scipy.org/Cookbook/multiprocessing
 
 I also found the code that used to be on this page in the cookbook but
 it does not work any more. So my question is:
 
 Is it possible to use numpy arrays as shared arrays in an application
 using multiprocessing and how do you do it?
 
 Best regards,
 Jesper
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion




Re: [Numpy-discussion] multiprocessing shared arrays and numpy

2010-03-04 Thread Nadav Horesh
Extended module that I used for some useful work.
Comments:
  1. Sturla's module is better designed, but did not work with very large 
(although sub GB) arrays
  2. Tested on 64 bit linux (amd64) + python-2.6.4 + numpy-1.4.0

  Nadav.


-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Nadav Horesh
Sent: Thu 04-Mar-10 11:55
To: Discussion of Numerical Python
Subject: RE: [Numpy-discussion] multiprocessing shared arrays and numpy
 
Maybe the attached file can help. Adapted and tested on amd64 Linux.

  Nadav


-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Nadav Horesh
Sent: Thu 04-Mar-10 10:54
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] multiprocessing shared arrays and numpy
 
There is work by Sturla Molden: look for multiprocessing-tutorial.pdf
and sharedmem-feb13-2009.zip. The tutorial includes what was dropped from
the cookbook page. I am looking into the same issue and going to test it
today.

  Nadav


On Wed, 2010-03-03 at 15:31 +0100, Jesper Larsen wrote:
 Hi people,
 
 I was wondering about the status of using the standard library
 multiprocessing module with numpy. I found a cookbook example last
 updated one year ago which states that:
 
 This page was obsolete as multiprocessing's internals have changed.
 More information will come shortly; a link to this page will then be
 added back to the Cookbook.
 
 http://www.scipy.org/Cookbook/multiprocessing
 
 I also found the code that used to be on this page in the cookbook but
 it does not work any more. So my question is:
 
 Is it possible to use numpy arrays as shared arrays in an application
 using multiprocessing and how do you do it?
 
 Best regards,
 Jesper
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion






Re: [Numpy-discussion] multiprocessing shared arrays and numpy

2010-03-04 Thread Nadav Horesh
I cannot give a reliable answer yet, since I have some more improvements to
make.
The application is an analysis of a stereoscopic-movie raw-data recording
(both channels are recorded in the same file). I treat the data as a huge
memory-mapped file. The idea was to process each channel (left and right) on
a different core (see the sketch below). Right now the application is I/O
bound since I do classical numpy operations, so each channel (which is
handled as one array) is scanned several times. The improvement now over a
single process is 10%, but I hope to achieve 10% more after trivial
optimizations.

 I used this application as an excuse to dive into multi-processing. I hope 
that the code I posted here would help someone.
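A minimal sketch of the channel-per-core idea, assuming a two-channel
recording in one raw file (the file name, dtype, and shape are illustrative):
--
import numpy as N
import multiprocessing as MP

FILENAME = 'stereo_recording.dat'   # illustrative two-channel raw file
SHAPE = (2, 1000, 480, 640)         # channel, frame, rows, cols (assumed)

def channel_mean(channel):
    # Each of the two workers maps the file read-only and scans one channel.
    data = N.memmap(FILENAME, dtype=N.uint16, mode='r', shape=SHAPE)
    return data[channel].mean()

pool = MP.Pool(processes=2)
print(pool.map(channel_mean, [0, 1]))   # left and right, one per core
--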

  Nadav.


-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Francesc Alted
Sent: Thu 04-Mar-10 15:12
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] multiprocessing shared arrays and numpy
 
What kind of calculations are you doing with this module?  Can you please send 
some examples and the speed-ups you are getting?

Thanks,
Francesc

A Thursday 04 March 2010 14:06:34 Nadav Horesh escrigué:
 Extended module that I used for some useful work.
 Comments:
   1. Sturla's module is better designed, but did not work with very large
      (although sub-GB) arrays.
   2. Tested on 64-bit linux (amd64) + python-2.6.4 + numpy-1.4.0.
 
   Nadav.
 
 
 -Original Message-
 From: numpy-discussion-boun...@scipy.org on behalf of Nadav Horesh
 Sent: Thu 04-Mar-10 11:55
 To: Discussion of Numerical Python
 Subject: RE: [Numpy-discussion] multiprocessing shared arrays and numpy
 
 Maybe the attached file can help. Adapted and tested on amd64 Linux.
 
   Nadav
 
 
 -Original Message-
 From: numpy-discussion-boun...@scipy.org on behalf of Nadav Horesh
 Sent: Thu 04-Mar-10 10:54
 To: Discussion of Numerical Python
 Subject: Re: [Numpy-discussion] multiprocessing shared arrays and numpy
 
 There is work by Sturla Molden: look for multiprocessing-tutorial.pdf
 and sharedmem-feb13-2009.zip. The tutorial includes what was dropped from
 the cookbook page. I am looking into the same issue and going to test it
 today.
 
   Nadav
 
 On Wed, 2010-03-03 at 15:31 +0100, Jesper Larsen wrote:
  Hi people,
 
  I was wondering about the status of using the standard library
  multiprocessing module with numpy. I found a cookbook example last
  updated one year ago which states that:
 
  This page was obsolete as multiprocessing's internals have changed.
  More information will come shortly; a link to this page will then be
  added back to the Cookbook.
 
  http://www.scipy.org/Cookbook/multiprocessing
 
  I also found the code that used to be on this page in the cookbook but
  it does not work any more. So my question is:
 
  Is it possible to use numpy arrays as shared arrays in an application
  using multiprocessing and how do you do it?
 
  Best regards,
  Jesper
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion
 

-- 
Francesc Alted
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion



Re: [Numpy-discussion] multiprocessing shared arrays and numpy

2010-03-03 Thread Nadav Horesh
There is work by Sturla Molden: look for multiprocessing-tutorial.pdf
and sharedmem-feb13-2009.zip. The tutorial includes what was dropped from
the cookbook page. I am looking into the same issue and going to test it
today.
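The question quoted below can also be answered with the standard library
alone; a minimal sketch, assuming Unix fork semantics so the children
inherit the shared buffer (the sizes and values are illustrative):
--
import numpy as np
import multiprocessing as mp

SIZE = 1000
# RawArray allocates a block of shared memory; 'd' means C double (float64).
shared = mp.RawArray('d', SIZE)

def negate(i):
    # The forked child inherits `shared`; because the buffer lives in
    # shared memory, the write below is visible in the parent process.
    view = np.frombuffer(shared, dtype=np.float64)
    view[i] = -view[i]

if __name__ == '__main__':
    # View the shared buffer as a numpy array -- frombuffer makes no copy.
    arr = np.frombuffer(shared, dtype=np.float64)
    arr[:] = np.arange(SIZE)
    procs = [mp.Process(target=negate, args=(i,)) for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(arr[:4])   # the first four entries are now negated
--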

  Nadav


On Wed, 2010-03-03 at 15:31 +0100, Jesper Larsen wrote:
 Hi people,
 
 I was wondering about the status of using the standard library
 multiprocessing module with numpy. I found a cookbook example last
 updated one year ago which states that:
 
 This page was obsolete as multiprocessing's internals have changed.
 More information will come shortly; a link to this page will then be
 added back to the Cookbook.
 
 http://www.scipy.org/Cookbook/multiprocessing
 
 I also found the code that used to be on this page in the cookbook but
 it does not work any more. So my question is:
 
 Is it possible to use numpy arrays as shared arrays in an application
 using multiprocessing and how do you do it?
 
 Best regards,
 Jesper
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Request for testing

2010-02-21 Thread Nadav Horesh

$ python isinf.py 
Warning: invalid value encountered in isinf
True

machine: gentoo linux on amd64 
python 2.6.4 (64 bit)
gcc 4.3.4
numpy.__version__ == '1.4.0'
glibc 2.10.1

  Nadav


-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Charles R Harris
Sent: Sun 21-Feb-10 12:30
To: numpy-discussion
Subject: [Numpy-discussion] Request for testing
 
Hi All,

I would be much obliged if some folks would run the attached script and
report the output, numpy version, and python version. It just runs
np.isinf(np.inf), which raises an invalid value warning with current
numpy. As far as I can see the function itself hasn't changed since
numpy 1.3, yet numpy 1.3 + python 2.5 gives no such warning.
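The attachment is not preserved in this archive; a minimal script consistent
with the description above (not necessarily the original attachment) would
be:
--
import sys
import numpy as np

print(sys.version)
print(np.__version__)
print(np.isinf(np.inf))   # with numpy 1.4 this also emitted:
                          # Warning: invalid value encountered in isinf
--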

Chuck

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] extracting data from ODF files

2010-01-05 Thread Nadav Horesh
One possibility is to export the data to Excel format and use xlrd or a
similar package to read it.
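A minimal sketch of that route (the file and sheet are hypothetical, and all
cells are assumed to be numeric):
--
import numpy as np
import xlrd

# Hypothetical file: the OO.org Calc sheet saved/exported as .xls.
book = xlrd.open_workbook('data.xls')
sheet = book.sheet_by_index(0)
# Pull every row's cell values into a float array.
data = np.array([sheet.row_values(i) for i in range(sheet.nrows)],
                dtype=np.float64)
--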

  Nadav


-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Manuel Wittchen
Sent: Tue 05-Jan-10 23:14
To: Discussion of Numerical Python
Subject: [Numpy-discussion]  extracting data from ODF files
 
Hi,

is there a (simple) solution to extract data from OpenDocument files
(especially OpenOffice.org Calc files) into a numpy array? At the
moment I copy the columns from OO.org Calc manually into a
tab-separated plain-text file, which is quite annoying.

Regards,
Manuel Wittchen
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion



Re: [Numpy-discussion] Matlab's griddata3 for numpy?

2009-12-23 Thread Nadav Horesh
You probably have to use the generic interpolation functions from the
scipy.interpolate module:
 scipy.interpolate.splprep, scipy.interpolate.splev, etc.

It could be cumbersome, but doable.
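For what it's worth, a minimal sketch of the scattered-3-D interpolation
task itself; note this uses scipy.interpolate.griddata, which only appeared
in scipy 0.9 (after this discussion), rather than the spline route suggested
above, and the data are purely illustrative:
--
import numpy as np
from scipy.interpolate import griddata

# Scattered sample points in 3-D and their values (illustrative data).
points = np.random.rand(1000, 3)
values = np.sin(points[:, 0]) * points[:, 1] + points[:, 2]

# Interpolate at a few query points, as Matlab's griddata3 would.
xi = np.array([[0.5, 0.5, 0.5], [0.2, 0.8, 0.4]])
vi = griddata(points, values, xi, method='linear')
--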

   Nadav


-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of reckoner
Sent: Wed 23-Dec-09 16:12
To: numpy-discussion@scipy.org
Subject: [Numpy-discussion] Matlab's griddata3 for numpy?
 
Hi,

I realize that there is a griddata for numpy via matplotlib, but is
there a griddata3 (same as griddata, but for higher dimensions)?

Any help appreciated.


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] small doc error in numpy.random.randn

2009-12-15 Thread Nadav Horesh

The 2nd line of the doc string

randn([d1, ..., dn])

should be 
randn(d1, ..., dn)
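For illustration (not part of the original note), the dimensions go in as
separate arguments, not as a sequence:
--
import numpy as np

a = np.random.randn(2, 3)      # correct: separate dimension arguments
# np.random.randn([2, 3])     # fails; the bracketed doc form wrongly
                               # suggests a sequence is accepted
--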

 Nadav
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion

