[Numpy-discussion] [ANN] PyOpenCL 0.90 - a Python interface for OpenCL

2009-08-28 Thread Andreas Klöckner
What is it?
-----------

PyOpenCL makes the industry-standard OpenCL compute abstraction available from 
Python.

PyOpenCL has been tested to work with AMD's and Nvidia's OpenCL 
implementations and allows complete access to all features of the standard, 
through a nice, Pythonic interface.

Where can I get it?
-------------------

Homepage: http://mathema.tician.de/software/pyopencl

Download: http://pypi.python.org/pypi/pyopencl
Documentation: http://documen.tician.de/pyopencl
Wiki: http://wiki.tiker.net/PyOpenCL

Main Features
-------------

* Object cleanup tied to lifetime of objects. This idiom, often called RAII in 
C++, makes it much easier to write correct, leak- and crash-free code.

* Completeness. PyOpenCL puts the full power of OpenCL’s API at your disposal, 
if you wish. Every obscure get_info() query and all CL calls are accessible.

* Automatic Error Checking. All errors are automatically translated into 
Python exceptions.

* Speed. PyOpenCL’s base layer is written in C++, so all the niceties above 
are virtually free.

* Helpful, complete Documentation

If that sounds similar to PyOpenCL's sister project PyCUDA [1], that is not 
entirely a coincidence. :)
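
To give a flavor, here is roughly what a first PyOpenCL program looks like. 
(A minimal sketch against today's PyOpenCL API; details such as context 
creation may differ slightly in 0.90.)

8< -------------------------------------------------
import numpy
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

dev = ctx.devices[0]
print(dev.get_info(cl.device_info.NAME))  # every get_info() query is there

a = numpy.random.rand(50000).astype(numpy.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=a)

prg = cl.Program(ctx, """
    __kernel void twice(__global float *a)
    { a[get_global_id(0)] *= 2; }
    """).build()
prg.twice(queue, a.shape, None, a_buf)

result = numpy.empty_like(a)
cl.enqueue_copy(queue, result, a_buf)  # CL errors become Python exceptions
assert numpy.allclose(result, 2*a)
8< -------------------------------------------------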

License
-------

PyOpenCL is open-source under the MIT/X11 license and free for commercial, 
academic, and private use.

Andreas

[1] http://mathema.tician.de/software/pycuda




Re: [Numpy-discussion] Should object arrays have a buffer interface?

2008-12-29 Thread Andreas Klöckner
On Monday 29 December 2008, Robert Kern wrote:
> You could wrap the wrappers in Python and check the dtype. You'd have
> a similar bug if you passed a wrong non-object dtype, too.
> Checking/communicating the dtype is something you always have to do
> when using the 2.x buffer protocol. I'm inclined not to make object a
> special case. When you ask for the raw bytes, you should get the raw
> bytes.

Ok, fair enough.

Andreas




[Numpy-discussion] Should object arrays have a buffer interface?

2008-12-28 Thread Andreas Klöckner
Hi all,

I don't think PyObject pointers should be accessible via the buffer interface. 
I'd throw an error, but maybe a (silenceable) warning would do. Would have 
saved me some bug-hunting.

>>> import numpy
>>> x = numpy.array([55, (33,)], dtype=object)
>>> x
array([55, (33,)], dtype=object)
>>> buffer(x)
<read-only buffer for 0x8496f48, size -1, offset 0 at 0x850b060>
>>> str(buffer(x))
'\xb0\x1c\x17\x08l\x89\xd7\xb7'
>>> numpy.__version__
'1.1.0'

Opinions?

Andreas




Re: [Numpy-discussion] Should object arrays have a buffer interface?

2008-12-28 Thread Andreas Klöckner
On Monday 29 December 2008, Robert Kern wrote:
> On Sun, Dec 28, 2008 at 19:23, Andreas Klöckner li...@informa.tiker.net
> wrote:
> > Hi all,
> >
> > I don't think PyObject pointers should be accessible via the buffer
> > interface. I'd throw an error, but maybe a (silenceable) warning would
> > do. Would have saved me some bug-hunting.
>
> Can you describe in more detail what problem it caused?

Well, I'm a little bit embarrassed. :) But here goes.

I have one-line MPI wrappers that build on Boost.MPI and Boost.Python. They 
take a numpy array, obtain its buffer, and shove that into Boost.MPI's 
isend(). My code does some sort of term evaluation, and instead of shoving 
the evaluated floating-point vector into MPI, it used the (un-evaluated) 
symbolic vector, which is represented as an object array. My MPI wrapper 
happily handed that object array's buffer to MPI. Oddly, instead of the 
deserved segfault, I just got garbage data on the other end. (Well, some 
other machine's PyObject pointers, really.)

I guess I'm wishing I would've been prevented from falling into that trap, and 
I ended up wondering if there actually is a legitimate use of the buffer 
interface for object arrays.

Andreas





Re: [Numpy-discussion] Should object arrays have a buffer interface?

2008-12-28 Thread Andreas Klöckner
On Monday 29 December 2008, Robert Kern wrote:
> On Sun, Dec 28, 2008 at 20:38, Andreas Klöckner li...@informa.tiker.net
> wrote:
> > On Monday 29 December 2008, Robert Kern wrote:
> > > On Sun, Dec 28, 2008 at 19:23, Andreas Klöckner
> > > li...@informa.tiker.net wrote:
> > > > Hi all,
> > > >
> > > > I don't think PyObject pointers should be accessible via the buffer
> > > > interface. I'd throw an error, but maybe a (silenceable) warning
> > > > would do. Would have saved me some bug-hunting.
> > >
> > > Can you describe in more detail what problem it caused?
> >
> > Well, I'm a little bit embarrassed. :) But here goes.
> >
> > I have one-line MPI wrappers that build on Boost.MPI and Boost.Python.
> > They take a numpy array, obtain its buffer, and shove that into
> > Boost.MPI's isend().
>
> How do you communicate the dtype?

I don't. The app is a PDE solver; both ends work at the same (known) 
precision. Passing an object array was completely wrong, but since my wrapper 
functions only deal with the buffer API, they couldn't really do the checking.
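
In hindsight, a thin Python shim along these lines would have caught it. (A 
sketch only; raw_isend stands in for my actual C++ binding, which I haven't 
reproduced here.)

8< -------------------------------------------------
import numpy

def make_checked_isend(raw_isend, expected_dtype=numpy.float64):
    # wrap the raw buffer-level isend in a dtype-aware shim
    def isend_array(dest, tag, arr):
        if arr.dtype != numpy.dtype(expected_dtype):
            raise TypeError("refusing to send dtype %s (expected %s)"
                            % (arr.dtype, expected_dtype))
        return raw_isend(dest, tag, buffer(arr))
    return isend_array
8< -------------------------------------------------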

Andreas






[Numpy-discussion] ANN: PyCuda

2008-06-22 Thread Andreas Klöckner
Hi all,

I am happy to announce the availability of PyCuda [1,8], which is a 
value-added Python wrapper around Nvidia's CUDA [2] GPU Computation 
framework. In the presence of other wrapping modules [3,4], why would you 
want to use PyCuda?

* It's designed to work and interact with numpy.

* RAII [5], i.e. object cleanup is tied to the lifetime of objects. This 
idiom makes it much easier to write correct, leak- and crash-free code. 
PyCuda knows about lifetime dependencies, too, so (for example) it won’t 
detach from a context before all memory allocated in it is also freed.

* Convenience. Abstractions like pycuda.driver.SourceModule [6] and 
pycuda.gpuarray.GPUArray [7] make CUDA programming even more convenient than 
with Nvidia’s C-based runtime.

* Completeness. PyCuda puts the full power of CUDA’s driver API at your 
disposal, if you wish.

* Automatic Error Checking. All CUDA errors are automatically translated into 
Python exceptions.

* Speed. PyCuda’s base layer is written in C++, so all the niceties above are 
virtually free.

* Helpful documentation [8] with plenty of examples.
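
To make the above concrete, here is a minimal sketch against the API as 
documented in [6] and [7] (pycuda.autoinit is assumed to set up a context, 
as in the distributed examples):

8< -------------------------------------------------
import numpy
import pycuda.autoinit                  # context setup, torn down via RAII
import pycuda.gpuarray as gpuarray
from pycuda.driver import SourceModule  # see [6]

mod = SourceModule("""
__global__ void double_it(float *a)
{ a[threadIdx.x] *= 2; }
""")

a = numpy.random.randn(16).astype(numpy.float32)
a_gpu = gpuarray.to_gpu(a)              # numpy interop, see [7]
mod.get_function("double_it")(a_gpu.gpudata, block=(16, 1, 1))
assert numpy.allclose(a_gpu.get(), 2*a)  # CUDA errors raise exceptions
8< -------------------------------------------------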

If you run into any issues using the code, don't hesitate to post here or get 
in touch.

Andreas

[1] http://mathema.tician.de/software/pycuda
[2] http://nvidia.com/cuda
[3] http://code.google.com/p/pystream/
[4] ftp://ftp.graviscom.com/pub/code/python-cuda
[5] http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization
[6] http://tiker.net/doc/pycuda/driver.html#pycuda.driver.SourceModule
[7] http://tiker.net/doc/pycuda/array.html#pycuda.gpuarray.GPUArray
[8] http://tiker.net/doc/pycuda -- click here!




Re: [Numpy-discussion] ANN: PyCuda

2008-06-22 Thread Andreas Klöckner
On Sunday 22 June 2008, Kevin Jacobs [EMAIL PROTECTED] wrote:
> Thanks for the clarification.  That makes perfect sense.  Do you have any
> feelings on the relative performance of GPUArray versus CUBLAS?

Same. If you check out the past version of PyCuda that still has CUBLAS, there 
are files test/test_{cublas,gpuarray}_speed.py. In fact, since CUBLAS does 
not implement three-operand z = x + y, it requires an extra copy that 
GPUArray can avoid. If you're into lies, damned lies and benchmarks, you 
could say that GPUArray is actually twice as fast.  :)
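
What three-operand means in GPUArray terms (a sketch, assuming 
pycuda.gpuarray as in the announcement above):

8< -------------------------------------------------
import numpy
import pycuda.autoinit
import pycuda.gpuarray as gpuarray

x = gpuarray.to_gpu(numpy.random.rand(1000).astype(numpy.float32))
y = gpuarray.to_gpu(numpy.random.rand(1000).astype(numpy.float32))
z = x + y   # one kernel writes a fresh z; no extra copy as with saxpy
8< -------------------------------------------------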

> The first part of install.rst still says: "This tutorial will walk you
> through the process of building PyUblas."

Oops. Thanks. Fixed.

Andreas




[Numpy-discussion] Fancy index assign ignores extra assignees

2008-06-19 Thread Andreas Klöckner
Hi all,

Is this supposed to be like that, i.e. is the fancy __setitem__ supposed to 
not complain about unused assignees?

>>> v = zeros((10,))
>>> z = [1,2,5]
>>> v[z] = [1,2,4,5]
>>> v
array([ 0.,  1.,  2.,  0.,  0.,  4.,  0.,  0.,  0.,  0.])

Contrast with:

>>> v[1:3] = [1,2,3,4]
Traceback (most recent call last):
  File "<console>", line 1, in <module>
ValueError: shape mismatch: objects cannot be broadcast to a single shape
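
To make "complain" concrete, a user-side sketch of the check I would expect 
(strict_setitem is a hypothetical helper, not a proposal for the actual 
spelling):

8< -------------------------------------------------
import numpy

def strict_setitem(arr, idx, values):
    # raise if the assignment would silently drop extra assignees
    values = numpy.asarray(values)
    selected = numpy.asarray(arr[idx])
    if selected.size != values.size:
        raise ValueError("shape mismatch: %d values for %d selected elements"
                         % (values.size, selected.size))
    arr[idx] = values
8< -------------------------------------------------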

Andreas




Re: [Numpy-discussion] embedded arrays

2008-06-07 Thread Andreas Klöckner
On Friday 06 June 2008, Thomas Hrabe wrote:
> Furthermore, I sometimes get a
> Segmentation fault
> Illegal instruction
>
> and sometimes it works.
>
> It might be a memory leak, due to the segfault and the arbitrary behavior?

Shameless plug: PyUblas [1] will take care of the nasty bits of wrapping C++ 
code for numpy. Including getting the refcounting right. :)

Andreas

[1] http://tiker.net/doc/pyublas/




Re: [Numpy-discussion] Starting to work on runtime plugin system for plugin (automatic sse optimization, etc...)

2008-04-29 Thread Andreas Klöckner
On Tuesday 29 April 2008, Lisandro Dalcin wrote:
> Your implementation makes use of low-level dlopening. Then, you are
> going to have to manage all the oddities of runtime loading on the
> different systems.

Argh. -1 for a hard dependency on dlopen(). At some point in my life, I might 
be forced to compile numpy on an IBM Bluegene/L, which does *not* have 
dynamic linking at all. (Btw, anybody done something like this before?)

Andreas




Re: [Numpy-discussion] Starting to work on runtime plugin system for plugin (automatic sse optimization, etc...)

2008-04-29 Thread Andreas Klöckner
On Tuesday 29 April 2008, David Cournapeau wrote:
> Andreas Klöckner wrote:
> > Argh. -1 for a hard dependency on dlopen().
>
> There is no hard dependency on dlopen, there is a hard dependency on
> runtime loading, because, well, that's the point of a plugin system. It
> should not be difficult to disable the plugin system on platforms that do
> not support it (and do as today), but I am not sure it is really useful.

As long as it's easy to disable (for example with a preprocessor define), I 
guess I'm ok.

> > At some point in my life, I might
> > be forced to compile numpy on an IBM Bluegene/L, which does *not* have
> > dynamic linking at all. (Btw, anybody done something like this before?)
>
> How will you build numpy in the case of a system without dynamic linking?
> The only solution is then to build numpy and link it statically to the
> python interpreter. Systems without dynamic linking are common (embedded
> systems), though.

Yes, obviously everything will need to be linked into one big static 
executable blob. I am somewhat certain that distutils will be of no help 
there, so I will need to roll my own. There is a CMake-based build of 
Python for BG/L, I was planning to work off that.

But so far, I might not end up having to do all that, for which I'd be 
endlessly grateful.

Andreas




Re: [Numpy-discussion] Starting to work on runtime plugin system for plugin (automatic sse optimization, etc...)

2008-04-29 Thread Andreas Klöckner
On Tuesday 29 April 2008, David Cournapeau wrote:
> Andreas Klöckner wrote:
> > Yes, obviously everything will need to be linked into one big static
> > executable blob. I am somewhat certain that distutils will be of no help
> > there, so I will need to roll my own. There is a CMake-based build of
> > Python for BG/L, I was planning to work off that.
>
> You will have to build numpy too. Not that I want to discourage you, but
> that will be a hell of a lot of work.

Good news is that Bluegene/P (the next version of that architecture) *does* 
support dynamic linking. It's probably broken in some obscure way, but that's 
(hopefully) better than nonexistent. :)

In any case, if I can't dodge porting my code to BG/L, you'll hear from me. :)

Andreas




Re: [Numpy-discussion] access ndarray in C++

2008-04-23 Thread Andreas Klöckner
On Wednesday 23 April 2008, Christopher Barker wrote:
> NOTE:
> Most folks now think that the pain of writing extensions completely by
> hand is not worth it -- it's just too easy to make reference counting
> mistakes, etc. Most folks are now using one of:
>
> Cython (or Pyrex)
> SWIG
> ctypes

IMO, all of these deal better with C than they do with C++. There are also a 
number of more C++-affine solutions:

- Boost Python [1]. Especially if you want usable C++ integration. (ie. more 
than basic templates, etc.)

- sip [2]. Used for PyQt.

Andreas

[1] http://www.boost.org/doc/libs/1_35_0/libs/python/doc/index.html
[2] http://www.riverbankcomputing.co.uk/sip/index.php




Re: [Numpy-discussion] access ndarray in C++

2008-04-23 Thread Andreas Klöckner
On Wednesday 23 April 2008, Christopher Barker wrote:
> What's the status of the Boost array object? Maintained? Updated for
> recent numpy?

The numeric.hpp included in Boost.Python is a joke. It does not use the native 
API.

PyUblas [1] fills this gap, by allowing you to use Boost.Ublas on the C++ side 
and Numpy on the Python side. It is somewhat like what Hoyt describes, except 
for a different environment. Here's a table:

                   | Hoyt       | Andreas
-------------------+------------+--------------
C++ Matrix Library | Blitz++    | Boost.Ublas
Wrapper Generator  | Weave      | Boost.Python
Wrapper            | w_wrap.tgz | PyUblas

:) Sorry, that was too much fun to pass up.

[1] http://tiker.net/doc/pyublas/index.html

> > - sip [2]. Used for PyQt.
>
> Any numpy-specific stuff for sip?

Not as far as I'm aware. In fact, I don't know of any uses of sip outside of 
Qt/KDE-related things.

Andreas




Re: [Numpy-discussion] numpy setup.py too restrictive, prevents use of fblas with cblas

2008-04-16 Thread Andreas Klöckner
On Wednesday 16 April 2008, Stéfan van der Walt wrote:
> The inclusion of those cblas routines sounds like a good idea.  Could
> you describe which we need, and what would be required to get this
> done?

Suppose cblas gets included in numpy, but for some reason someone decides to 
link another copy of cblas with their (separate) extension. Can we be certain 
that this does not lead to crashes on any platform supported by numpy?

Andreas




Re: [Numpy-discussion] vander() docstring

2008-04-11 Thread Andreas Klöckner
On Friday 11 April 2008, Robert Kern wrote:
> On Thu, Apr 10, 2008 at 10:57 PM, Charles R Harris
> [EMAIL PROTECTED] wrote:
> > Turns out it matches the matlab definition. Maybe we just need another
> > function: vandermonde
>
> -1  It's needless duplication.

Agree. Let's just live with Matlab's definition.

Andreas




Re: [Numpy-discussion] vander() docstring

2008-04-09 Thread Andreas Klöckner
On Wednesday 26 March 2008, Charles R Harris wrote:
> The docstring is incorrect. The Vandermonde matrix produced is compatible
> with numpy polynomials that also go from high to low powers. I would have
> done it the other way round, so index matched power, but that isn't how it
> is.

Patch attached.

Andreas
Index: numpy/lib/twodim_base.py
===================================================================
--- numpy/lib/twodim_base.py	(Revision 5001)
+++ numpy/lib/twodim_base.py	(working copy)
@@ -148,7 +148,7 @@
     X = vander(x,N=None)
 
     The Vandermonde matrix of vector x.  The i-th column of X is the
-    the i-th power of x.  N is the maximum power to compute; if N is
+    the N-(i+1)-th power of x.  N is the maximum power to compute; if N is
     None it defaults to len(x).
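
For concreteness, the behavior the corrected line describes (with N=3, 
column i holds the N-(i+1)-th power):

>>> from numpy import *
>>> vander(array([1,2,3,5]), 3)
array([[ 1,  1,  1],
       [ 4,  2,  1],
       [ 9,  3,  1],
       [25,  5,  1]])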
 
 




Re: [Numpy-discussion] vander() docstring

2008-04-09 Thread Andreas Klöckner
Hi Chuck, all,

On Wednesday 09 April 2008, Charles R Harris wrote:
> It would affect polyfit, where the powers correspond to the numpy
> polynomial coefficients. That can be fixed, and as far as I can determine
> that is the only numpy function that uses vander, but it might break some
> software out there in the wild. Maybe we can put it on the list for 1.1.
> I'd like to change numpy polynomials also, but that is probably a mod too
> far.

IMHO,

a) 1.0.5 should ship with the docstring fixed, and nothing else.

b) maybe we should deprecate the numpy.*-contained polynomial functions and 
move this stuff to numpy.poly for 1.1, and use this opportunity to fix the 
ordering in the moved functions.

?

Andreas




Re: [Numpy-discussion] packaging scipy (was Re: Simple financial functions for NumPy)

2008-04-09 Thread Andreas Klöckner
On Wednesday 09 April 2008, Charles R Harris wrote:
> import numpy.linalg as la ?

Yes! :)

Andreas




Re: [Numpy-discussion] packaging scipy (was Re: Simple financial functions for NumPy)

2008-04-07 Thread Andreas Klöckner
On Monday 07 April 2008, Robert Kern wrote:
> I would prefer not to do it at all. We've just gotten people moved
> over from Numeric; I'd hate to break their trust again.

+1.

IMO, numpy has arrived at a state where there's just enough namespace clutter 
to allow most use cases to get by without importing much sub-namespace junk, 
and I think that's a good place to be (and to stay). 

For now, I'd be very careful about adding more.

> > It is really nice to think about having NumPy Core, NumPy Full,
> > NumPyKits, SciPy Core, SciPy Full and SciPyKits.
>
> Really? It gives me the shivers, frankly.

Couldn't agree more.

Andreas




Re: [Numpy-discussion] site.cfg doesnt function?

2008-04-07 Thread Andreas Klöckner
Hi Nadav,

On Monday 07 April 2008, Nadav Horesh wrote:
> [snip]

Try something like this:

[atlas]
library_dirs = /users/kloeckner/mach/x86_64/pool/lib,/usr/lib
atlas_libs = lapack, f77blas, cblas, atlas

Andreas




Re: [Numpy-discussion] packaging scipy (was Re: Simple financial functions for NumPy)

2008-04-07 Thread Andreas Klöckner
On Monday 07 April 2008, Stéfan van der Walt wrote:
> I wouldn't exactly call 494 functions "just enough namespace clutter";
> I'd much prefer to have a clean API to work with.

Not to bicker, but...

>>> import numpy
>>> len(dir(numpy))
494
>>> numpy.__version__
'1.0.4'
>>> funcs = [s for s in dir(numpy) if type(getattr(numpy, s)) in
...     [type(numpy.array), type(numpy.who)]]
>>> len(funcs)
251
>>> classes = [s for s in dir(numpy) if type(getattr(numpy, s)) ==
...     type(numpy.ndarray)]
>>> len(classes)
88
>>> ufuncs = [s for s in dir(numpy) if type(getattr(numpy, s)) ==
...     type(numpy.sin)]
>>> len(ufuncs)
69

(and, therefore, another 86 names of fluff)

I honestly don't see much of a problem.

The only things that maybe should not have been added to numpy.* are the 
polynomial functions and the convolution windows, conceptually. But in my 
book that's not big enough to even think of breaking people's code for.

Andreas
Proud Member of the Flat Earth Society




[Numpy-discussion] Forcing the use of -lgfortran

2008-04-05 Thread Andreas Klöckner
Hi all,

I'm having trouble getting numpy to compile something usable on a cluster I'm 
using, in particular I see

8< -------------------------------------------------
ImportError: /users/kloeckner/mach/x86_64/pool/lib/python2.5/site-packages/numpy/linalg/lapack_lite.so:
undefined symbol: _gfortran_st_write
8< -------------------------------------------------

Key point: I need -lgfortran on the link command line, or else I get 
unresolved symbols stemming from my LAPACK library.

But even if I add this:

8< -------------------------------------------------
[blas_opt]
libraries = f77blas, cblas, atlas, gfortran
library_dirs = /users/kloeckner/mach/x86_64/pool/lib
include_dirs = /users/kloeckner/mach/x86_64/pool/include
#
[lapack_opt]
libraries = lapack, f77blas, cblas, atlas, gfortran
library_dirs = /users/kloeckner/mach/x86_64/pool/lib
8< -------------------------------------------------

to site.cfg, numpy seemingly ignores this request and uses ATLAS's ptblas 
instead (which I positively do *not* want it to use). How can I fix this?

This is what I get for __config__.py:

8< -------------------------------------------------
blas_opt_info={'libraries': ['ptf77blas', 'ptcblas', 'atlas'],
  'library_dirs': ['/users/kloeckner/mach/x86_64/pool/lib'],
  'language': 'c',
  'define_macros': [('ATLAS_INFO', '"\\"3.8.1\\""')],
  'include_dirs': ['/users/kloeckner/mach/x86_64/pool/include']}
atlas_blas_threads_info={'libraries': ['ptf77blas', 'ptcblas', 'atlas'],
  'library_dirs': ['/users/kloeckner/mach/x86_64/pool/lib'],
  'language': 'c',
  'include_dirs': ['/users/kloeckner/mach/x86_64/pool/include']}
lapack_opt_info={'libraries': ['lapack', 'ptf77blas', 'ptcblas', 'atlas'],
  'library_dirs': ['/users/kloeckner/mach/x86_64/pool/lib'],
  'language': 'f77',
  'define_macros': [('ATLAS_INFO', '"\\"3.8.1\\""')],
  'include_dirs': ['/users/kloeckner/mach/x86_64/pool/include']}
8< -------------------------------------------------

Thanks,
Andreas




Re: [Numpy-discussion] Forcing the use of -lgfortran

2008-04-05 Thread Andreas Klöckner
I can answer my own question now:

1) Option --fcompiler=gnu95
2) Add the following to site.cfg

[atlas]
library_dirs = /users/kloeckner/mach/x86_64/pool/lib,/usr/lib
atlas_libs = lapack, f77blas, cblas, atlas
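
For the record, the option in 1) goes on the build command line:

8< -------------------------------------------------
python setup.py build --fcompiler=gnu95
8< -------------------------------------------------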

Andreas

On Sunday 06 April 2008, Andreas Klöckner wrote:
> Hi all,
>
> I'm having trouble getting numpy to compile something usable on a cluster
> I'm using, in particular I see
>
> [snip -- full message above]





[Numpy-discussion] output arguments for dot(), tensordot()

2008-04-01 Thread Andreas Klöckner
Hi all,

is there a particular reason why dot() and tensordot() don't have output 
arguments?
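
For concreteness, the kind of interface I mean (hypothetical at the time of 
writing; later numpy versions did grow an out= argument for dot(), though 
not for tensordot()):

8< -------------------------------------------------
import numpy

a = numpy.random.rand(100, 100)
b = numpy.random.rand(100, 100)
out = numpy.empty((100, 100))

# desired: write the product into preallocated storage, no temporary
numpy.dot(a, b, out=out)
8< -------------------------------------------------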

Andreas




[Numpy-discussion] vander() docstring

2008-03-26 Thread Andreas Klöckner
Hi all,

The docstring for vander() seems to contradict what the function does. In 
particular, the columns in the vander() output seem reversed wrt its 
docstring. I feel like one of the two needs to be fixed, or is there 
something I'm not seeing?

This here is fresh from the Numpy examples page:

8< --- docstring -----------------------------------
X = vander(x,N=None)

The Vandermonde matrix of vector x.  The i-th column of X is the
the i-th power of x.  N is the maximum power to compute; if N is
None it defaults to len(x).

8< --- example -------------------------------------
>>> from numpy import *
>>> x = array([1,2,3,5])
>>> N = 3
>>> vander(x,N) # Vandermonde matrix of the vector x
array([[ 1,  1,  1],
       [ 4,  2,  1],
       [ 9,  3,  1],
       [25,  5,  1]])
8< -------------------------------------------------

Andreas




Re: [Numpy-discussion] __iadd__(ndarray<int>, ndarray<float>)

2008-03-25 Thread Andreas Klöckner
On Tuesday 25 March 2008, Nadav Horesh wrote:
> Scalars are immutable objects in Python. Thus the += (and alike) are fake:

Again, thanks for the explanation. IMHO, whether or not they are fake is an 
implementation detail. You shouldn't have to know Python's guts to be able to 
use Numpy successfully. Even if they weren't fake, implementing my suggested 
semantics in Numpy wouldn't be particularly hard.

> [snip]
> a += 3 is really equivalent to a = a+3.

Except when it isn't.

> [snip]
> numpy convention is consistent
> with Python's spirit.

A matter of taste.

> I really use that fact to write arr1 +=
> something, in order to be sure that the type of arr1 is conserved, and
> write arr1 = arr1+something, to allow upward type casting.

I'm not trying to make the operation itself go away. I'm trying to make the 
syntax beginner-safe. Complete loss of precision without warning is not a 
meaning that I, as a toolkit designer, would assign to an innocent-looking 
inplace operation. My hunch is that many people who start with Numpy will 
spend an hour of their lives hunting a spurious bug caused by this. I have. 
Think of the time we can save humanity. :)

Andreas




[Numpy-discussion] __iadd__(ndarray<int>, ndarray<float>)

2008-03-24 Thread Andreas Klöckner
Hi all,

I just got tripped up by this behavior in Numpy 1.0.4:

>>> u = numpy.array([1,3])
>>> v = numpy.array([0.2,0.1])
>>> u += v
>>> u
array([1, 3])

I think this is highly undesirable and should be fixed, or at least warned 
about. Opinions?

Andreas




Re: [Numpy-discussion] __iadd__(ndarray<int>, ndarray<float>)

2008-03-24 Thread Andreas Klöckner
On Monday 24 March 2008, Stéfan van der Walt wrote:
> > I think this is highly undesirable and should be fixed, or at least
> > warned about. Opinions?
>
> I know the result is surprising, but it follows logically.  You have
> created two integers in memory, and now you add 0.2 and 0.1 to both --
> not enough to flip them over to the next value.  The equivalent in C
> is roughly:

> [snip]

Thanks for the explanation. By now I've even found the fat WARNING in the 
Numpy book. 

I understand *why* this happens, but I still don't think it's a particularly 
sensible thing to do.

I found past discussion on this on the list:
http://article.gmane.org/gmane.comp.python.numeric.general/2924/match=inplace+int+float
but the issue didn't seem finally settled then. If I missed later discussions, 
please let me know.

Question: If it's a known trap, why not change it?

To me, it's the same idea as 3/4==0 in Python--if you know C, it makes sense. 
OTOH, Python itself will silently upcast on int+=float, and they underwent 
massive breakage to make 3/4==0.75.
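
For contrast, the scalar behavior I mean:

>>> a = 3
>>> a += 0.5
>>> a
3.5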

I see 2.5 acceptable resolutions of ndarray<int> += ndarray<float>, in order 
of preference:

- Raise an error, but add a lightweight wrapper, such as 
int_array += downcast_ok(float_array)
to allow the operation anyway.

- Raise an error unconditionally, forcing the user to make a typecast copy.

- Silently upcast the target. This is no good because it breaks existing code 
non-obviously.

I'd provide a patch if there's any interest.

Andreas




Re: [Numpy-discussion] __iadd__(ndarray<int>, ndarray<float>)

2008-03-24 Thread Andreas Klöckner
On Tuesday 25 March 2008, Travis E. Oliphant wrote:
> > Question: If it's a known trap, why not change it?
>
> It also has useful applications.  Also, it can only happen with a
> bump in version number to 1.1.

I'm not trying to make the functionality go away. I'm arguing that

int_array += downcast_ok(float_array)

should be the syntax for it. downcast_ok could be a view of float_array's data 
with an extra flag set, or a subclass.
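
Roughly like this (a sketch of the wrapper side only; the check in numpy's 
inplace operations is the part that would need real work):

8< -------------------------------------------------
import numpy

class _DowncastOK(numpy.ndarray):
    # marker subclass: "I consent to lossy downcasting in inplace ops"
    pass

def downcast_ok(float_array):
    # a view carrying the consent flag; numpy's inplace ops would have
    # to check for it before allowing the lossy assignment
    return float_array.view(_DowncastOK)
8< -------------------------------------------------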

Andreas

