Re: [Numpy-discussion] Using numpydoc outside of numpy

2009-10-22 Thread David Warde-Farley
On Wed, Oct 21, 2009 at 11:13:35AM -0400, Michael Droettboom wrote:
 Sorry for the noise.  Found the instructions in HOWTO_BUILD_DOCS.txt.

Not sure if this is part of what you discovered, but numpydoc is at the Cheese 
Shop too:

http://pypi.python.org/pypi/numpydoc

David


Re: [Numpy-discussion] Object-oriented SIMD API for Numpy

2009-10-22 Thread Sturla Molden
Robert Kern skrev:
 No, I think you're right. Using SIMD to refer to numpy-like
 operations is an abuse of the term not supported by any outside
 community that I am aware of. Everyone else uses SIMD to describe
 hardware instructions, not the application of a single syntactical
 element of a high level language to a non-trivial data structure
 containing lots of atomic data elements.
   
Then you should pick up a book on parallel computing.

It is common to differentiate between four classes of computers: SISD, 
MISD, SIMD, and MIMD machines.

A SISD system is the classical von Neumann machine. A MISD system is a 
pipelined von Neumann machine, for example the x86 processor.

A SIMD system is one that has one CPU dedicated to control, and a large 
collection of subordinate ALUs for computation. Each ALU has a small 
amount of private memory. The IBM Cell processor is the typical SIMD 
machine.

A special class of SIMD machines are the so-called vector machines, of 
which the most famous is the Cray C90. The MMX and SSE instructions in 
Intel Pentium processors are an example of vector instructions. Some 
computer scientists regard vector machines as a subtype of MISD systems, 
orthogonal to pipelines, because there are no subordinate ALUs with 
private memory.

MIMD systems have multiple independent CPUs. MIMD systems come in two 
categories: shared-memory processors (SMP) and distributed-memory 
machines (also called cluster computers). The dual- and quad-core x86 
processors are shared-memory MIMD machines.

Many people associate the word SIMD with SSE due to Intel marketing. But 
to the extent that vector machines are MISD orthogonal to pipelined von 
Neumann machines, SSE cannot be called SIMD.

NumPy is a software simulated vector machine, usually executed on MISD 
hardware. To the extent that vector machines (such as SSE and C90) are 
SIMD, we must call NumPy an object-oriented SIMD library.


S.M.



Re: [Numpy-discussion] Object-oriented SIMD API for Numpy

2009-10-22 Thread Matthieu Brucher
 OK, I should have said "Object-oriented SIMD API that is implemented
 using hardware SIMD instructions".

 No, I think you're right. Using SIMD to refer to numpy-like
 operations is an abuse of the term not supported by any outside
 community that I am aware of. Everyone else uses SIMD to describe
 hardware instructions, not the application of a single syntactical
 element of a high level language to a non-trivial data structure
 containing lots of atomic data elements.

I agree with Sturla; for instance nVidia GPUs do SIMD computations
with blocks of 16 values at a time, but the underlying hardware can't
compute on that much data at once. It's SIMD from our point of view,
just like NumPy does ;)

Matthieu
-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] Object-oriented SIMD API for Numpy

2009-10-22 Thread Sturla Molden
Matthieu Brucher skrev:
 I agree with Sturla; for instance nVidia GPUs do SIMD computations
 with blocks of 16 values at a time, but the underlying hardware can't
 compute on that much data at once. It's SIMD from our point of view,
 just like NumPy does ;)

   
A computer with a CPU and a GPU is a SIMD machine by definition, due to 
the single CPU and the multiple ALUs in the GPU, which are subordinate 
to the CPU. But with modern computers, these classifications become a 
bit unclear.

S.M.






Re: [Numpy-discussion] Object-oriented SIMD API for Numpy

2009-10-22 Thread Sturla Molden
Mathieu Blondel skrev:
 Peter Norvig suggested merging Numpy into Cython, but he didn't
 mention SIMD as the reason (this one is from me). 

I don't know what Norvig said or meant.

However:

There is NumPy support in Cython. Cython has a general syntax applicable 
to any PEP 3118 buffer. (As NumPy is not yet PEP 3118 compliant, NumPy 
arrays are converted to Py_buffer structs behind the scenes.)

Support for optimized vector expressions might be added later. 
Currently, slicing works as with NumPy in Python, producing slice 
objects and invoking NumPy's own code, instead of being converted to 
fast inlined C.

The PEP 3118 buffer syntax in Cython can be used to port NumPy to Py3k, 
replacing the current C source. That might be what Norvig meant if he 
suggested merging NumPy into Cython.


S.M.


Re: [Numpy-discussion] Object-oriented SIMD API for Numpy

2009-10-22 Thread Mathieu Blondel
On Thu, Oct 22, 2009 at 5:05 PM, Sturla Molden stu...@molden.no wrote:
 Mathieu Blondel skrev:

 The PEP 3118 buffer syntax in Cython can be used to port NumPy to Py3k,
 replacing the current C source. That might be what Norvig meant if he
 suggested merging NumPy into Cython.

As I wrote earlier in this thread, I confused Cython and CPython. PN
was suggesting to include Numpy in the CPython distribution (not
Cython). The reason why was also given earlier.

Mathieu


Re: [Numpy-discussion] Object-oriented SIMD API for Numpy

2009-10-22 Thread Sturla Molden
Mathieu Blondel skrev:
 As I wrote earlier in this thread, I confused Cython and CPython. PN
 was suggesting to include Numpy in the CPython distribution (not
 Cython). The reason why was also given earlier.

   
First, that would currently not be possible, as NumPy does not support 
Py3k. Second, the easiest way to port NumPy to Py3k is Cython, which 
would prevent adoption in the Python standard library; at least they 
would have to change their current policy. Also, with NumPy in the 
standard library, any modification to NumPy would require a PEP.

But Python should have a PEP 3118 compliant buffer object in the 
standard library, which NumPy could subclass.

S.M.







[Numpy-discussion] Convolution of a masked array

2009-10-22 Thread Nadav Horesh

Is there a way to properly convolve a masked array with a normal 
(non-masked) array?
My specific problem is a convolution of a 2D masked array with a 
separable kernel (a convolution with two 1D arrays, one along each axis).

  Nadav.
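
One common workaround (a sketch only, not from the original thread; it
assumes scipy.ndimage, and the helper name is hypothetical) is to
convolve a zero-filled copy of the data and renormalize by the convolved
mask, so that masked samples neither contribute to nor bias the result:

    import numpy as np
    from scipy import ndimage

    def masked_convolve_separable(marr, krow, kcol):
        # Zero-fill masked samples, convolve each axis with its 1D
        # kernel, then divide by the convolved mask weights so that
        # masked samples carry no weight in the output.
        filled = marr.filled(0.0)
        weights = (~np.ma.getmaskarray(marr)).astype(float)
        num = ndimage.convolve1d(ndimage.convolve1d(filled, krow, axis=0),
                                 kcol, axis=1)
        den = ndimage.convolve1d(ndimage.convolve1d(weights, krow, axis=0),
                                 kcol, axis=1)
        out = num / np.where(den == 0, 1.0, den)
        return np.ma.masked_array(out, mask=(den == 0))

This renormalization assumes a smoothing-type kernel; pixels whose whole
neighborhood is masked stay masked.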

  


[Numpy-discussion] ANN: SciPy October Sprint

2009-10-22 Thread Stéfan van der Walt
Hi all,

The weekend is just around the corner, and we're looking forward to
the sprint!  Here are the details again:


Our patch queue keeps getting longer and longer, so here is an
opportunity to do some spring cleaning (it's spring in South Africa,
at least)!

Please join us for an October SciPy sprint:

   * Date: 24/25 October 2009 (Sat/Sun)
   * More information: http://projects.scipy.org/scipy/wiki/SciPySprint200910

We are looking for volunteers to write documentation, review code, fix
bugs or design marketing material. New contributors are most welcome,
and mentoring will be available.


See you there,

Regards
Stéfan


Re: [Numpy-discussion] Optimized sum of squares

2009-10-22 Thread Gary Ruben
josef.p...@gmail.com wrote:
 Is it really possible to get the same as np.sum(a*a, axis) with
 tensordot if a.ndim >= 2?
 Any way I try the something_else, I get extra terms as in np.dot(a.T, a)

Just to answer this question, np.dot(a,a) is equivalent to 
np.tensordot(a,a, axes=(0,0)),
but the latter is about 10x slower for me. That is, you have to specify 
the axes for both arrays for tensordot:

In [16]: a=rand(1000)

In [17]: timeit dot(a,a)
100000 loops, best of 3: 3.51 µs per loop

In [18]: timeit tensordot(a,a,(0,0))
10000 loops, best of 3: 37.6 µs per loop

In [19]: tensordot(a,a,(0,0))==dot(a,a)
Out[19]: True
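
To illustrate the extra terms from the original question: for a 2D
array, tensordot contracts axis 0 over every pair of columns, so the
per-column sums of squares are only the diagonal of the resulting Gram
matrix (a quick sketch):

    import numpy as np

    a = np.random.rand(5, 3)
    ss = np.sum(a*a, axis=0)                # per-column sums of squares, shape (3,)
    gram = np.tensordot(a, a, axes=(0, 0))  # same as np.dot(a.T, a), shape (3, 3)
    # the off-diagonal entries are the "extra terms"; the sums of
    # squares sit on the diagonal:
    assert np.allclose(ss, np.diag(gram))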


Re: [Numpy-discussion] why doesn't binary_repr support arrays

2009-10-22 Thread Ralf Gommers
On Tue, Oct 20, 2009 at 11:17 AM, markus.proel...@ifm.com wrote:


 Hello,

 I'm always wondering why binary_repr doesn't allow arrays as input values.
 I always have to use a workaround like:

 import numpy as np

 def binary_repr(arr, width=None):
     binary_list = map((lambda foo: np.binary_repr(foo, width)),
                       arr.flatten())
     str_len_max = len(np.binary_repr(arr.max(), width=width))
     str_len_min = len(np.binary_repr(arr.min(), width=width))
     if str_len_max > str_len_min:
         str_len = str_len_max
     else:
         str_len = str_len_min
     binary_array = np.fromiter(binary_list, dtype='|S'+str(str_len))
     return binary_array.reshape(arr.shape)
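
For instance, the workaround above should give something like:

    >>> binary_repr(np.array([[1, 2], [3, 4]]))
    array([['1', '10'],
           ['11', '100']],
          dtype='|S3')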

 Is there a reason why arrays are not supported or is there another function
 that does support arrays?


Not sure if there was/is a reason, but imho it would be nice to have support
for arrays. Also in base_repr. Could you file a ticket in trac?

Cheers,
Ralf



 Thanks,

 Markus




Re: [Numpy-discussion] Object-oriented SIMD API for Numpy

2009-10-22 Thread Gregor Thalhammer
2009/10/21 Neal Becker ndbeck...@gmail.com

 ...
  I once wrote a module that replaces the built-in transcendental
  functions of numpy by optimized versions from Intel's vector math
  library. If someone is interested, I can publish it. In my experience it
  was of little use since real world problems are limited by memory
  bandwidth. Therefore extending numexpr with optimized transcendental
  functions was the better solution. Afterwards I discovered that I could
  have saved the effort of the first approach since gcc is able to use
  optimized functions from Intel's vector math library or AMD's core math
  library, see the docs of -mveclibabi. You just need to recompile numpy
  with proper compiler arguments.
 

 I'm interested.  I'd like to try AMD rather than Intel, because AMD is
 easier to obtain.  I'm running on an Intel machine; I hope that doesn't
 matter too much.

 What exactly do I need to do?

I once tried to recompile numpy with AMD's ACML (AMD Core Math Library). 
Unfortunately I lost the settings after an upgrade. What I remember: 
install ACML (and read the docs ;-) ), mess with the compiler args 
(-mveclibabi and related), and link with ACML. Then you get faster 
pow/sin/cos/exp. The transcendental functions of ACML also work on Intel 
processors with the same performance. I did not try the Intel SVML, which 
belongs to the Intel compilers.
This is different from the first approach, which is a small wrapper for 
Intel's VML, put into a Python module, and which can inject its ufuncs 
(via numpy.set_numeric_ops) into numpy. If you want I can send the 
package per private email.
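
(A toy illustration of the injection mechanism mentioned above - this is
not Gregor's actual VML wrapper; any callable can stand in for the
optimized ufunc:)

    import numpy as np

    def add_mod5(x, y):
        # stand-in for an optimized replacement ufunc
        return np.add(x, y) % 5

    old_funcs = np.set_numeric_ops(add=add_mod5)  # '+' now routes to add_mod5
    print(np.arange(4) + np.arange(4))            # -> [0 2 4 1]
    np.set_numeric_ops(**old_funcs)               # restore the originals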


 I see that numpy/site.cfg has an MKL section.  I'm assuming I should not
 touch that, but just mess with gcc flags?

This is for using the LAPACK provided by Intel's MKL. These settings are 
not related to the above-mentioned compiler options.





Re: [Numpy-discussion] Object-oriented SIMD API for Numpy

2009-10-22 Thread Dag Sverre Seljebotn
Robert Kern wrote:
 On Wed, Oct 21, 2009 at 22:32, Mathieu Blondel math...@mblondel.org wrote:
   
 On Thu, Oct 22, 2009 at 11:31 AM, Sturla Molden stu...@molden.no wrote:
 
 Mathieu Blondel skrev:
   
 Hello,

 About one year ago, a high-level, object-oriented SIMD API was added
 to Mono. For example, there is a class Vector4f for vectors of 4
 floats and this class implements methods such as basic operators,
 bitwise operators, comparison operators, min, max, sqrt, shuffle
 directly using SIMD operations.
 
 I think you are confusing SIMD with Intel's MMX/SSE instruction set.
   
  OK, I should have said "Object-oriented SIMD API that is implemented
  using hardware SIMD instructions".
 

 No, I think you're right. Using SIMD to refer to numpy-like
 operations is an abuse of the term not supported by any outside
 community that I am aware of. Everyone else uses SIMD to describe
 hardware instructions, not the application of a single syntactical
 element of a high level language to a non-trivial data structure
 containing lots of atomic data elements.
   
BTW, is there any term for this latter concept that's not "SIMD" or 
"vector operation"? It would be good to have a word to distinguish this 
concept from both CPU instructions and linear algebra.

(Personally I think describing NumPy as SIMD and using SSE/MMX for CPU 
instructions makes the most sense, but I'm happy to yield to conventions...)

Dag Sverre



[Numpy-discussion] Re: why doesn't binary_repr support arrays

2009-10-22 Thread markus . proeller
numpy-discussion-boun...@scipy.org wrote on 22.10.2009 12:36:46:

 
 
  On Tue, Oct 20, 2009 at 11:17 AM, markus.proel...@ifm.com wrote:
  
  Hello, 
  
  I'm always wondering why binary_repr doesn't allow arrays as input 
  values. I always have to use a workaround like: 
  
  import numpy as np 
  
  def binary_repr(arr, width=None):
      binary_list = map((lambda foo: np.binary_repr(foo, width)),
                        arr.flatten())
      str_len_max = len(np.binary_repr(arr.max(), width=width))
      str_len_min = len(np.binary_repr(arr.min(), width=width))
      if str_len_max > str_len_min:
          str_len = str_len_max
      else:
          str_len = str_len_min
      binary_array = np.fromiter(binary_list, dtype='|S'+str(str_len))
      return binary_array.reshape(arr.shape)
  
  Is there a reason why arrays are not supported or is there another 
  function that does support arrays? 
 
 Not sure if there was/is a reason, but imho it would be nice to have
 support for arrays. Also in base_repr. Could you file a ticket in trac?
 
 Cheers,
 Ralf
  

Okay, I opened a new ticket:

http://projects.scipy.org/numpy/ticket/1270

Markus


Re: [Numpy-discussion] Object-oriented SIMD API for Numpy

2009-10-22 Thread Robert Ferrell

On Oct 22, 2009, at 1:35 AM, Sturla Molden wrote:

 Robert Kern skrev:
 No, I think you're right. Using SIMD to refer to numpy-like
 operations is an abuse of the term not supported by any outside
 community that I am aware of. Everyone else uses SIMD to describe
 hardware instructions, not the application of a single syntactical
 element of a high level language to a non-trivial data structure
 containing lots of atomic data elements.

 Then you should pick up a book on parallel computing.

 It is common to differentiate between four classes of computers: SISD,
 MISD, SIMD, and MIMD machines.

 A SISD system is the classical von Neumann machine. A MISD system is a
 pipelined von Neumann machine, for example the x86 processor.

 A SIMD system is one that has one CPU dedicated to control, and a large
 collection of subordinate ALUs for computation. Each ALU has a small
 amount of private memory. The IBM Cell processor is the typical SIMD
 machine.

 A special class of SIMD machines are the so-called vector machines, of
 which the most famous is the Cray C90. The MMX and SSE instructions in
 Intel Pentium processors are an example of vector instructions. Some
 computer scientists regard vector machines as a subtype of MISD systems,
 orthogonal to pipelines, because there are no subordinate ALUs with
 private memory.

 MIMD systems have multiple independent CPUs. MIMD systems come in two
 categories: shared-memory processors (SMP) and distributed-memory
 machines (also called cluster computers). The dual- and quad-core x86
 processors are shared-memory MIMD machines.

 Many people associate the word SIMD with SSE due to Intel marketing. But
 to the extent that vector machines are MISD orthogonal to pipelined von
 Neumann machines, SSE cannot be called SIMD.

 NumPy is a software simulated vector machine, usually executed on MISD
 hardware. To the extent that vector machines (such as SSE and C90) are
 SIMD, we must call NumPy an object-oriented SIMD library.

This is not the terminology I am familiar with.  Calling NumPy an 
object-oriented SIMD library is very confusing for me.  I worked in 
the parallel computer world for a while (back in the dark ages) and 
this terminology would have been confusing to everyone I dealt with. 
I've also read many parallel computing books.  In my experience SIMD 
refers to hardware, not software.  There is no reason that NumPy can't 
be written to run great (get good speed-ups) on an 8-core shared 
memory system.  That would be a MIMD system, and there's nothing about 
it that doesn't fit with the NumPy abstraction.  And, although SIMD 
can be a subset of MIMD, there are things that can be done in NumPy 
that can be parallelized on MIMD machines but not on SIMD machines 
(e.g. the NumPy vector type is flexible enough that it can store a 
list of tasks, and the operations on that vector can be parallelized 
easily on a shared-memory MIMD machine - task parallelism - but not 
on a SIMD machine).

If we say that NumPy is a software simulated vector machine or an 
object-oriented SIMD library, we are pigeonholing NumPy in a way which 
is too limiting and isn't accurate.  As a user it feels to me that 
NumPy is built around various algebra abstractions, many of which map 
well onto vector machine operations.  That means that many of the 
operations are amenable to efficient implementation on SIMD hardware. 
But, IMO, one of the nice features of NumPy is that it is built around 
high-level operations, and I would hate to see the project go down a 
path which insists that everything in NumPy be efficient on all SIMD 
hardware.

Of course, I would also love to see implementations which take as much 
advantage of available HW as possible (e.g. exploit SIMD HW if 
available).

That's my $0.02, worth only a couple cents less than that.

-robert



Re: [Numpy-discussion] Object-oriented SIMD API for Numpy

2009-10-22 Thread Robert Kern
On Thu, Oct 22, 2009 at 02:35, Sturla Molden stu...@molden.no wrote:
 Robert Kern skrev:
 No, I think you're right. Using SIMD to refer to numpy-like
 operations is an abuse of the term not supported by any outside
 community that I am aware of. Everyone else uses SIMD to describe
 hardware instructions, not the application of a single syntactical
 element of a high level language to a non-trivial data structure
 containing lots of atomic data elements.

 Then you should pick up a book on parallel computing.

I would be delighted to see a reference to one that refers to a high
level language's API as SIMD. Please point one out to me. It's
certainly not any of the ones I have available to me.

 It is common to differentiate between four classes of computers: SISD,
 MISD, SIMD, and MIMD machines.

 A SISD system is the classical von Neumann machine. A MISD system is a
 pipelined von Neumann machine, for example the x86 processor.

 A SIMD system is one that has one CPU dedicated to control, and a large
 collection of subordinate ALUs for computation. Each ALU has a small
 amount of private memory. The IBM Cell processor is the typical SIMD
 machine.

 A special class of SIMD machines are the so-called vector machines, of
 which the most famous is the Cray C90. The MMX and SSE instructions in
 Intel Pentium processors are an example of vector instructions. Some
 computer scientists regard vector machines as a subtype of MISD systems,
 orthogonal to pipelines, because there are no subordinate ALUs with
 private memory.

 MIMD systems have multiple independent CPUs. MIMD systems come in two
 categories: shared-memory processors (SMP) and distributed-memory
 machines (also called cluster computers). The dual- and quad-core x86
 processors are shared-memory MIMD machines.

 Many people associate the word SIMD with SSE due to Intel marketing. But
 to the extent that vector machines are MISD orthogonal to pipelined von
 Neumann machines, SSE cannot be called SIMD.

That's a fair point, but unrelated to whether or not numpy can be
labeled SIMD. These all refer to hardware.

 NumPy is a software simulated vector machine, usually executed on MISD
 hardware. To the extent that vector machines (such as SSE and C90) are
 SIMD, we must call NumPy an object-oriented SIMD library.

numpy does not simulate anything. It is an object-oriented library.
If numpy could be said to simulate a vector machine, then just about
any object-oriented library that overloads operators could. It creates
a false equivalence between numpy and software that actually does
simulate hardware.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] Object-oriented SIMD API for Numpy

2009-10-22 Thread Robert Kern
On Thu, Oct 22, 2009 at 06:20, Dag Sverre Seljebotn
da...@student.matnat.uio.no wrote:
 Robert Kern wrote:
 On Wed, Oct 21, 2009 at 22:32, Mathieu Blondel math...@mblondel.org wrote:

 On Thu, Oct 22, 2009 at 11:31 AM, Sturla Molden stu...@molden.no wrote:

 Mathieu Blondel skrev:

 Hello,

  About one year ago, a high-level, object-oriented SIMD API was added
 to Mono. For example, there is a class Vector4f for vectors of 4
 floats and this class implements methods such as basic operators,
 bitwise operators, comparison operators, min, max, sqrt, shuffle
 directly using SIMD operations.

 I think you are confusing SIMD with Intel's MMX/SSE instruction set.

  OK, I should have said "Object-oriented SIMD API that is implemented
  using hardware SIMD instructions".


 No, I think you're right. Using SIMD to refer to numpy-like
 operations is an abuse of the term not supported by any outside
 community that I am aware of. Everyone else uses SIMD to describe
 hardware instructions, not the application of a single syntactical
 element of a high level language to a non-trivial data structure
 containing lots of atomic data elements.

 BTW, is there any term for this latter concept that's not "SIMD" or
 "vector operation"? It would be good to have a word to distinguish this
 concept from both CPU instructions and linear algebra.

Of course, "vector instruction" and "vectorized operation" sometimes
also refer to the CPU instructions. :-)

I don't think you will get much better than "vectorized operation",
though. While it's ambiguous, it has a long history in the high-level
language world thanks to Matlab.

 (Personally I think describing NumPy as SIMD and using SSE/MMX for CPU
 instructions makes the most sense, but I'm happy to yield to conventions...)

Well, SSE/MMX is also too limiting. Altivec instructions are also in
the same class, and we should be able to use them on PPC platforms.
Regardless of the origin of the term, SIMD is used to refer to all
of these instructions in common practice. Sturla may be right in some
prescriptive sense, but descriptively, he's quite wrong.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


[Numpy-discussion] Sphinx/Numpydoc, attributes and property

2009-10-22 Thread Fabricio Silva
It seems that either Sphinx or NumpyDoc is having trouble with property
attributes.
Consider the following piece of code in foo.py:

class Profil(object):
    """
    Blabla

    Attributes
    ----------
    tfin
    tdeb : float
        Startpoint
    pts : array
        Blabla2.
    """

    def __init__(self):
        self.pts = np.array([[0,1]])

    @property
    def tfin(self):
        """The time horizon endpoint."""
        return self.pts[0,:].max()

    @property
    def tdeb(self):
        """The time horizon startpoint."""
        return self.pts[0,:].min()

and a foo.rst containing

:mod:`foo` -- BlaTitle
======================

.. autoclass:: foo.Profil

produces an attribute table with only pts, but without tfin and tdeb.
How can I handle this?

-- 
Fabrice Silva
Laboratory of Mechanics and Acoustics (CNRS, UPR 7051)




Re: [Numpy-discussion] Object-oriented SIMD API for Numpy

2009-10-22 Thread Sturla Molden
Robert Kern skrev:
 I would be delighted to see a reference to one that refers to a high
 level language's API as SIMD. Please point one out to me. It's
 certainly not any of the ones I have available to me.

   
Numerical Recipes in Fortran 90, pages 964 and 985-986, describes the 
syntax of Fortran 90 and 95 as SIMD.

Peter Pacheco's book on MPI describes the difference between von Neumann 
machines and vector machines as analogous to the difference between 
Fortran 77 and Fortran 90 (with an example of Fortran 90 array slicing). 
He is ambiguous as to whether vector machines really are SIMD, or more 
related to pipelined von Neumann machines.

Grama et al., Introduction to Parallel Computing, describes SIMD as an 
architecture, but it is more or less clear that they mean hardware. 
They do say the Fortran 90 where statement is a primitive used to 
support selective execution on SIMD processors, as conditional execution 
(if statements) is detrimental to performance.

So here we have at least three books claiming that Fortran is a language 
with special primitives for SIMD processors.


 That's a fair point, but unrelated to whether or not numpy can be
 labeled SIMD. These all refer to hardware.
   
Actually I don't think the distinction is that important, as we are 
talking about Turing machines. Also, a lot of what we call hardware is 
actually implemented as software on the chip: the most extreme example 
would be Transmeta, whose chips completely emulated x86 processors in 
software. The vague distinction between hardware and software is why we 
get patents on software in Europe, although pure software patents are 
prohibited. One can always argue that the program and the computer 
together constitute a physical device, and that circumventing patents by 
moving hardware into software should not be allowed. The distinction 
between hardware and software is not as clear as programmers tend to 
believe.

Another thing is that performance issues for vector machines and vector 
languages (Fortran 90, Matlab, NumPy) are similar. Precisely the same 
situations that make NumPy and Matlab code slow are detrimental on 
SIMD/vector hardware. That would for example be long for loops with 
conditional if statements. On the other hand, vectorized operations over 
arrays, possibly using where/find masks, are fast. So although NumPy is 
not executed on a vector machine like the Cray C90, it certainly behaves 
like one performance-wise.
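
For instance, the where-mask style referred to above looks like this in
NumPy (a minimal illustration, mirroring the Fortran 90 where construct):

    import numpy as np

    x = np.linspace(-1.0, 1.0, 5)
    # Branch-free selective execution over the whole array, instead of
    # a long Python-level for loop with an if statement per element:
    y = np.where(x > 0, x**2, 0.0)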

I'd say that a MIMD machine running NumPy is a Turing machine emulating 
a SIMD/vector machine.

And now I am done with this stupid discussion...


Sturla Molden


Re: [Numpy-discussion] Sphinx/Numpydoc, attributes and property

2009-10-22 Thread Fabricio Silva
It seems that


class Profil(object):
    def __init__(self):
        pass

    def bla(self):
        """Blabla."""
        return 0

    @property
    def tdeb(self):
        """The time horizon startpoint."""
        return self.pts[0,:].min()

and a foo.rst containing

:mod:`foo` -- BlaTitle
======================

.. autoclass:: foo.Profil
   :members: bla, tdeb

produces a listing titled Methods with methods bla and tdeb. Although
tdeb is defined as a method, the decorator makes tdeb a property, which
I would treat as an attribute and put in the attribute list. That is
not what is done in sphinx/numpydoc. Who is to blame? Sphinx or
NumpyDoc?


-- 
Fabrice Silva
Laboratory of Mechanics and Acoustics (CNRS, UPR 7051)

