On Apr 8, 2008, at 9:47 AM, Dag Sverre Seljebotn wrote:
> +1 for polishing it and providing option c) as a plugin for now, seeing
> how it goes, and discussing inclusion in main Cython after it has
> proven itself.
>> I'll have to take a closer look at your proposal and compare it a bit
>> more to the other approaches we had so far (especially Dag's work),
>> before I make up my mind about it. Maybe others can already comment a
>> bit deeper on this.
>>
> Since you bring up my name:
>
> a) Clean NumPy integration (that is, with only a pxd file, not a full
> NumPy plugin) needs some kind of metaprogramming support, but it can
> work with either Martin's explicit approach or my implicit approach;
> it doesn't matter much. (The plan is not to use metaprogramming at
> first, but that will be slow, and metaprogramming is key to getting
> full NumPy speed.)

There will be a little bit of metaprogramming required for NumPy support
(e.g. to get the type declarations right), but I think the crucial piece
for making things run efficiently and smoothly is extensive compile-time
evaluation of expressions. To be very specific about NumPy, the ndarray
class is declared in the pxd file roughly as

     cdef extern from "numpy/arrayobject.h":
          ctypedef class numpy.ndarray [object PyArrayObject]:
               cdef char *data             # raw data buffer
               cdef int nd                 # number of dimensions
               cdef npy_intp *dimensions   # shape
               cdef npy_intp *strides      # strides, in bytes
               cdef object base
               cdef dtype descr            # element type description
               cdef int flags
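
With that declaration in scope, those fields are plain C struct reads
from Cython code. As a quick, hypothetical illustration (the function
name and the assumption that the declaration lives in a numpy.pxd are
mine, not from the thread):

     cimport numpy   # assuming the declaration above sits in numpy.pxd

     def describe(numpy.ndarray A):
          # A.nd, A.dimensions[0] and A.strides[0] are direct C field
          # reads; no Python attribute lookup is involved.
          print A.nd, A.dimensions[0], A.strides[0]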

To access an element (i.e. __getitem__) of an ndarray A, one essentially
does

     (<A.descr.type *>(A.data + A.strides[0] * ix))[0]

(well, the actual code is a bit more complicated than this, using
index2ptr and all; strides are counted in bytes, and A.descr.type stands
in for the element's C type). In any case, the point is that if we have
compile-time information about A, then this can be simplified to a
single array lookup with nothing more than compile-time evaluation (and
a little compile-time type analysis). In this case the type parameters
are exactly the instance member fields. If they are not known at compile
time, then the code produced is the same (though it won't be as
completely evaluated). I'll admit I'm waving my hands a bit as to how to
handle the actual types themselves, but I think it could be done in a
similar manner.
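
As a hedged illustration of that point (the element type and memory
layout here are assumptions made for the example, not fixed anywhere in
this thread): if A is known at compile time to hold contiguous C
doubles, the generic byte arithmetic above should constant-fold down to
a single indexed load, roughly

     cimport numpy   # same hypothetical numpy.pxd as above

     # Assumes A.strides[0] == sizeof(double), i.e. a contiguous
     # 1-D array of C doubles.
     cdef double get_item(numpy.ndarray A, int ix):
          # The stride/offset arithmetic folds away at compile time;
          # what remains is one array lookup.
          return (<double *> A.data)[ix]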

This is much weaker than full metaprogramming, but is easy to
understand, implement, and read (especially compared to trying to
implement array indexing as a series of tree transformations).

> b) About my work in relation to this, see the uneval page:
>
> http://wiki.cython.org/enhancements/uneval
>
> If Martin's work is accepted now, and my own approach for
> meta-programming is ever done later, then uneval provides a very
> natural bridge between them. The two seem to be very complementary.
> Martin's is "explicit" and simple but for advanced users; mine is
> "easy-to-use" for beginners but more difficult to really understand
> for advanced users. So doing Martin's first, and then seeing if my
> more complicated approach is really needed, should be fine as long as
> uneval provides a natural transition path.
>
> uneval() would return the same kind of tree that Martin allows work on,
> whatever that tree ends up being (as I understand it the exact syntax
> used is an example; one should add a small API layer on top to isolate
> it more from Cython core).

The uneval idea is a very interesting one, and it certainly has a very
pythonic feel to it. One thing I don't like is that all of these
approaches are very closely coupled to the actual Cython parse tree;
you are right that there should be some abstraction. There is also the
question of when the transformations get done, as I'd imagine some of
them would be type-dependent.
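
To sketch the kind of abstraction I have in mind (purely hypothetical;
none of these names exist in Cython, and this is not a concrete API
proposal), plugin code could be handed thin wrappers rather than raw
parse tree nodes, so the tree internals can change without breaking
plugins:

     # Hypothetical isolation layer; the class and method names are
     # invented for illustration only.
     class ExprHandle(object):
          """Stable, read-only view of an expression node for plugins."""
          def __init__(self, node):
               self._node = node             # the real tree node, hidden
          def expr_type(self):
               return self._node.type        # meaningful after analysis
          def children(self):
               return [ExprHandle(c) for c in self._node.subexpr_nodes()]

Whether a handle exposes type information at all would then depend on
whether a given transformation runs before or after type analysis, which
is exactly the timing question above.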

- Robert
