Stefan Behnel wrote:
> The page mentions things like dynamic memory allocation happening
> automatically behind the scenes, so that
>
> cdef int a[runtime_size]
>
> would locally allocate memory, as would subsequent slicing, I guess.
This, by the way, can be deferred to the C or C++ compiler:
C99: int a[runtime_size];
C++: std::vector<int> dummy(runtime_size);
     int *const a = &dummy[0];
For ANSI C one can use the alloca() function, available on most compilers:
C89: int *const a = (int*)alloca(runtime_size*sizeof(int));
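To make the above concrete, here is a minimal compilable sketch of the C99 approach; sum_squares is a hypothetical helper, not part of any proposal, and only illustrates the allocation pattern (a C89 build would swap the VLA for the alloca() line above):

```c
#include <stddef.h>

/* Hypothetical helper: sum of squares 0..n-1 using a runtime-sized
 * local array. The array is a C99 VLA, as in "int a[runtime_size];"
 * above; with a C89 compiler one would fall back to alloca(). */
static int sum_squares(int runtime_size)
{
    int a[runtime_size];  /* stack-allocated, size known only at runtime */
    for (int i = 0; i < runtime_size; i++)
        a[i] = i * i;

    int total = 0;
    for (int i = 0; i < runtime_size; i++)
        total += a[i];
    return total;
}
```

Note that the VLA lives on the stack and is freed automatically on scope exit, which is exactly the "locally allocate memory" behaviour discussed above, with no generated allocation code in Cython itself.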
> Since we already agreed to get a full-featured SIMD type, what do you think
> about dropping the dynamic memory handling part for plain C arrays, and
> instead just supporting slicing operations on any C pointer type, and
> letting them return a Python list? (or a byte string in the special case of
> a char*)
>
I have programmed a lot in Fortran 95, where this kind of slicing is
available. I think slicing a pointer (if it can be done fast, without
Python overhead) has its merits. But I think returning Python lists is
generally too expensive. I'd rather have pointer slicing return a
Py_buffer struct referencing the memory. If it is assigned to a list
variable, then you get a list constructed. But if it is assigned to
another sliced pointer or buffer, it should not be more expensive than a
memcpy. It all depends on a full-featured SIMD type.
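At the C level, what "not more expensive than a memcpy" could look like is sketched below; int_slice is a hypothetical struct loosely modelled on the pointer-plus-length core of Py_buffer, not the actual CPython struct:

```c
#include <string.h>

/* Hypothetical buffer view: a pointer plus a length, no ownership.
 * Loosely modelled on the core of Py_buffer for illustration only. */
typedef struct {
    int    *buf;
    size_t  len;
} int_slice;

/* Slicing a pointer: O(1), just pointer arithmetic, no allocation
 * and no Python objects involved. */
static int_slice slice(int *p, size_t start, size_t stop)
{
    int_slice s = { p + start, stop - start };
    return s;
}

/* Assigning one slice's contents to another: a plain memcpy. */
static void slice_copy(int_slice dst, int_slice src)
{
    memcpy(dst.buf, src.buf, src.len * sizeof(int));
}
```

Converting such a view to a Python list would then be the one place that pays per-element cost, and only when the user actually asks for a list.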
Sturla Molden
_______________________________________________
Cython-dev mailing list
[email protected]
http://codespeak.net/mailman/listinfo/cython-dev