On Sat, May 14, 2011 at 9:43 AM, Vinzent Steinberg
<[email protected]> wrote:
> On 13 Mai, 01:54, SherjilOzair <[email protected]> wrote:
>> Did I leave anything Vinzent ?
>
> I think we should also talk about the future of the matrix module (and
> a possible refactoring), so that we have a clear vision of the design
> when the official coding starts. For example, I'm not really happy
> with the current code (everything in one file in one class) and the
> current interface (A.LUsolve(b) for example).
>
> Ronan wrote:
>> By turning naturally OO code into procedural, you lose a lot in readability
>> and developer productivity.
>
> Well, under the hood (on the lowest level) you have some (specialized)
> algorithm that operates on some data structure. I fail to see how this
> is naturally OO. Regarding abstracting over the different types of data
> representation, I agree that you have a point. But isn't this compatible
> with such a model? You can have your OO approach on a higher level; on
> the lowest level you have specialized routines (you called it "private
> helpers and specialised manipulation methods") that cannot be
> abstract, thus defeating the advantage of OO IMHO. Do you think it
> would be better to have them inside the class rather than separated?
>
>> If that wasn't the case, we'd all be using
>> only C - after all, it can do everything Python does, can't it?
>
> You could just as well say that BF (Brainfuck) does everything that Python does.
>
>> Making high-level decisions based on micro-optimisation concerns is very
>> much "premature optimisation", I think.
>
> I think it is not only about micro-optimization, but also about a
> clear code separation. But let's hear more opinions about this.

Micro-optimization can be a worthy goal for the lowest-level code that
gets called a lot (so-called "optimizing the inner loop").  For
example, I rewrote the very low-level dmp_zero_p() function in the
polys so that it was non-recursive.  Each call of this function, even
for a rather complicated object, took only about 23 us, and my
non-recursive version took 14 us.  But this function was being called
thousands of times from higher-level functions, and as a result, the
fix shaved a whole second off the time needed to evaluate
integrate(x**2*exp(x)*sin(x), x)
(see the commit message of 4cb7aa93601c16d26767858175d7b1b4c04e9fca).
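To make the trade-off concrete, here is a hedged sketch of the two versions. It is an illustrative reconstruction, not the exact SymPy source: dmp_zero_p() tests whether a dense multivariate polynomial ("dmp", a structure of nested lists with nesting depth u+1) is the zero polynomial, and the loop version avoids Python's per-call overhead at each nesting level.

```python
def dmp_zero_p_recursive(f, u):
    """Recursive version: peel off one level of nesting per call."""
    if not u:
        return not f  # level 0: a dense univariate poly is zero iff the list is empty
    return len(f) == 1 and dmp_zero_p_recursive(f[0], u - 1)

def dmp_zero_p(f, u):
    """Non-recursive version: walk down the nesting in a loop instead,
    saving a Python function call per level."""
    while u:
        if len(f) != 1:
            return False
        f = f[0]
        u -= 1
    return not f

# The zero dmp in u+1 variables: u=0 -> [], u=1 -> [[]], u=2 -> [[[]]]
for u, zero in [(0, []), (1, [[]]), (2, [[[]]])]:
    assert dmp_zero_p(zero, u) and dmp_zero_p_recursive(zero, u)
assert not dmp_zero_p([[[1]], [[]]], 2)  # a non-zero bivariate poly
```

Saving a few microseconds per call only matters because the function sits in the inner loop; the same change to a rarely called function would be noise.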

>
>> The lowest level is the implementation, not the interface. The
>> interface, even if it's low-level, should be designed independently.
>
> What do you mean? The different parts of the implementation obviously
> have to interact with each other.
>
>> I don't think lessons learned from writing kernel code in C apply
>> directly to high-level Python code.
>
> (IIRC his point was more general than only about kernel development.)
>
>> If you use a JIT (pypy) or static compilation (Cython), it shouldn't
>> make any difference. If you factor in better code quality and
>> productivity, OO code will actually be faster.
>
> You assume that OO code is always better than procedural code. I don't
> think everyone agrees.
>
> Vinzent
>

Yeah. I think PyPy is designed primarily to optimize procedural code.
SymPy actually runs slower in PyPy than in CPython because it does so
many things that are difficult to optimize (dynamic things, some of
which are inherently OO, like getattr calls).
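As an illustration of the kind of dynamic dispatch meant here (a hypothetical sketch, not actual SymPy code): when the attribute name is built at runtime, a JIT cannot easily resolve the call site to one fixed method.

```python
class Expr:
    def eval_hook(self, hook):
        # Hypothetical helper: look up a method named '_eval_<hook>'
        # dynamically. The attribute name is computed at runtime, so a
        # tracing JIT has a hard time specializing this call site.
        method = getattr(self, '_eval_' + hook, None)
        return method() if method is not None else None

class Sin(Expr):
    def _eval_is_real(self):
        return True

print(Sin().eval_hook('is_real'))   # True
print(Sin().eval_hook('is_zero'))   # None (no such hook defined)
```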

Aaron Meurer

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/sympy?hl=en.