Well, I'm sure that's true too. But when the people who create
good maintainable code find a need to optimize it, usually
maintainability is reduced. Example: a Mandelbrot program in RB
was first written using a nice neat complex number class.
Profiling and optimization led to eliminating that class and
inlining all the calculations, resulting in a dramatic speedup but
more complex, less general, and harder-to-maintain code.
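To illustrate the trade-off being described (this is a hypothetical C++ sketch, not the original RB code): the same Mandelbrot escape-time loop written first with a neat complex-number class, then with the calculations inlined on plain doubles, the way the optimized version ended up.

```cpp
#include <cassert>

// Version 1: a nice neat complex-number class. Readable and general,
// but every step builds temporaries through operator calls.
struct Complex {
    double re, im;
    Complex operator*(const Complex& o) const {
        return {re * o.re - im * o.im, re * o.im + im * o.re};
    }
    Complex operator+(const Complex& o) const {
        return {re + o.re, im + o.im};
    }
    double magSquared() const { return re * re + im * im; }
};

int iterateWithClass(double cr, double ci, int maxIter) {
    Complex c{cr, ci}, z{0.0, 0.0};
    int n = 0;
    while (n < maxIter && z.magSquared() <= 4.0) {
        z = z * z + c;   // temporaries created on every pass
        ++n;
    }
    return n;
}

// Version 2: the class eliminated and the arithmetic inlined.
// Faster, but more cryptic and tied to this one use.
int iterateInlined(double cr, double ci, int maxIter) {
    double zr = 0.0, zi = 0.0;
    int n = 0;
    while (n < maxIter) {
        double zr2 = zr * zr;   // stored intermediates, reused twice below
        double zi2 = zi * zi;
        if (zr2 + zi2 > 4.0) break;
        zi = 2.0 * zr * zi + ci;   // uses the old zr, so order matters
        zr = zr2 - zi2 + cr;
        ++n;
    }
    return n;
}
```

Both versions return identical iteration counts; the second just avoids the object traffic, which is exactly the maintainability-for-speed swap described above.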
I remember working on that Mandelbrot and speeding it up.
I remember eliminating a lot of code, by storing values instead of
repeating calculations. So I reduced the code size and sped it up :)
You say that "we all write simple code at first", but even my
experience of RS projects shows this not to be true. You tend to write
a lot of invariant code: code that basically repeats a calculation
already done on a previous line. By storing those intermediate values
you'd get faster speeds.
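A minimal sketch of what I mean by storing intermediates (hypothetical example, not from any real project): the first version recomputes the same invariant sub-expression on every loop pass, the second computes it once and reuses it.

```cpp
#include <cmath>
#include <vector>

// Repeats an invariant calculation on every pass through the loop.
double scaleSlow(const std::vector<double>& v, double w, double h) {
    double total = 0.0;
    for (size_t i = 0; i < v.size(); ++i)
        total += v[i] / std::sqrt(w * w + h * h);  // recomputed each time
    return total;
}

// Stores the intermediate once; same result, less work per pass.
double scaleFast(const std::vector<double>& v, double w, double h) {
    const double diag = std::sqrt(w * w + h * h);  // computed once
    double total = 0.0;
    for (size_t i = 0; i < v.size(); ++i)
        total += v[i] / diag;
    return total;
}
```

A decent optimizer may hoist this for you, but doing it by hand both documents the intent and often shrinks the code, which is the "reduced size and sped it up" effect mentioned above.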
I don't deny that a nice neat complex number class looks simpler.
It's just that I wouldn't do it in RB due to object management
overhead. It's a decision so instinctive that I normally wouldn't even
consider it necessary to explain or justify; I'd just do it. I have a
feel for which things are fast and which are slow, usually anyhow.
In fact, I'd be very wary of doing it with classes even in C++, even
if every method were inlined and there were no memory allocation/
deallocation overhead (stack-based allocation). You can't trust the
compiler to optimise things for you properly. Well, unless you know
your compiler well.
If you've seen some good tests of your favourite C++ compiler, and it
proves to handle things like complex numbers as a class just as fast
as doing it with two floats, then it's OK to use the class.
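Here's roughly the kind of test I have in mind (a hypothetical harness, not from any compiler's test suite): push the same arithmetic through a small struct with an inline helper and through bare doubles, time both, and check the results match. If the timings come out equal under your compiler's optimizer, the class costs you nothing.

```cpp
#include <chrono>

// A small complex struct with an inline multiply, standing in for the
// class-based style under test.
struct Complex {
    double re, im;
};

inline Complex mul(Complex a, Complex b) {
    return {a.re * b.re - a.im * b.im, a.re * b.im + a.im * b.re};
}

// Class-based loop: repeated complex multiplies, accumulated.
double sumWithClass(int n) {
    Complex z{1.0001, 0.0002};
    Complex acc{0.0, 0.0};
    for (int i = 0; i < n; ++i) {
        z = mul(z, Complex{0.9999, 0.0001});
        acc.re += z.re;
        acc.im += z.im;
    }
    return acc.re + acc.im;
}

// Two-floats loop: identical arithmetic, no struct.
double sumWithFloats(int n) {
    double zr = 1.0001, zi = 0.0002;
    double ar = 0.0, ai = 0.0;
    for (int i = 0; i < n; ++i) {
        double nr = zr * 0.9999 - zi * 0.0001;
        double ni = zr * 0.0001 + zi * 0.9999;
        zr = nr; zi = ni;
        ar += zr; ai += zi;
    }
    return ar + ai;
}

// Times one of the loops and reports its result through *out,
// so the two variants can be checked for identical answers.
long long timeMs(double (*f)(int), int n, double* out) {
    auto t0 = std::chrono::steady_clock::now();
    *out = f(n);
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0)
        .count();
}
```

Compile with optimisation on (e.g. -O2) and compare the two timings; comparing the returned sums guards against the optimizer deleting the loops or the two versions silently computing different things.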
--
http://elfdata.com/plugin/
_______________________________________________
Unsubscribe or switch delivery mode:
<http://www.realsoftware.com/support/listmanager/>
Search the archives of this list here:
<http://support.realsoftware.com/listarchives/lists.html>