True to an extent, but you may have missed the author's point - by
coding complex bit-twiddling routines, you often defeat the optimizer
in the JIT (and elsewhere), possibly preventing better performance
later as the JIT continually improves (see the quick sketch below).
Worse, the code is being changed in a very haphazard fashion. Take
the commit work, for example - there is already a deprecated
interface (IndexCommitPoint) in a brand new feature, which certainly
smells of poor planning/design to me. That change wasn't made
strictly for performance reasons, but I think you get the idea.
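To make the JIT point concrete, here is a quick sketch (purely
illustrative - not taken from any actual Lucene patch): a hand-rolled
popcount versus Integer.bitCount(). The library call is a HotSpot
intrinsic, so the VM can compile it down to a single hardware
instruction where one exists and keep improving it over time, while
the "clever" version is just opaque arithmetic to the optimizer.

// Illustrative only - not from any Lucene patch.
public class BitCountExample {

    // "Clever" hand-rolled SWAR popcount: correct, but the JIT sees only
    // ordinary arithmetic and cannot substitute a hardware instruction.
    static int handRolledBitCount(int x) {
        x = x - ((x >>> 1) & 0x55555555);
        x = (x & 0x33333333) + ((x >>> 2) & 0x33333333);
        x = (x + (x >>> 4)) & 0x0f0f0f0f;
        return (x * 0x01010101) >>> 24;
    }

    // Plain version: Integer.bitCount() is a JIT intrinsic, so this code
    // gets faster as the VM improves without ever being touched.
    static int plainBitCount(int x) {
        return Integer.bitCount(x);
    }

    public static void main(String[] args) {
        int v = 0xDEADBEEF;
        // Both print 24 - same result, very different optimizer visibility.
        System.out.println(handRolledBitCount(v) + " == " + plainBitCount(v));
    }
}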
I am not against using more efficient data structures and algorithms
- it just seems that many of these changes are justified only by
micro-benchmarks and would have little benefit in a real environment,
yet they have been added and the code changed for the worse (except
in the rare case where the new code is actually simpler).
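For what it's worth, here is the kind of naive micro-benchmark I mean
(a made-up sketch, not an actual Lucene benchmark). The number it
prints is easily distorted by JIT warm-up and dead-code elimination,
so it says very little about cost in a real index:

// Made-up example of a misleading micro-benchmark - not from Lucene.
public class NaiveBenchmark {

    // Stand-in for whatever routine is being "optimized".
    static int work(int x) {
        return Integer.reverse(x) ^ (x >>> 3);
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        for (int i = 0; i < 10000000; i++) {
            work(i);   // result discarded, so the JIT may eliminate the loop entirely
        }
        long elapsedMs = (System.nanoTime() - start) / 1000000;
        // Part of this loop ran in the interpreter before the JIT kicked in.
        System.out.println("took " + elapsedMs + " ms");
        // A meaningful measurement needs warm-up runs, a consumed result
        // (e.g. accumulate into a sum and print it), several repetitions,
        // and realistic data and access patterns.
    }
}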
It also seems that many more obscure index-corruption-type bugs have
crept in as the pursuit of performance has taken place, whereas the
1.9 and earlier code was very stable.
If you could gain 10% performance by making the code 50% more complex
in order to run on a 1.5 JVM, when the existing code runs 30% faster
on a 1.6 JVM (on the same hardware), do you do it? Many would argue
that "it depends on your user base demographics". I would counter
that NEVER is the correct choice, as eventually your users will move
to 1.6, so why live with the bad code? Being able to plug in the
replacement at runtime still doesn't solve the issue, as you have
twice as much code to continually debug and test.
On Jul 23, 2008, at 3:23 PM, Yonik Seeley wrote:
Making well-reasoned arguments about specific patches would be
helpful.
Also, the complexity vs. speed trade-offs are different for a core
library like Lucene, where performance is one of the primary features.
-Yonik
On Wed, Jul 23, 2008 at 4:01 PM, robert engels
<[EMAIL PROTECTED]> wrote:
I hope this doesn't offend anyone, but I think this is an
excellent article
that the Lucene development team might find helpful.
I have often been dismayed at complex code being written to achieve
"negligible" performance improvements. Most often, a micro-benchmark
is used to justify the change.
Worse is the effort spent putting hacks into Lucene to work around
what are clearly JVM bugs. I think it would be a far better use of
resources to lobby Sun and others through the appropriate channels to
get the underlying issues corrected.
I know that there have been many performance improvements in Lucene
of late, but almost all of these have been algorithmic changes - not
obscure bit-twiddling...
Anyway, it is at
http://java.sun.com/developer/technicalArticles/Interviews/community/pepperdine_qa.html
Robert