Oh yes, I agree with you completely. I was really referring to how benchmarks are being used as marketing tools and published to discredit other projects. I also believe there are jewels at java.net. And you read me right: I'm no fan of SUN nor its "open source" efforts.
<OT> Back in the day when Bill Joy and Scott McNealy were at the helm I had a profound sense of respect for SUN. I actually wanted to become an engineer there. Now, IMO, they're a completely different beast, driven by marketing rather than engineering principles. I feel they resort to base practices that show a different character than the noble SUN I was used to. It's sad to know that the SUN many of us respected and looked up to has long since died. </OT>

Regarding benchmarks: they are great for internal metrics and for shedding light on differences in architecture that could produce more efficient software. I'm a big fan of competing against our own releases - meaning benchmarking a baseline and watching the performance progression of the software as it evolves over time (a rough sketch of what I mean is at the bottom of this mail). Testing other frameworks is also good for showing how different scenarios are handled better by different architectures: I agree that we can learn a lot from these tests. I just don't want to use metrics to put down other projects. It's all about how you use the metrics, which I think was the point of my last post. This is perhaps why I am a bit disgusted with these tactics, which are not in line with open source etiquette but rather the mark of commercially driven, marketing oriented OSS efforts.

Alex

On 5/24/07, Adam Fisk <[EMAIL PROTECTED]> wrote:
I agree on the tendency to manipulate benchmarks, but that doesn't mean benchmarks aren't a useful tool. How else can we evaluate performance?

I guess I'm most curious about what the two projects might be able to learn from each other. I would suspect MINA's APIs are significantly easier to use than Grizzly's, for example, and it wouldn't surprise me at all if Sun's benchmarks were somewhat accurate. I hate Sun's java.net projects as much as the next guy, but that doesn't mean there isn't an occasional jewel in there. It would at least be worth running independent tests. If the differences are even close to the claims, it would make a ton of sense to just copy their ideas. No need for too much pride on either side! It seems they've put a ton of work into rigorously analyzing the performance tradeoffs of different design decisions, and it might make sense to take advantage of that. If their benchmarks are off and MINA performs better, then they should go ahead and copy MINA. That's all assuming the performance tweaks don't make the existing APIs unworkable.

-Adam

On 5/24/07, Alex Karasulu <[EMAIL PROTECTED]> wrote:
>
> On 5/24/07, Mladen Turk <[EMAIL PROTECTED]> wrote:
> >
> > Adam Fisk wrote:
> > > The slides were just posted from this Java One session claiming Grizzly
> > > blows MINA away performance-wise, and I'm just curious as to people's
> > > views on it. They present some interesting ideas about optimizing
> > > selector threading and ByteBuffer use.
> > >
> > > http://developers.sun.com/learning/javaoneonline/j1sessn.jsp?sessn=TS-2992&yr=2007&track=5
> > >
> >
> > I love slide 20!
> > JFA finally admitted that Tomcat's APR-NIO is faster than the JDK one ;)
> > However, the last time I did benchmarks the difference was much more than 10%.
> >
> > > Maybe someone could comment on the performance improvements in MINA
> > > 2.0?
> >
> > He probably compared MINA's Serial IO, and that is not usable
> > for production (yet). I wonder how it would look with a real
> > async HTTP server.
> > Nevertheless, benchmarks are like assholes. Everyone has one.
>
> Exactly!
>
> Incidentally, SUN has been trying to attack several projects via the
> performance angle for some time now. Just recently I received a cease and
> desist letter from them when I compiled some performance metrics. The point
> behind it was that we were not correctly configuring their products. I guess
> they just want to make sure things are set up to their advantage. That's
> what all these metrics revolve around, and if you ask me they're not worth
> a damn. There are a million ways to make one product perform better than
> another depending on configuration, environment and the application.
> However, are raw performance metrics as important as a good flexible design?
>
> Alex
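
For readers unfamiliar with the "selector threading and ByteBuffer use" the quoted slides touch on, here is a minimal, generic sketch of that kind of NIO machinery: one dedicated selector thread servicing all channels and reusing a single direct ByteBuffer for reads. It is only an illustration of the moving parts such benchmarks exercise; it is not the design from the Grizzly slides nor MINA's internals, and the class name, port and buffer size are arbitrary.

// Illustrative only: a single dedicated selector thread that reuses one
// direct ByteBuffer for all reads. Not the Grizzly or MINA design, just
// the generic machinery that selector/ByteBuffer benchmarks exercise.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class EchoSelectorLoop implements Runnable {

    private final Selector selector;
    private final ServerSocketChannel server;
    // One reusable direct buffer per selector thread avoids allocating on every read.
    private final ByteBuffer readBuffer = ByteBuffer.allocateDirect(8192);

    public EchoSelectorLoop(int port) throws IOException {
        selector = Selector.open();
        server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.socket().bind(new InetSocketAddress(port));
        server.register(selector, SelectionKey.OP_ACCEPT);
    }

    public void run() {
        try {
            while (selector.isOpen()) {
                selector.select();
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (!key.isValid()) {
                        continue;
                    }
                    if (key.isAcceptable()) {
                        // Register new connections with the same selector thread.
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        readBuffer.clear();
                        int n = client.read(readBuffer);
                        if (n < 0) {
                            key.cancel();
                            client.close();
                        } else {
                            readBuffer.flip();
                            client.write(readBuffer); // naive echo; real servers queue partial writes
                        }
                    }
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws IOException {
        new Thread(new EchoSelectorLoop(8080), "selector-loop").start();
    }
}

How frameworks split work across selector threads and whether they pool or reuse buffers like this is exactly where the architectural differences mentioned above tend to show up in benchmarks.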

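A rough, purely hypothetical sketch of the "compete against our own releases" idea Alex describes above: time a fixed workload, record the first result as a baseline, and flag any later run that regresses past a tolerance. The synthetic workload, the baseline.properties file name and the 10% threshold are all made-up placeholders, not anything taken from MINA or Grizzly.

// Hypothetical sketch of regression benchmarking against our own baseline.
// The workload, the baseline.properties file and the 10% tolerance are placeholders.
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Properties;

public class BaselineBenchmark {

    /** Placeholder workload; a real run would drive the framework code paths under test. */
    static long runWorkload() {
        long checksum = 0;
        byte[] payload = new byte[1024];
        for (int i = 0; i < 200000; i++) {
            for (int j = 0; j < payload.length; j++) {
                payload[j] = (byte) (i + j);
                checksum += payload[j];
            }
        }
        return checksum;
    }

    public static void main(String[] args) throws IOException {
        runWorkload(); // warm-up pass so the JIT compiles the hot path before timing

        long start = System.nanoTime();
        long checksum = runWorkload();
        double millis = (System.nanoTime() - start) / 1000000.0;

        Properties baseline = new Properties();
        File file = new File("baseline.properties");
        if (file.exists()) {
            FileInputStream in = new FileInputStream(file);
            try { baseline.load(in); } finally { in.close(); }
            double baselineMillis = Double.parseDouble(baseline.getProperty("workload.millis"));
            double ratio = millis / baselineMillis;
            System.out.printf("checksum=%d current=%.1fms baseline=%.1fms ratio=%.2f%n",
                    checksum, millis, baselineMillis, ratio);
            if (ratio > 1.10) {
                System.out.println("WARNING: more than 10% slower than the recorded baseline");
            }
        } else {
            // First run: record a baseline for future releases to compete against.
            baseline.setProperty("workload.millis", Double.toString(millis));
            FileOutputStream out = new FileOutputStream(file);
            try { baseline.store(out, "performance baseline"); } finally { out.close(); }
            System.out.printf("checksum=%d recorded new baseline: %.1fms%n", checksum, millis);
        }
    }
}

A real harness would run the actual framework under load and take multiple samples, but the shape is the same: each release is measured against the last recorded baseline rather than against a competitor.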