Hmm... I don't think you're reading the benchmarks correctly.  Slide 19
shows an improvement of over 50% with Grizzly.

I think the MINA coders should feel very proud too.  I love the framework
and have no plans to stop using it.

-Adam


On 5/24/07, John Preston <[EMAIL PROTECTED]> wrote:

I think that the MINA coders should feel very proud. If I read the
benchmarks correctly, then we are talking about a 10% difference, and
that's within the margin of error of almost anything. Considering the
issues mentioned previously about tuning for HTTP, MINA and Grizzly are
probably equals, at a fraction of the cost.

John

On 5/24/07, Adam Fisk <[EMAIL PROTECTED]> wrote:
> I agree on the tendency to manipulate benchmarks, but that doesn't mean
> benchmarks aren't a useful tool.  How else can we evaluate performance?  I
> guess I'm most curious about what the two projects might be able to learn
> from each other.  I would suspect MINA's APIs are significantly easier to
> use than Grizzly's, for example, and it wouldn't surprise me at all if
> Sun's benchmarks were somewhat accurate.  I hate Sun's java.net projects
> as much as the next guy, but that doesn't mean there's not an occasional
> jewel in there.
>
> It would at least be worth running independent tests.  If the differences
> are even close to the claims, it would make a ton of sense to just copy
> their ideas.  No need for too much pride on either side!  Just seems like
> they've put a ton of work into rigorously analyzing the performance
> tradeoffs of different design decisions, and it might make sense to take
> advantage of that.  If their benchmarks are off and MINA performs better,
> then they should go ahead and copy MINA.
>
> That's all assuming the performance tweaks don't make the existing APIs
> unworkable.
>
> -Adam
>
>
> On 5/24/07, Alex Karasulu <[EMAIL PROTECTED]> wrote:
> >
> > On 5/24/07, Mladen Turk <[EMAIL PROTECTED]> wrote:
> > >
> > > Adam Fisk wrote:
> > > > The slides were just posted from this Java One session claiming
> > > > Grizzly blows MINA away performance-wise, and I'm just curious as
> > > > to people's views on it.  They present some interesting ideas about
> > > > optimizing selector threading and ByteBuffer use.
> > > >
> > > >
> > > > http://developers.sun.com/learning/javaoneonline/j1sessn.jsp?sessn=TS-2992&yr=2007&track=5
> > > >
> > >
> > > I love slide 20!
> > > JFA finally admitted that Tomcat's APR-NIO is faster than the JDK one ;)
> > > However, the last time I ran benchmarks the difference was much
> > > greater than 10%.
> > >
> > > >
> > > > Maybe someone could comment on the performance improvements in
> > > > MINA 2.0?
> > >
> > > He probably compared MINA's serial IO, and that is not usable
> > > for production (yet).  I wonder how it would look with a real
> > > async HTTP server.
> > > Nevertheless, benchmarks are like assholes.  Everyone has one.
> >
> >
> > Exactly!
> >
> > Incidentally, Sun has been trying to attack several projects via the
> > performance angle for some time now.  Just recently I received a cease
> > and desist letter from them when I compiled some performance metrics.
> > The point behind it was that we were not correctly configuring their
> > products.  I guess they just want to make sure things are set up to
> > their advantage.  That's what all these metrics revolve around, and if
> > you ask me they're not worth a damn.  There are a million ways to make
> > one product perform better than another depending on configuration,
> > environment, and the application.  However, are raw performance metrics
> > as important as a good flexible design?
> >
> > Alex
> >
>