Re: Proposal draft for Tomcat 5.0 : benchmarks

2002-06-24 Thread costinm


Ok, Christopher - we're not stupid engineers in need of
an intro course on performance. 

I think it's obvious that Tomcat 3.3 has 'improved performance'
over 3.2, and 4.1 has 'improved performance' over 4.0.
I don't think anyone can argue with that, even though we don't 
have any test suite to prove it. 

It is impossible (IMHO) to put a real number on 
that (or on 'cleaner code' or 'better community').

There are obviously a number of things we can measure 
- like overhead per request or JSP, startup time, how
long it takes to run the full Watchdog suite, how much it
takes to do a forward. And I can assure you that everyone
working seriously on performance is running those tests
and evaluating performance periodically. 

And we're not doing blind optimizations here - Coyote
is already there and shows what it can do.

Please stop this line of argument - I personally feel
you treat me like an idiot who doesn't know anything
about this and has to be reminded of the basics.


Costin



On Sun, 23 Jun 2002, Christopher K. St. John wrote:

 
  Good proposal goals provide a way to test if the goal
 has been achieved and a way to argue if the goal is
 worthwhile. "Improve performance" as a goal is both
 untestable and impossible to argue with, so it's a
 badly stated goal. 
 
 
 Remy Maucherat wrote:
 
  To evaluate code, I strongly recommend using a profiling
  tool instead of benchmarks, as it also helps finding places
  where your code is inefficient.
 
 
  Profiling tools and internal benchmarks answer different
 questions. You don't use a profiling tool instead of a
 benchmark, you use it with a benchmark.
 
  Talking about performance improvements without also
 talking about workload is pointless. Setting up some
 standard internal workloads has several advantages:
 
  #1) It forces you to consider what's important and
  what's not. In that sense, it's a very precise
  communication tool as much as anything else.
 
  It provides a way to nail down your goals. Saying
  that our goal is "to improve performance" is like
  saying our goal is "to make the software better."
  Sure, ok, who could vote against that?
 
  Saying something like "we need a 50% improvement
  in request processing time for servlets that produce
  large (50k) pages" provides a testable goal to
  shoot for (and an opportunity for other developers
  to say "wait, that's not a good goal to shoot for").
 
  #2) It provides a common framework. If you claim a
  25% performance improvement, you've told me
  nothing. If you claim a 25% improvement on the
  helloworld servlet performance test, then I
  know what you mean and can try it out for 
  myself. It's very easy to fool yourself, and a
  standard workload acts as a BS detector.
 
  #3) It saves time in the long run, since not everyone
  has to come up with their own mini-internal
  performance test-suite. In fact, you generally
  start something like this by posting your personal
  performance testing setup so others can use it,
  and it grows from there. It doesn't have to be
  completely formal, it just needs to be acknowledged
  as an issue.
 
 
 


--
To unsubscribe, e-mail:   mailto:[EMAIL PROTECTED]
For additional commands, e-mail: mailto:[EMAIL PROTECTED]




Re: Proposal draft for Tomcat 5.0 : benchmarks

2002-06-24 Thread Pier Fumagalli

Christopher K. St. John [EMAIL PROTECTED] wrote:

 Profiling tools and internal benchmarks answer different
 questions. You don't use a profiling tool instead of a
 benchmark, you use it with a benchmark.

+1 :)

Pier






Re: Proposal draft for Tomcat 5.0 : benchmarks

2002-06-24 Thread Pier Fumagalli

[EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 I think it's obvious that tomcat3.3 has 'improved performance'
 over 3.2, and 4.1 has 'improved performance' over 4.0.
 I don't think anyone can argue with that, and we don't
 have any test suite to prove it.

Not _that_ obvious... Do we have _NUMBERS_ ??? And not just numbers in terms
of reqs/sec, but also in comparison to a long-run test with % of CPU time
used, and IO usage (uptime)...

 It is impossible ( IMHO ) to put some real number on
 that ( or on 'cleaner code' or 'better community' ).

How about numbers in terms of reqs/sec/uptime?

Pier






RE: Proposal draft for Tomcat 5.0 : benchmarks

2002-06-24 Thread GOMEZ Henri

 Not _that_ obvious... Do we have _NUMBERS_ ??? And not just numbers in
 terms of reqs/sec, but also in comparison to a long-run test with % of
 CPU time used, and IO usage (uptime)...

  It is impossible ( IMHO ) to put some real number on
  that ( or on 'cleaner code' or 'better community' ).

 How about numbers in terms of reqs/sec/uptime

Take a look at my previous post on tomcat-dev about mod_jk 1.2.0
issue and release.

I ran extensive tests on Tomcat 3.3.1/4.0.4, HTTP connectors,
and mod_jk 1.2.0 with Apache 1.3/2.0.

BTW, I can say that I launch nightly tests involving 10 million
calls to the HelloWorldExample servlet on both Tomcats and never got
a single error.

So I think that both Tomcats should be considered stable.

Scalability could be handled by advanced load-balancing software,
on the web server or in a proxy relay.

Such advanced LB software should handle the firewall case,
recovery (session data could be SPRAYED ;), and the number of
requests actually handled by each Tomcat (and not just a static
power index per Tomcat).

The only problem today is speed: they are still slower
(by a factor of 2 or 3) than some COMMERCIAL products
like Resin.






Re: Proposal draft for Tomcat 5.0 : benchmarks

2002-06-24 Thread Pier Fumagalli

GOMEZ Henri [EMAIL PROTECTED] wrote:

 I make extensive tests on Tomcat 3.3.1/4.0.4, http connectors
 and mod_jk 1.2.0/Apache 1.3/2.0.
 
 BTW, I could say that I launch nightly tests involving 10 million
 calls to HelloWorldExample servlets on both Tomcats and never got
 a single error.
 
 So I think that both Tomcats should be considered stable.

My officemates thank you for the five minutes of hysterical laughter you
gave us on this bright, radiant, sunny morning in London... Much appreciated..

If your site is built up by HelloWorldExample servlets, then, ok, I'm
going to shut up about TC's reliability in production environments..

(note: http://www.vnunet.com/ ... not one of the pages you're going to
click over there is static...)

Pier






RE: Proposal draft for Tomcat 5.0 : benchmarks

2002-06-24 Thread GOMEZ Henri

  So I think that both Tomcats should be considered stable.

 My officemates thank you for the 5 minutes of hysterical laughter you
 gave to us in this bright radiant sunny morning in London... Much
 appreciated..

It was a pleasure ;)

 If your site is built up by HelloWorldExample servlets, then, ok, I'm
 going to shut up about TC's reliability in production environments..

Ok, I must admit, we're also using snoop.jsp and dates.jsp ;)

 (note: http://www.vnunet.com/ not one of the pages you're going to click
 over there is static...)

More seriously, our webapps do much more than just 'HelloWorld'.

We're using Tomcat 3.3.1 on about four OS/400 and six Linux 
servers and never had any problems with them.

Time spent in Tomcat is about 10% of the overall application
time, which deals with OS/400-specific stuff and SQL backends.

Yes, on our production site Tomcat is stable, and we had to
run extensive tests to prove stability and provide OS load figures
before the production staff allowed the use of Tomcat on our boxes.






Re: Proposal draft for Tomcat 5.0 : benchmarks

2002-06-24 Thread Pier Fumagalli

GOMEZ Henri [EMAIL PROTECTED] wrote:

 If your site is built up by HelloWorldExample servlets, then, ok, I'm
 going to shut up about TC's reliability in production environments..
 
 Ok, I must admin, we're also using snoop.jsp and dates.jsp ;)

That's so great...

 (note: http://www.vnunet.com/ not one of the pages you're
 going to click
 over there is static...)
 
 More seriously, our webapps do much more than just 'HelloWorld'.
 
 We're using Tomcat 3.3.1 on about four OS/400, and six Linux
 servers and never got any problems with them.
 
 Time spent in Tomcat is about 10% of the overall application
 time, which deals with OS/400-specific stuff and SQL backends.
 
 Yes, in our production site, Tomcat is stable, and we've got to
 make extensive tests to prove stability and give os load indices
 before production staff allow the use of tomcat on our boxes.

Ok... Now you said (correct me if I'm wrong) that your site has 5000/1
servlet requests per day, right?

And now you say that to handle that load, you're using FOUR OS/400 systems
and SIX Linux systems...

Our site runs on one machine... A two-way UltraSparc-II at 440 MHz, with
two gigs o' ram... And we do some 1000 times more requests... Mindblowing...

I just found out that we (my work buddies and me, here @ the office) are not
software engineers, but magicians...

Pier (habra cadabra alacazam... pu)






RE: Proposal draft for Tomcat 5.0 : benchmarks

2002-06-24 Thread GOMEZ Henri

 Ok... Now you said (correct me if I'm wrong), that your site has 5000/1
 servlets request per day, right?

My setup is for more than ONE client: many clients, many JVMs,
many Tomcats, but with an average of 5000/1 reqs per client
site per day.

 And now you say that to handle that load, you're using FOUR OS/400
 systems and SIX Linux systems...

I said that I use many Tomcats on many different systems,
without any problems. 

And you know what, I could even use mod_jk load-balancing 
support to get scalability.

 Our site runs on one machine... A two way UltraSparc-II at 440 Mhz,
 with two gigs o' ram... And we do some 1000 times more requests...
 Mindblowing...

That clearly depends on the web application you have,
and our application does many things in servlets/JSP/XSL
as well as on the OS/400 side (I couldn't tell you anything
more without Apache signing an NDA).






Re: Proposal draft for Tomcat 5.0 : benchmarks

2002-06-24 Thread PSA

I'm sorry for jumping in, but I couldn't help throwing in some 
performance notes from one of our own servers --

Ultra10, Solaris 7, 512 Meg RAM, running Tomcat 3.3.1 with Apache 1.3.x 
with ssl and a huge mod_rewrite section.  Also running a MySQL database 
and a RealServer with 60 simultaneous live streams (always maxed out).

We handle about 120,000 requests per day to the servlet engine and a 
little over half of those involve a minimum of two queries to the MySQL 
server, two filesystem accesses, and a Xalan/Xerces XSLT transformation.

Almost all servlet traffic occurs during business hours, and the load on 
the machine rarely goes over 0.15.

We're moving up to a 280R and moving off the RealServer.  Our load 
testing indicates that we should be able to handle one million servlet 
requests per day, via SSL, complete with XSLT on each request as well as 
database accesses, without much trouble on the new box.
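For perspective, converting those daily totals into request rates is a quick back-of-envelope calculation (the 8-hour business day below is my assumed figure for illustration, not something Paul stated):

```python
def reqs_per_sec(reqs_per_day, active_hours=24):
    """Convert a daily request count to an average request rate."""
    return reqs_per_day / (active_hours * 3600)

# 120,000 requests/day averaged over 24h, versus concentrated in an
# assumed 8-hour business day, versus the projected 1M/day target.
print(f"{reqs_per_sec(120_000):.2f} req/s averaged over 24h")
print(f"{reqs_per_sec(120_000, 8):.2f} req/s over an assumed 8h day")
print(f"{reqs_per_sec(1_000_000, 8):.2f} req/s for 1M/day over 8h")
```

Even the million-per-day projection works out to only a few dozen requests per second at business-hours concentration, which makes the low machine load less surprising.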

Performance has definitely not been a problem.

-Paul






Re: Proposal draft for Tomcat 5.0 : benchmarks

2002-06-23 Thread Remy Maucherat

Christopher K. St. John wrote:
 Bill Barker wrote:
  I agree with Remy that any single benchmark suite isn't going to
  tell you how your particular web-app will perform.

 
 
  Bill,
 
  Actually, I agree, but this isn't what that is :-) 
 
  There's a difference between:
 
  a) Benchmarks for users that compare the performance of
 servers against one another to help with buying
 decisions. These are hard to write, hard to interpret,
 and generally not very useful in predicting the 
 performance of a particular user's web apps.
 
  b) Benchmarks for developers that help answer the
 (surprisingly difficult) questions "did my so-called
 performance enhancement help or hurt?" and "have we
 met our performance goals?" They also help set
 up a framework for talking about performance. For
 example, using the Apache ab utility is a quick and
 easy way to get a very skewed performance estimate.
 If there's something written down, everyone has 
 some clue that there's a skew, and some idea about
 what the skew is.
 
  (b) is much easier than (a), because the point
 is to nail down the vocabulary and set up a common framework
 for measuring progress, not to develop some ultimate
 web-app performance benchmark for users. It's a purely
 internal thing. 
 
  This isn't something I just made up :-) It's sort of
 software engineering conventional wisdom that internal
 benchmarks are necessary if you're going to make performance
 a development goal, and I've used them to good effect on
 several commercial projects.

Strong -1. It is outside the scope of this project to define and develop 
such a benchmark. There are similar things which already exist on the 
market (TPC-W). If you really want to do it, please start something in 
the commons.

To evaluate code, I strongly recommend using a profiling tool instead of 
benchmarks, as it also helps finding places where your code is inefficient.

Remy






Re: Proposal draft for Tomcat 5.0 : benchmarks

2002-06-23 Thread Christopher K. St. John


 Good proposal goals provide a way to test if the goal
has been achieved and a way to argue if the goal is
worthwhile. "Improve performance" as a goal is both
untestable and impossible to argue with, so it's a
badly stated goal. 


Remy Maucherat wrote:

 To evaluate code, I strongly recommend using a profiling
 tool instead of benchmarks, as it also helps finding places
 where your code is inefficient.


 Profiling tools and internal benchmarks answer different
questions. You don't use a profiling tool instead of a
benchmark, you use it with a benchmark.

 Talking about performance improvements without also
talking about workload is pointless. Setting up some
standard internal workloads has several advantages:

 #1) It forces you to consider what's important and
 what's not. In that sense, it's a very precise
 communication tool as much as anything else.

 It provides a way to nail down your goals. Saying
 that our goal is "to improve performance" is like
 saying our goal is "to make the software better."
 Sure, ok, who could vote against that?

 Saying something like "we need a 50% improvement
 in request processing time for servlets that produce
 large (50k) pages" provides a testable goal to
 shoot for (and an opportunity for other developers
 to say "wait, that's not a good goal to shoot for").

 #2) It provides a common framework. If you claim a
 25% performance improvement, you've told me
 nothing. If you claim a 25% improvement on the
 helloworld servlet performance test, then I
 know what you mean and can try it out for 
 myself. It's very easy to fool yourself, and a
 standard workload acts as a BS detector.

 #3) It saves time in the long run, since not everyone
 has to come up with their own mini-internal
 performance test-suite. In fact, you generally
 start something like this by posting your personal
 performance testing setup so others can use it,
 and it grows from there. It doesn't have to be
 completely formal, it just needs to be acknowledged
 as an issue.
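The "standard named workload" idea can be sketched in a few lines. This is a hypothetical harness (the function and workload names are mine, not an existing Tomcat tool); the point is only that a result labeled with a workload name and iteration count is comparable between developers, while a bare "25% faster" is not:

```python
import time

def run_workload(name, handler, iterations=10_000):
    """Run a named workload and report comparable numbers.

    What matters is not the absolute figures but that two developers
    running the same named workload can compare results directly.
    """
    start = time.perf_counter()
    for _ in range(iterations):
        handler()
    elapsed = time.perf_counter() - start
    return {
        "workload": name,
        "iterations": iterations,
        "mean_ms": (elapsed / iterations) * 1000.0,
        "per_sec": iterations / elapsed,
    }

# Two illustrative standard workloads: a trivial response and a
# "large page" (50k) response, mirroring the goals discussed above.
def helloworld():
    return b"Hello, world!"

def large_page():
    return b"x" * 50_000

if __name__ == "__main__":
    for wl, fn in [("helloworld", helloworld), ("large-50k", large_page)]:
        r = run_workload(wl, fn)
        print(f'{r["workload"]}: {r["mean_ms"]:.4f} ms/req, '
              f'{r["per_sec"]:.0f} req/s')
```

A real version would drive a servlet container over HTTP rather than call a local function, but the labeling-and-reporting discipline is the same.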


-- 
Christopher St. John [EMAIL PROTECTED]
DistribuTopia http://www.distributopia.com





Re: Proposal draft for Tomcat 5.0 : benchmarks

2002-06-22 Thread Christopher K. St. John

Bill Barker wrote:
 
 I agree with Remy
 that any single benchmark suite isn't going to tell you how your particular
 web-app will perform.
 

 Bill,

 Actually, I agree, but this isn't what that is :-) 

 There's a difference between:

 a) Benchmarks for users that compare the performance of
servers against one another to help with buying
decisions. These are hard to write, hard to interpret,
and generally not very useful in predicting the 
performance of a particular user's web apps.

 b) Benchmarks for developers that help answer the
(surprisingly difficult) questions "did my so-called
performance enhancement help or hurt?" and "have we
met our performance goals?" They also help set
up a framework for talking about performance. For
example, using the Apache ab utility is a quick and
easy way to get a very skewed performance estimate.
If there's something written down, everyone has 
some clue that there's a skew, and some idea about
what the skew is.

 (b) is much easier than (a), because the point
is to nail down the vocabulary and set up a common framework
for measuring progress, not to develop some ultimate
web-app performance benchmark for users. It's a purely
internal thing. 

 This isn't something I just made up :-) It's sort of
software engineering conventional wisdom that internal
benchmarks are necessary if you're going to make performance
a development goal, and I've used them to good effect on
several commercial projects.

-- 
Christopher St. John [EMAIL PROTECTED]
DistribuTopia http://www.distributopia.com





Re: Proposal draft for Tomcat 5.0 : benchmarks

2002-06-22 Thread Bill Barker


- Original Message -
From: Christopher K. St. John [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, June 22, 2002 6:36 AM
Subject: Re: Proposal draft for Tomcat 5.0 : benchmarks


 Bill Barker wrote:
 
  I agree with Remy that any single benchmark suite isn't going to
  tell you how your particular web-app will perform.
 

  Bill,

  Actually, I agree, but this isn't what that is :-)

  There's a difference between:

  a) Benchmarks for users that compare the performance of
 servers against one another to help with buying
 decisions. These are hard to write, hard to interpret,
 and generally not very useful in predicting the
 performance of a particular user's web apps.

  b) Benchmarks for developers that help answer the
 (surprisingly difficult) questions "did my so-called
 performance enhancement help or hurt?" and "have we
 met our performance goals?" They also help set
 up a framework for talking about performance. For
 example, using the Apache ab utility is a quick and
 easy way to get a very skewed performance estimate.
 If there's something written down, everyone has
 some clue that there's a skew, and some idea about
 what the skew is.

  (b) is much easier than (a), because the point
 is to nail down the vocabulary and set up a common framework
 for measuring progress, not to develop some ultimate
 web-app performance benchmark for users. It's a purely
 internal thing.

  This isn't something I just made up :-) It's sort of
 software engineering conventional wisdom that internal
 benchmarks are necessary if you're going to make performance
 a development goal, and I've used them to good effect on
 several commercial projects.


I stand by my +0 (:= It's a good idea, but I don't expect to personally
devote much of my time to it).

If you decided to make Vincent very happy by basing the suite on Cactus, you
might even be able to draw some of the Cactus people into the project.

 --
 Christopher St. John [EMAIL PROTECTED]
 DistribuTopia http://www.distributopia.com








Re: Proposal draft for Tomcat 5.0 : benchmarks

2002-06-22 Thread Pier Fumagalli

Bill Barker [EMAIL PROTECTED] wrote:

 I agree with Remy that any single benchmark suite isn't going to tell you how
 your particular web-app will perform.

For sure in a web-app there are way too many variables to consider... The
only thing, though, that we have to measure is not that difficult...

It's called overhead...

For instance, what is the overhead for a zero-length response to be
completed? What is the overhead of sending N kbytes of data to the client?
What is the overhead of the RequestDispatcher?

Those are the points where, in terms of performance, we should concentrate
our efforts, all the rest is pure speculation.
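Overhead of that kind can be measured with a very small harness even outside a servlet container. A rough sketch (the server, paths, and payload sizes here are illustrative assumptions, not Tomcat code) that times a zero-length response against an N-kbyte one:

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Bodies for the two measurement points: a zero-length response and
# an N-kbyte response (50k is an arbitrary illustrative size).
BODIES = {"/zero": b"", "/50k": b"x" * 50_000}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = BODIES.get(self.path, b"")
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the measurement output clean
        pass

def measure(url, n=200):
    """Mean wall-clock time per request, in milliseconds."""
    start = time.perf_counter()
    for _ in range(n):
        urllib.request.urlopen(url).read()
    return (time.perf_counter() - start) / n * 1000.0

def run():
    # Port 0 asks the OS for any free port.
    server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
    port = server.server_address[1]
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        zero = measure(f"http://127.0.0.1:{port}/zero")
        big = measure(f"http://127.0.0.1:{port}/50k")
    finally:
        server.shutdown()
    return zero, big

if __name__ == "__main__":
    zero, big = run()
    print(f"zero-length overhead: {zero:.3f} ms/req")
    print(f"50k response:         {big:.3f} ms/req")
```

The zero-length figure approximates per-request overhead; the difference between the two figures approximates the cost of moving the payload. Pointing the same `measure()` loop at a container's empty servlet and large-page servlet would give the overhead numbers discussed above.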

Pier

--
[Perl] combines all the worst aspects of C and Lisp:  a billion of different
sublanguages in  one monolithic executable.  It combines the power of C with
the readability of PostScript. [Jamie Zawinski - DNA Lounge - San Francisco]

