I do agree, and whilst our dev environments are pretty close to the live 
ones, I don't have a particularly useful set of test cases (well, I do, 
but they don't cause the problem to occur on our dev systems!).  Thus the 
only meaningful profiling would have to be done on the live system, which 
is clearly less than desirable.
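
The most I can realistically do on the live box is probably to turn on 
the JVM's GC logging, which is low-overhead and should at least show 
whether the sudden growth is in the young or tenured generation.  A 
minimal sketch of the startup options (the -XX flags vary a little 
between 1.4.x releases, so treat them as something to verify against the 
exact JDK):

  rem catalina.bat - add GC logging to the existing JVM options
  set JAVA_OPTS=-Xms512m -Xmx512m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps

The GC output should go to Tomcat's stdout/console log.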

I also agree that 30 r/s isn't a huge load; however, when the site is 
running at about 12 r/s, as it is at the moment, this doesn't happen.  It 
is thus a problem associated with the higher loads.  Also notable is a 
very large number of the following three types of error, which occur in 
Tomcat and which I believe are related to the JK2 connector (I have also 
tried the JK connector, with similar results):

01-Aug-2003 17:48:01 org.apache.jk.common.ChannelSocket processConnection
INFO: Server has been restarted or reset this connection
01-Aug-2003 17:48:21 org.apache.jk.server.JkCoyoteHandler action
INFO: RESET 
01-Aug-2003 17:59:12 org.apache.jk.server.JkCoyoteHandler action
SEVERE: Error in action code 
java.net.SocketException: Connection reset by peer: socket write error
        at java.net.SocketOutputStream.socketWrite0(Native Method)
        at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
        at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
        at org.apache.jk.common.ChannelSocket.send(ChannelSocket.java:435)
        at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:627)
        at org.apache.jk.server.JkCoyoteHandler.action(JkCoyoteHandler.java:372)
        at org.apache.coyote.Response.action(Response.java:222)
        at org.apache.coyote.Response.finish(Response.java:343)
        at org.apache.coyote.tomcat4.OutputBuffer.close(OutputBuffer.java:326)
        at org.apache.coyote.tomcat4.CoyoteResponse.finishResponse(CoyoteResponse.java:500)
        at org.apache.coyote.tomcat4.CoyoteAdapter.service(CoyoteAdapter.java:224)
        at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:261)
        at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:360)
        at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:632)
        at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:590)
        at org.apache.jk.common.SocketConnection.runIt(ChannelSocket.java:707)
        at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:530)
        at java.lang.Thread.run(Thread.java:536)

Perhaps it is these that cause the problem?  I can't believe there is 
some intrinsic problem with Tomcat, as lots of people clearly use it 
successfully, though not, perhaps, when mated to IIS.  I am currently 
considering switching to Jetty for somewhat better performance, which in 
this app is crucial.  Has anyone done this successfully?
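
Before I do switch, though, one thing I plan to check is whether the JK2 
side and the Tomcat side actually agree on connection limits, since 
"connection reset by peer" can simply mean one end gave up on a pooled 
connection the other end was still using.  A rough sketch of the Tomcat 
side of that (the port matches the errors above, but the processor and 
accept counts here are only illustrative, not our real settings):

  <!-- server.xml: AJP connector that the ISAPI plugin talks to -->
  <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
             port="8029" minProcessors="5" maxProcessors="150"
             acceptCount="100" enableLookups="false"
             protocolHandlerClassName="org.apache.jk.server.JkCoyoteHandler" />

If IIS can throw more simultaneous requests at the connector than 
maxProcessors allows, the excess connections presumably get refused or 
dropped under burst, which would look a lot like the errors above.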

cheers
Pete





"Shapira, Yoav" <[EMAIL PROTECTED]>
01/08/2003 18:45
Please respond to "Tomcat Users List"
 
        To:     "Tomcat Users List" <[EMAIL PROTECTED]>
        cc: 
        Subject:        RE: Problems with Tomcat in a high load environment



Howdy,
Even if you're loath to do it, a profiler is invaluable as long as the
profiled environment (your dev/test env) is close enough to production
to be meaningful.  You should profile, and run stress tests, on hardware
and software that's as similar as possible to production, before going
live.
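
If you don't already have a load generator, even something as simple as 
ApacheBench can approximate your 30 req/s pattern against a test box, 
e.g. (URL and counts are just placeholders):

  ab -n 10000 -c 30 http://devserver/yourapp/somepage.jsp

That won't exercise session or think-time behavior the way real users 
do, but it's usually enough to reproduce connector and GC trouble.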

FWIW, 30 requests/sec is not an extremely high load: most of our apps
routinely handle several times that load on standalone Tomcat 4.1.24
(and starting at 12:01 tonight, 4.1.27 ;))

Have you tried running without the IIS front-end?  It could be worth a
shot.  Tomcat can handle the static files as well.
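
For reference, that's just a matter of enabling the standalone Coyote 
HTTP/1.1 connector in server.xml; a minimal sketch, with illustrative 
values close to the Tomcat 4.1 defaults:

  <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
             port="8080" minProcessors="5" maxProcessors="75"
             enableLookups="false" acceptCount="100"
             connectionTimeout="20000" />

Point your load test at that port directly and see whether the resets 
and the memory spike still occur; if they don't, the problem is on the 
IIS/ISAPI side rather than in your webapp.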

Yoav Shapira
Millennium ChemInformatics


>-----Original Message-----
>From: [EMAIL PROTECTED]
>[mailto:[EMAIL PROTECTED]
>Sent: Friday, August 01, 2003 1:29 PM
>To: [EMAIL PROTECTED]
>Subject: Problems with Tomcat in a high load environment
>
>Hi,
>we are running Tomcat 4.1.18 on Windows 2000 under JDK 1.4.1_01.  We are
>using IIS as a web server, connecting using the ISAPI JK2 connector.  We
>are currently experiencing 2 separate problems when under high load (i.e.
>30 requests per second):
>
>1.  A huge number of log events (~30 sets per minute) are generated in the
>NT event log which show problems with the ISAPI connector.  Each time
>there is a problem, 7 lines are created:
>
>Error: [jk_worker_ajp13.c (512)]: ajp13.service() Error forwarding ajp13:localhost:8029 1 0
>Error: [jk_worker_ajp13.c (416)]: ajp13.service() ajpGetReply recoverable error 3
>Error: [jk_handler_response.c (178)]: handler.response() Error sending response
>Error: [jk_service_iis.c (157)]: jk_ws_service_t::head, ServerSupportFunction failed
>Error: [jk_worker_ajp13.c (416)]: ajp13.service() ajpGetReply recoverable error 3
>Error: [jk_handler_response.c (200)]: Error ajp_process_callback - write failed
>Error: [jk_service_iis.c (247)]: jk_ws_service_t::write, WriteClient failed
>
>These, I think, lead to the eventual crash of IIS; however, I also suspect
>that they may cause user errors, though I haven't actually seen any
>evidence of this.  Does anyone know anything about these errors, or what I
>can do to reduce them?  Could it be a Tomcat tuning issue?
>
>2.  The server is currently set up with -Xms512m and the same for -Xmx.
>This, I would have thought, was OK for this application, though of course
>I could be wrong, but it is in any case irrelevant to the problem, as
>increasing memory to 1Gb makes no odds.
>The server will run fine for a few minutes at about 128m of memory usage.
>At some slightly random point (with no obvious trigger), it will, in a
>matter of seconds, use up all available memory, triggering a huge rise in
>processor usage, which then sits at roughly 50-60% (across 2 processors)
>constantly.  It is worth noting that 1 processor is not maxed out - the
>load is relatively evenly distributed.
>After a further while, the young generation space runs out of memory and a
>process ensues of the processor load "bouncing" up to 100% and back to 60%
>over and over again as it GCs, reduces the memory used to ~15m less than
>it was, then it is used up again, and so on.  This obviously seriously
>impacts the usability of the application.  I can't see why it is doing
>this; it is an ecommerce application and the loads are not that high -
>clearly a leak of some description is occurring somewhere, but the speed
>with which these changes happen baffles me, and I don't think there is
>much setup work I can do to change it.  I could install a profiler to
>find out what's going on, but it is a live system and I am loath to do
>so.
>Finally, it is worth noting that I have only just made this system live,
>and a functionally identical although architecturally simpler version ran
>quite happily under the same loads, using no more than 256m of RAM, under
>JRun 3.1.
>
>Any ideas?!
>
>cheers
>Pete
>
>Kiss Technologies
>
>http://www.kisstechnologies.co.uk/
>
>4, Percy Street
>London
>W1T 1DF
>
>Phone numbers:
>
>Phone 020 7692 9922
>Fax 020 7692 9923




