On Wed, Jan 16, 2013 at 9:34 AM, Kevin Priebe <ke...@realtyserver.com> wrote:

> Hi,
>
>
>
> We have a setup with Nginx load balancing between 2 clustered tomcat
> instances.  One instance is on the same server as Nginx and the other is on
> a separate physical server (same rackspace).  We’re using pretty standard
> default settings and the NIO tomcat connector.  Tomcat version is 7.0.32
> running on Debian.
>
>
>
> The problem is with the second tomcat instance, which at random times will
> start showing SEVERE errors in the tomcat logs; the errors get worse and
> worse until the instance is unusable and has to be restarted.  At first we
> thought it was related to high load, but it once happened early in the
> morning when load was fairly low.  It does seem to happen more often at
> high-load times though, about once a day, sometimes twice.  AWSTATS says we
> get just over a million hits per day to the secondary tomcat instance.
> Here are the errors:
>
>
>
> Jan 15, 2013 11:22:21 AM org.apache.coyote.http11.AbstractHttp11Processor process
> SEVERE: Error processing request
> java.lang.NullPointerException
>
> Jan 15, 2013 11:22:21 AM org.apache.coyote.http11.AbstractHttp11Processor endRequest
> SEVERE: Error finishing response
> java.lang.NullPointerException
>         at org.apache.coyote.http11.InternalNioOutputBuffer.flushBuffer(InternalNioOutputBuffer.java:233)
>         at org.apache.coyote.http11.InternalNioOutputBuffer.endRequest(InternalNioOutputBuffer.java:121)
>         at org.apache.coyote.http11.AbstractHttp11Processor.endRequest(AbstractHttp11Processor.java:1653)
>         at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1046)
>         at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:585)
>         at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1653)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>         at java.lang.Thread.run(Thread.java:722)
>
>
>
>
>
> Nothing else helpful shows up in the logs before it starts happening.  This
> ONLY happens on the tomcat instance on a separate machine from Nginx.  Any
> ideas what might be happening and how it can be resolved?  We’re not even
> sure whether this is related to tomcat itself or to something in the
> communication before the request reaches tomcat, but we’re looking at all
> options right now.  Thanks,
>
>
>
> Kevin
>

Hi Kevin,

I'm not an nginx or tomcat expert, but it looks like tomcat gets interrupted
while sending the response back, i.e. the connection gets closed while it's
still flushing the output buffer.
Have you done any tuning of the http connections and tcp timeouts in nginx,
and maybe set a timeout too low? Have you checked for possible network
latency (I know you said they are in the same rackspace, but it doesn't hurt
to ask), switch problems, etc.? What else is between nginx and tomcat 2? Can
you see in the nginx logs how much time the requests to instance 1 and
instance 2 take? Also, by comparing timestamps you should be able to find the
failed request in the nginx logs (there must be an error on the nginx side
too) and see whether it happens on small or big responses (check the data
size in the log line), etc.
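
Something along these lines is what I mean (just a sketch; the directives are
standard nginx, but the upstream name, addresses and values are made up for
illustration):

    # hypothetical nginx snippet (http context) -- only the directive names
    # are real nginx, everything else is a placeholder
    upstream tomcat_cluster {
        server 127.0.0.1:8080;     # tomcat 1, same box as nginx
        server 10.0.0.2:8080;      # tomcat 2, the separate machine
        keepalive 32;              # reuse connections to the backends
    }

    # log upstream timing and response size so slow/failed requests
    # to instance 2 stand out
    log_format upstream_timing '$remote_addr "$request" $status '
                               '$body_bytes_sent upstream=$upstream_addr '
                               'rt=$request_time urt=$upstream_response_time';
    access_log /var/log/nginx/access.log upstream_timing;

    server {
        location / {
            proxy_pass http://tomcat_cluster;

            # needed for the keepalive pool above to actually be used
            proxy_http_version 1.1;
            proxy_set_header Connection "";

            # if any of these are very low, nginx will drop the upstream
            # connection while tomcat is still writing the response
            proxy_connect_timeout 60s;
            proxy_send_timeout    60s;
            proxy_read_timeout    60s;
        }
    }

If the failing requests in that log show an upstream response time close to
one of your timeouts, or an unusually large byte count, that would support
the "connection closed mid-flush" theory.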

So my point is: start troubleshooting on the nginx side until you get a
response from some of the more experienced tomcat users/developers here :)
And get ready to post your NIO connector and related nginx settings too, I
would say :)
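
For reference, by the NIO connector settings I mean the <Connector> element
in Tomcat's conf/server.xml; a minimal sketch, with the attribute values as
examples only (the attributes themselves are standard Tomcat 7):

    <!-- hypothetical connector definition; values are examples only -->
    <Connector port="8080"
               protocol="org.apache.coyote.http11.Http11NioProtocol"
               connectionTimeout="20000"
               keepAliveTimeout="60000"
               maxKeepAliveRequests="100"
               maxThreads="200" />

Comparing connectionTimeout/keepAliveTimeout here against the proxy timeouts
on the nginx side is a quick way to see whether one side is giving up on the
connection before the other.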

Igor
