Re: Please help diagnosing a random production Tomcat 7.0.53 Internal Server Error!

2014-04-15 Thread Christopher Schultz

Ian,

On 4/15/14, 2:52 PM, Ian Long wrote:
 I need some help from all the tomcat experts out there! I am using
  tomcat behind apache httpd using mod_jk (1.2.39). About 50-100
 times per day (out of many requests), I’m getting an internal
 server error from Tomcat (error 500), without any exceptions in my
 code, nor in Tomcat logs. I am only seeing the error in my New
 Relic application monitoring tool, and I can see them in the mod_jk
 logs if I turn on debug.

As much fun as reading debug logs is, I wasn't able to find a problem
in what you posted. Can you maybe highlight the section that indicates
a problem?

You also didn't post the exception from the Java side.

 My server is not heavily loaded, with a load average hovering
 around 0.5 on a 4 cpu system.

How many httpd processes are serving this Tomcat? Do you have a
mismatch between the number of connections coming from httpd and the
number of connections available on the Tomcat side (Connector)?

 You can see the internal error below at 13:59:13.790.

Yes, we can see that there was an error, but not what the error was.

 My worker setup is very simple:
 
 worker.list=worker1
 worker.worker1.port=8009
 worker.worker1.host=127.0.0.1
 worker.worker1.type=ajp13
 worker.worker1.connection_pool_timeout=600
 worker.worker1.connect_timeout=1
 
 My Connector is also straightforward:
 
 <Connector port="8009" connectionTimeout="60"
            minSpareThreads="5" address="127.0.0.1" URIEncoding="UTF-8"
            enableLookups="false" disableUploadTimeout="true"
            maxSpareThreads="75" maxThreads="800" protocol="AJP/1.3" />

That all looks okay to me on the face of it. Just a note: you may want
to use an Executor for better control of the thread pool.
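
For concreteness, here is a minimal sketch of what that could look like in
server.xml (the executor name and sizing values are placeholders, not
recommendations):

  <!-- Shared thread pool; name and sizes are illustrative only -->
  <Executor name="ajpThreadPool" namePrefix="ajp-exec-"
            maxThreads="800" minSpareThreads="25" />

  <!-- Point the AJP connector at the executor; the connector's own
       maxThreads is then ignored in favour of the executor's settings -->
  <Connector port="8009" protocol="AJP/1.3" address="127.0.0.1"
             executor="ajpThreadPool" URIEncoding="UTF-8" />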

What connector are you actually using?

Is 800 threads enough to handle whatever might be coming from httpd
(or all of your httpd instances)?

-chris

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Please help diagnosing a random production Tomcat 7.0.53 Internal Server Error!

2014-04-15 Thread Ian Long
Thanks for the reply.

It looks to me like Tomcat just gave up partway through generating the response;
I’m trying to figure out why.

There are no exceptions in either my application logs or the tomcat log itself, 
which is frustrating.

Thanks, I’ll look into the executor.

Apache matches what is set in my connector:

<IfModule prefork.c>
StartServers       8
MinSpareServers    5
MaxSpareServers   20
ServerLimit      800
MaxClients       800
MaxRequestsPerChild  0
</IfModule>

Yes, the connector settings should be fine; there are usually fewer than 20
httpds.

Cheers,
Ian



Re: Please help diagnosing a random production Tomcat 7.0.53 Internal Server Error!

2014-04-15 Thread Ian Long
Forgot to mention that it looks like Tomcat returned around 50% of what the
page should have been before it hit the Internal Server Error.

Cheers,
Ian



Re: Please help diagnosing a random production Tomcat 7.0.53 Internal Server Error!

2014-04-15 Thread Konstantin Kolinko
2014-04-15 22:52 GMT+04:00 Ian Long ian.l...@opterus.com:
 Hi All,

 I need some help from all the tomcat experts out there!  I am using tomcat 
 behind apache httpd using mod_jk (1.2.39).  About 50-100 times per day (out 
 of many requests), I’m getting an internal server error from Tomcat (error 
 500), without any exceptions in my code, nor in Tomcat logs.  I am only 
 seeing the error in my New Relic application monitoring tool, and I can see 
 them in the mod_jk logs if I turn on debug.

Can you update to 1.2.40, released today? It fixes several issues.

Is the 500 error recorded in the access log on the Tomcat side?

If an error happens at an early stage of processing (in the Connector or
in the CoyoteAdapter), there may be nothing in the catalina, localhost, or
web application logs unless you turn on debug logging on the Tomcat side.
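
For example (a sketch only; this assumes the stock JULI logging setup, so if
you route Tomcat's logging through log4j the equivalent settings belong in
the log4j configuration instead, and the names and levels are illustrative):

  # conf/logging.properties -- raise detail on the connector code paths
  org.apache.coyote.level = FINE
  org.apache.catalina.connector.level = FINE
  # a handler must also allow FINE records through
  java.util.logging.ConsoleHandler.level = FINE

  <!-- server.xml, inside <Host>: an access log so the 500s become visible -->
  <Valve className="org.apache.catalina.valves.AccessLogValve"
         directory="logs" prefix="localhost_access_log." suffix=".txt"
         pattern="%h %l %u %t &quot;%r&quot; %s %b" />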

Best regards,
Konstantin Kolinko

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Please help diagnosing a random production Tomcat 7.0.53 Internal Server Error!

2014-04-15 Thread Christopher Schultz

Ian,

On 4/15/14, 3:33 PM, Ian Long wrote:
 Thanks for the reply.
 
 It looks to me like tomcat just gave up partway through generating
 the request, I’m trying to figure out why.
 
 There are no exceptions in either my application logs or the tomcat
 log itself, which is frustrating.

Definitely. You checked catalina.out (or wherever stdout goes) as well
as your application's logs?

 Thanks, I’ll look into the executor.
 
 Apache matches what is set in my connector:
 
 <IfModule prefork.c>
 StartServers       8
 MinSpareServers    5
 MaxSpareServers   20
 ServerLimit      800
 MaxClients       800
 MaxRequestsPerChild  0
 </IfModule>
 
 Yes, the connector settings should be fine, there are usually less
 than 20 httpds.

You mean 20 httpd prefork processes, right? That should be fine: it
means you will need 20 connections available in Tomcat.
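
As a rough sanity check (assuming a single ajp13 worker and prefork's
default of one AJP connection per httpd child, which wasn't shown
explicitly):

  ~20 httpd children x 1 AJP connection each  ->  ~20 Tomcat threads in use
  worst case: MaxClients/ServerLimit = 800    ->  800 connections, which is
              why maxThreads="800" lines up with the httpd limits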

 Forgot to mention that it looks like tomcat returned around 50% of
 what the page should have been, before it hit the Internal Server
 Error.

Have you run out of memory or anything like that?
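
If memory ever does turn out to be the culprit, one cheap way to catch it in
the act is to have the JVM dump the heap on OutOfMemoryError. A sketch,
assuming JVM options are set in bin/setenv.sh, with a placeholder dump path:

  # bin/setenv.sh -- write a heap dump whenever an OOME is thrown
  CATALINA_OPTS="$CATALINA_OPTS \
    -XX:+HeapDumpOnOutOfMemoryError \
    -XX:HeapDumpPath=/var/log/tomcat/heapdumps"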

-chris

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Please help diagnosing a random production Tomcat 7.0.53 Internal Server Error!

2014-04-15 Thread André Warnier

Christopher Schultz wrote:


Have you run out of memory or anything like that?


I was going to ask the same thing, slightly differently.

I can think of a scenario which might result in the same kind of symptoms, only I am not 
sure if it makes sense, Java-wise.


A request is received by httpd, which passes it to Tomcat via mod_jk.
Tomcat allocates a thread to handle the request, and this thread starts running the
corresponding application (webapp). The webapp starts processing the request, produces
some output, and then, for some reason still to be determined, it suddenly runs out of
memory and the thread running the application dies.
Because Tomcat has temporarily run out of memory, there is no way for the application to
write anything to the logs: doing so would require allocating additional memory, and
there isn't any available.
So Tomcat just notices (a posteriori) that the thread died, and returns an error 500 to
mod_jk and httpd.
As soon as the offending thread dies, some memory is freed and Tomcat appears to work
normally again, including for other requests to that same application, because those
other requests do not cause the same spike in memory usage.


Tomcat/Java experts: could something like this happen, and would it match the symptoms
as described by Ian?


And Ian, could it be that some requests to that application, perhaps because of a
parameter that is different from the other cases, cause such a spike in memory
requirements?



-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Please help diagnosing a random production Tomcat 7.0.53 Internal Server Error!

2014-04-15 Thread Ian Long
I don’t think it’s memory-related: Tomcat is allocated an 8 GB heap and, according to
New Relic, it has never used more than 6.5 GB; there is also plenty of PermGen space
available.

Cheers,
Ian



Re: Please help diagnosing a random production Tomcat 7.0.53 Internal Server Error!

2014-04-15 Thread Ian Long
Yes, I checked both the Tomcat log (I’ve configured Tomcat to use log4j) and
my application logs.

Yes, 20 httpd prefork processes.

I don’t think it’s memory-related: I have an 8 GB heap, and Tomcat averages 5 GB of
usage and peaks around 6.5 GB before garbage collection kicks in.

Cheers,
Ian



Re: Please help diagnosing a random production Tomcat 7.0.53 Internal Server Error!

2014-04-15 Thread André Warnier

Ian,

On this list, top-posting is somewhat frowned upon. It is preferred that people answer
below the question: it keeps the reading sequence logical and avoids having to scroll
down to guess what you are responding to.


Ian Long wrote:

Yes, I checked both the tomcat log (I’ve configured tomcat to use log4j) as 
well as my application logs.

Yes, 20 httpd prefork processes.

I don’t think it’s memory related, I have an 8GB heap and tomcat averages 5GB 
usage and peeks around 6.5 before garbage collection kicks in.



Of course, we do not yet know what the cause of your problem is.
But we do know that Tomcat would normally write something to its logs when a server
error 500 happens.

So,
- either Tomcat and/or your application wrote something to a logfile, and you have not
yet found that logfile;

- or else Tomcat and/or your application crashed but did not write anything to
the logs.
In that last case, one of the most likely causes for such behaviour is running out of
memory.

Whether you believe that this is possible or not is your opinion.
But it is in the nature of software bugs to be unexpected.
If they were expected, they would have been corrected already.


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Please help diagnosing a random production Tomcat 7.0.53 Internal Server Error!

2014-04-15 Thread Ian Long

OK, thanks, I didn’t know about the top-posting issue.

I have Tomcat configured to log via log4j, and then there is my application
log; those are the only two logs, and neither contains anything.

It’s not about believing: I have monitoring software that gives me precise
information about memory use, and there is no indication of a problem there.

Thanks,
Ian

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Please help diagnosing a random production Tomcat 7.0.53 Internal Server Error!

2014-04-15 Thread André Warnier

Ian Long wrote:


It’s not about believing: I have monitoring software that gives me precise
information about memory use, and there is no indication of a problem there.



Would that monitoring software detect a very short, occasional spike in memory usage,
just before the thread running that application is blown out of the water and memory
usage returns to normal?
Or is it something that updates its data on a 5-second interval and just always misses
the significant event?
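
One way to see even very short spikes, regardless of the monitoring sample
rate, would be to let the JVM log every garbage collection. A sketch,
assuming options go in bin/setenv.sh and using a placeholder log path:

  # bin/setenv.sh -- record every GC with timestamps and heap sizes
  CATALINA_OPTS="$CATALINA_OPTS \
    -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
    -Xloggc:/var/log/tomcat/gc.log"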


Honestly, I am just fishing and trying to find a clue (or rather, trying to help you find 
a clue). But some problems are just like that. You can only carefully eliminate the 
possible causes one after the other until you're left with one that you cannot eliminate.


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Please help diagnosing a random production Tomcat 7.0.53 Internal Server Error!

2014-04-15 Thread Tim Watts
On Tue, 2014-04-15 at 17:12 -0400, Ian Long wrote:
   
 I have tomcat configured to log via log4j, and then there is my
 application log, those are the only two logs, and neither contains
 anything.

They're empty?  Are you sure the logs are writable?  How much free space
is available on the file system where the logs reside?
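
For example (the path and the service account name below are placeholders
for wherever your logs actually live and whichever user runs Tomcat):

  df -h /var/log/tomcat        # free space on that filesystem
  ls -ld /var/log/tomcat       # ownership and permissions of the log dir
  sudo -u tomcat touch /var/log/tomcat/write-test && echo writable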



-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Please help diagnosing a random production Tomcat 7.0.53 Internal Server Error!

2014-04-15 Thread Ian Long
 
On April 15, 2014 at 6:50:05 PM, Tim Watts (t...@cliftonfarm.org) wrote:
 They're empty? Are you sure the logs are writable? How much free space
 is available on the file system where the logs reside?
  
  

Sorry, I should have been clearer. No, they are not empty; things are
being logged in both files, just not specifically for this problem.

There are no errors in the logs corresponding to the time I see the error
recorded in New Relic.

There is more than 100GB of free space on the drive.

Cheers,
Ian

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org