Re: Instances of org.apache.coyote.RequestInfo accumulate in RequestGroupInfo.processors and eventually result in OOME

2018-04-22 Thread Ivan Lar

Hello,

Here is the path to GC roots for a single RequestInfo object. However,
it does not tell me anything; at least I don't see my application
holding on to these resources.


Could you see anything relevant here?

Class Name                                                                                           | Shallow Heap | Retained Heap
-------------------------------------------------------------------------------------------------------------------------------------
org.apache.coyote.RequestInfo @ 0xf4a2ce18                                                           |           88 |       105 504
|- [955] java.lang.Object[1234] @ 0xf35b5988                                                         |        4 952 |         4 952
|  '- elementData java.util.ArrayList @ 0xf73df080                                                   |           24 |         4 976
|     '- processors org.apache.coyote.RequestGroupInfo @ 0xf72eaa30                                  |           56 |         5 032
|        '- global org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler @ 0xf73de248   |           32 |           800
|           |- handler org.apache.tomcat.util.net.NioEndpoint @ 0xf72eab48                           |          288 |        60 232
|           |  |- this$0 org.apache.tomcat.util.net.NioEndpoint$Acceptor @ 0xf850f0e0                |           24 |            24
|           |  |  '- , target java.lang.Thread @ 0xf850ef68  http-nio-443-Acceptor-0 Thread          |          120 |           400
|           |  |- this$0 org.apache.tomcat.util.net.NioEndpoint$Poller @ 0xf850f320                  |           48 |           632
|           |  |  |- , target java.lang.Thread @ 0xf850f0f8  http-nio-443-ClientPoller-0 Thread      |          120 |         4 752
|           |  |  |- poller org.apache.tomcat.util.net.NioEndpoint$KeyAttachment @ 0xf6506640        |          112 |        57 400
|           |  |  |  '- attachment sun.nio.ch.SelectionKeyImpl @ 0xf6506788                          |           40 |        57 976
|           |  |  |     |- value java.util.HashMap$Node @ 0xf65067b0                                 |           32 |            32
|           |  |  |     |  '- [101] java.util.HashMap$Node[256] @ 0xf56a40e0                         |        1 040 |         1 136
|           |  |  |     |     '- table java.util.HashMap @ 0xf8519708                                |           48 |         1 184
|           |  |  |     |        '- fdToKey sun.nio.ch.EPollSelectorImpl @ 0xf850f2b8                |           72 |         1 512
|           |  |  |     |           |- java.lang.Thread @ 0xf850f0f8  http-nio-443-ClientPoller-0 Thread |      120 |         4 752
|           |  |  |     |           |- this$0 java.nio.channels.spi.AbstractSelector$1 @ 0xf8518540  |           16 |            16
|           |  |  |     |           |  '- blocker java.lang.Thread @ 0xf850f0f8  http-nio-443-ClientPoller-0 Thread | 120 |  4 752
|           |  |  |     |           '- Total: 2 entries                                              |              |
|           |  |  |     |- key java.util.HashMap$Node @ 0xf65067d0                                   |           32 |            32
|           |  |  |     |  '- [105] java.util.HashMap$Node[256] @ 0xf56a44f0                         |        1 040 |         1 136
|           |  |  |     |     '- table java.util.HashMap @ 0xf8526d88                                |           48 |         1 200
|           |  |  |     |        '- map java.util.HashSet @ 0xf85196f8                               |           16 |         1 216
|           |  |  |     |           '- c java.util.Collections$UnmodifiableSet @ 0xf850f310          |           16 |            16
|           |  |  |     |              '- java.lang.Thread @ 0xf850f0f8  http-nio-443-ClientPoller-0 Thread |    120 |         4 752
|           |  |  |     '- Total: 2 entries                                                          |              |
|           |  |  '- Total: 2 entries                                                                |              |
|           |  '- Total: 2 entries                                                                   |              |
|           |- cHandler org.apache.coyote.http11.Http11NioProtocol @ 0xf72ea530                      |          128 |           200
|           '- Total: 2 entries                                                                      |              |
|- resource org.apache.tomcat.util.modeler.BaseModelMBean @ 0xf4a2fba0                               |           40 |            40
'- Total: 2 entries                                                                                  |              |
-------------------------------------------------------------------------------------------------------------------------------------


On 15.03.2018 11:53, Suvendu Sekhar Mondal wrote:

On Wed, Mar 14, 2018 at 2:19 AM, Industrious  wrote:

Hello, Mark,

Thanks for your attention.

Could you take a look at the class histogram from today's OOME heap dump?
Maybe it could provide some details.
I see a spike in CPU usage at the approximate time the dump was
generated but that might be caused by the garbage collector's futile
attempt to free up memory.

That's correct. Aggressive GCs cause that.


Re: Instances of org.apache.coyote.RequestInfo accumulate in RequestGroupInfo.processors and eventually result in OOME

2018-03-15 Thread Suvendu Sekhar Mondal
On Wed, Mar 14, 2018 at 2:19 AM, Industrious  wrote:
> Hello, Mark,
>
> Thanks for your attention.
>
> Could you take a look at the class histogram from today's OOME heap dump?
> Maybe it could provide some details.
> I see a spike in CPU usage at the approximate time the dump was
> generated but that might be caused by the garbage collector's futile
> attempt to free up memory.

That's correct. Aggressive GCs cause that.
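
One way to confirm that such a CPU spike really is garbage collection rather
than application work is to poll the collector counters from inside the JVM.
A minimal sketch (the class name and the 10-second sampling interval are
arbitrary, not taken from this thread):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Minimal sketch: periodically prints cumulative GC counts and times so a CPU
// spike can be correlated with collector activity. Run it inside the JVM under
// suspicion (for example from a scheduled task in the webapp).
public class GcActivityLogger {
    public static void main(String[] args) throws InterruptedException {
        while (true) {
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("%s: collections=%d, time=%dms%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
            Thread.sleep(10_000); // sample every 10 seconds
        }
    }
}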

> Class Name                    | Objects | Shallow Heap |  Retained Heap
> ------------------------------------------------------------------------
> org.apache.coyote.RequestInfo |   1 118 |       98 384 | >= 169 690 688
> org.apache.coyote.Request     |   1 118 |      187 824 | >= 169 564 168
> byte[]                        |  70 337 |  120 141 712 | >= 120 141 712


From your problem description it seems that you have a slow leak which
crashes your Tomcat after a few days. Are those RequestInfo objects all
roughly the same size, or are some of them large while the rest are small?
The same question goes for the byte arrays. From the output format it looks
like you are using MAT. I suggest checking "path to GC roots" for those
objects first; that will tell you who is keeping them alive. Also, please
check the thread(s) associated with those byte arrays and RequestInfo
objects; that will tell you whether any application thread is involved.
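
A practical way to act on that advice between crashes is to take comparable
heap dumps on demand and watch whether the RequestInfo population keeps
growing. A minimal sketch using the HotSpot diagnostic MXBean follows; the
dump path is an assumption, adjust it to your environment:

import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

// Minimal sketch: writes an on-demand heap dump of live objects so that
// successive dumps can be compared in MAT (for example, to see whether the
// RequestInfo count keeps climbing). The dump path below is an assumption;
// point it somewhere with enough free disk space.
public class HeapDumper {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean diagnostic = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        String file = "/var/lib/tomcat8/logs/manual-" + System.currentTimeMillis() + ".hprof";
        diagnostic.dumpHeap(file, true); // true = dump only live (reachable) objects
        System.out.println("Heap dump written to " + file);
    }
}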


Re: Instances of org.apache.coyote.RequestInfo accumulate in RequestGroupInfo.processors and eventually result in OOME

2018-03-13 Thread Industrious
Hello, Mark,

Thanks for your attention.

Could you take a look at the class histogram from today's OOME heap dump?
Maybe it could provide some details.
I see a spike in CPU usage at the approximate time the dump was
generated but that might be caused by the garbage collector's futile
attempt to free up memory.

Class Name                                               |   Objects | Shallow Heap | Retained Heap
----------------------------------------------------------------------------------------------------
org.apache.coyote.RequestInfo                            |     1 118 |       98 384 | >= 169 690 688
org.apache.coyote.Request                                |     1 118 |      187 824 | >= 169 564 168
byte[]                                                   |    70 337 |  120 141 712 | >= 120 141 712
org.apache.tomcat.util.net.SecureNioChannel              |       985 |       47 280 |  >= 59 649 592
char[]                                                   |   128 612 |   55 092 504 |  >= 55 092 504
org.apache.tomcat.util.buf.CharChunk                     |    89 026 |    4 273 248 |  >= 41 134 168
java.nio.HeapByteBuffer                                  |     4 256 |      204 288 |  >= 33 834 864
org.apache.tomcat.util.net.NioEndpoint$NioBufferHandler  |       985 |       23 640 |  >= 33 482 120
org.apache.catalina.connector.Request                    |     1 118 |      187 824 |  >= 33 452 000
org.apache.catalina.connector.Response                   |     1 118 |       71 552 |  >= 28 233 912
org.apache.catalina.connector.OutputBuffer               |     1 118 |       89 440 |  >= 27 898 496
org.apache.catalina.connector.InputBuffer                |     1 118 |       80 496 |  >= 27 270 448
sun.security.ssl.SSLEngineImpl                           |       985 |      133 960 |  >= 25 596 024
sun.security.ssl.EngineInputRecord                       |       985 |       63 040 |  >= 20 648 288
org.apache.tomcat.util.buf.ByteChunk                     |    99 093 |    4 756 464 |  >= 15 422 384
java.lang.String                                         |   108 196 |    2 596 704 |  >= 14 737 456
org.apache.tomcat.util.buf.MessageBytes                  |    84 554 |    4 058 592 |  >= 12 960 440
java.util.HashMap$Node[]                                 |     9 139 |    1 156 352 |  >= 12 864 216
java.util.HashMap                                        |    10 997 |      527 856 |  >= 12 817 352
java.util.HashMap$Node                                   |    56 583 |    1 810 656 |  >= 11 484 248
org.apache.catalina.loader.WebappClassLoader             |         2 |          272 |  >= 10 199 128
org.apache.coyote.http11.InternalNioOutputBuffer         |     1 118 |       89 440 |   >= 9 811 568
java.util.concurrent.ConcurrentHashMap                   |     3 823 |      244 672 |   >= 9 646 384
java.util.concurrent.ConcurrentHashMap$Node[]            |     1 295 |      260 664 |   >= 9 404 616
java.lang.Class                                          |     9 901 |       85 176 |   >= 9 233 664
java.util.concurrent.ConcurrentHashMap$Node              |    15 554 |      497 728 |   >= 9 111 176
org.apache.tomcat.util.http.MimeHeaders                  |     2 236 |       53 664 |   >= 7 119 880
org.apache.tomcat.util.http.MimeHeaderField[]            |     2 236 |      141 248 |   >= 7 066 208
org.apache.tomcat.util.http.MimeHeaderField              |    20 201 |      484 824 |   >= 6 924 960
org.apache.catalina.loader.ResourceEntry                 |     5 133 |      246 384 |   >= 5 414 616
java.lang.Object[]                                       |    17 925 |    1 249 176 |   >= 5 046 960
java.io.ByteArrayOutputStream                            |     3 857 |       92 568 |   >= 4 603 096
org.apache.catalina.webresources.JarResourceSet          |        46 |        3 312 |   >= 4 576 680
java.text.SimpleDateFormat                               |     3 413 |      218 432 |   >= 4 236 656
java.text.SimpleDateFormat[]                             |     1 126 |       36 032 |   >= 4 227 008
sun.security.ssl.HandshakeHash                           |       985 |       39 400 |   >= 3 895 560
Total: 36 of 9 892 entries; 9 856 more                   | 1 227 954 |  219 995 336 |
----------------------------------------------------------------------------------------------------

2018-03-05 16:59 GMT+02:00 Mark Thomas :
> That strikes me as odd. The RequestInfo objects are fairly small. They
> are usually part of a much large group of objects but they should appear
> separately in any analysis.
>
> The number of RequestInfo is limited by maxConnections.

Do you think I should try setting maxConnections to a lower limit
instead of the default 10000?

I have configured the executor to maxThreads="20" but that did not help.

Thank you.

>
>> Could you tell me what are these objects for?
>
> Take a look at the Javadoc for the version you are using.
>
>> Should not they be recycled or disposed of?
>
> No. They are re-used across multiple requests allocated to the same
> Processor.

Re: Instances of org.apache.coyote.RequestInfo accumulate in RequestGroupInfo.processors and eventually result in OOME

2018-03-05 Thread Mark Thomas
On 04/03/18 18:14, Industrious wrote:
> Hi Mark,
> 
> Thanks for advice. However, I actually increased it. I was running on
> about 80MB Xmx at first when I was on 512-600MB VMs. I did not have
> problems for a long time. Eventually, I faced the problem and I
> increased it to 220MB and I saw the same picture that the largest part
> of the memory is filled with RequestInfo.

That strikes me as odd. The RequestInfo objects are fairly small. They
are usually part of a much larger group of objects but they should appear
separately in any analysis.

The number of RequestInfo is limited by maxConnections.

> Could you tell me what are these objects for?

Take a look at the Javadoc for the version you are using.

> Should not they be recycled or disposed of?

No. They are re-used across multiple requests allocated to the same
Processor.

Mark
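
Because each RequestInfo is also registered as a RequestProcessor MBean (the
BaseModelMBean reference in the GC-roots path earlier in the thread), the
growth can be watched at runtime over JMX instead of taking heap dumps. A
hedged sketch: it assumes JMX remote access is enabled on port 9010 and that
the default Catalina MBean domain is in use.

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Sketch: counts the RequestProcessor MBeans that Tomcat registers (one per
// RequestInfo held by the connector's RequestGroupInfo). If this number keeps
// climbing towards maxConnections and never falls, connections/processors are
// not being released. The JMX URL and port are assumptions.
public class RequestProcessorCount {
    public static void main(String[] args) throws Exception {
        String url = "service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi";
        try (JMXConnector connector = JMXConnectorFactory.connect(new JMXServiceURL(url))) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            Set<ObjectName> processors =
                    connection.queryNames(new ObjectName("Catalina:type=RequestProcessor,*"), null);
            System.out.println("RequestProcessor MBeans: " + processors.size());
            processors.forEach(System.out::println);
        }
    }
}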



> 
> 2018-03-01 19:56 GMT+02:00 Mark Thomas :
>> On 24/02/18 18:52, Industrious wrote:
>>> Dear All,
>>>
>>> I am running Tomcat 8 on Ubuntu. After a few days of running
>>> successfully my Tomcat's JVM crashes or becomes absolutely unresponsive
>>> because of OOME errors similar to this in catalina.out:
>>
>> 
>>
>>> At first I was using GCE micro instance with 600MB of RAM. Soon I
>>> started having problems when Linux was killing Tomcat (OOM killer) as it
>>> considered its JVM as a candidate for killing or if I specify a lower
>>> maximum heap size I would eventually get OOME from JVM.
>>>
>>> I moved to a GCE small instance which has 1.7GB of RAM but the problem
>>> still occurs eventually though a bit later than on the 600MB VM because
>>> I somewhat increased the maximum heap size.
>>
>> 
>>
>>> $ less /var/lib/tomcat8/bin/setenv.sh
>>> JAVA_OPTS="-Dlog4j.logging.dir=$CATALINA_BASE/logs \
>>> -Dlogging.dir=$CATALINA_BASE/logs \
>>> -Djava.awt.headless=true \
>>> -Xms220M \
>>> -Xmx220M \
>>> -Dsun.java2d.cmm=sun.java2d.cmm.kcms.KcmsServiceProvider \
>>> -Djava.security.egd=file:/dev/./urandom \
>>> -XX:+HeapDumpOnOutOfMemoryError \
>>> -XX:HeapDumpPath=$CATALINA_BASE/logs/java_pid%p.hprof"
>>
>>
>> Looks like you need to increase Xmx and use some of that extra memory
>> you paid for.
>>
>> Mark
>>
>> -
>> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
>> For additional commands, e-mail: users-h...@tomcat.apache.org
>>
> 
> -
> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> For additional commands, e-mail: users-h...@tomcat.apache.org
> 


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Instances of org.apache.coyote.RequestInfo accumulate in RequestGroupInfo.processors and eventually result in OOME

2018-03-04 Thread Industrious
Hi Mark,

Thanks for the advice. However, I have actually already increased it. I was
running with about 80MB Xmx at first, on 512-600MB VMs, and did not have
problems for a long time. Eventually I ran into the problem, increased it to
220MB, and saw the same picture: the largest part of the memory is filled
with RequestInfo.

Could you tell me what these objects are for? Shouldn't they be
recycled or disposed of?

2018-03-01 19:56 GMT+02:00 Mark Thomas :
> On 24/02/18 18:52, Industrious wrote:
>> Dear All,
>>
>> I am running Tomcat 8 on Ubuntu. After a few days of running
>> successfully my Tomcat's JVM crashes or becomes absolutely unresponsive
>> because of OOME errors similar to this in catalina.out:
>
> 
>
>> At first I was using GCE micro instance with 600MB of RAM. Soon I
>> started having problems when Linux was killing Tomcat (OOM killer) as it
>> considered its JVM as a candidate for killing or if I specify a lower
>> maximum heap size I would eventually get OOME from JVM.
>>
>> I moved to a GCE small instance which has 1.7GB of RAM but the problem
>> still occurs eventually though a bit later than on the 600MB VM because
>> I somewhat increased the maximum heap size.
>
> 
>
>> $ less /var/lib/tomcat8/bin/setenv.sh
>> JAVA_OPTS="-Dlog4j.logging.dir=$CATALINA_BASE/logs \
>> -Dlogging.dir=$CATALINA_BASE/logs \
>> -Djava.awt.headless=true \
>> -Xms220M \
>> -Xmx220M \
>> -Dsun.java2d.cmm=sun.java2d.cmm.kcms.KcmsServiceProvider \
>> -Djava.security.egd=file:/dev/./urandom \
>> -XX:+HeapDumpOnOutOfMemoryError \
>> -XX:HeapDumpPath=$CATALINA_BASE/logs/java_pid%p.hprof"
>
>
> Looks like you need to increase Xmx and use some of that extra memory
> you paid for.
>
> Mark
>
> -
> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> For additional commands, e-mail: users-h...@tomcat.apache.org
>

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Instances of org.apache.coyote.RequestInfo accumulate in RequestGroupInfo.processors and eventually result in OOME

2018-03-01 Thread Mark Thomas
On 24/02/18 18:52, Industrious wrote:
> Dear All,
> 
> I am running Tomcat 8 on Ubuntu. After a few days of running
> successfully my Tomcat's JVM crashes or becomes absolutely unresponsive
> because of OOME errors similar to this in catalina.out:



> At first I was using GCE micro instance with 600MB of RAM. Soon I
> started having problems when Linux was killing Tomcat (OOM killer) as it
> considered its JVM as a candidate for killing or if I specify a lower
> maximum heap size I would eventually get OOME from JVM.
> 
> I moved to a GCE small instance which has 1.7GB of RAM but the problem
> still occurs eventually though a bit later than on the 600MB VM because
> I somewhat increased the maximum heap size.



> $ less /var/lib/tomcat8/bin/setenv.sh
> JAVA_OPTS="-Dlog4j.logging.dir=$CATALINA_BASE/logs \
> -Dlogging.dir=$CATALINA_BASE/logs \
> -Djava.awt.headless=true \
> -Xms220M \
> -Xmx220M \
> -Dsun.java2d.cmm=sun.java2d.cmm.kcms.KcmsServiceProvider \
> -Djava.security.egd=file:/dev/./urandom \
> -XX:+HeapDumpOnOutOfMemoryError \
> -XX:HeapDumpPath=$CATALINA_BASE/logs/java_pid%p.hprof"


Looks like you need to increase Xmx and use some of that extra memory
you paid for.

Mark

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Instances of org.apache.coyote.RequestInfo accumulate in RequestGroupInfo.processors and eventually result in OOME

2018-03-01 Thread Greg Huber
I found Tomcat was being killed randomly by the OOM killer when it should
have been the virus scanner; I had to reduce the scanner's threads as it was
consuming too much memory.

/etc/clamd.d/scan.conf

#MaxThreads 20
MaxThreads 5

I found this by monitoring the output of the 'top' command.

Cheers Greg

On 1 March 2018 at 11:10, Industrious  wrote:

> Dear All,
>
> In case it is a duplicate I am sorry I had to send this message again
> because it seems it has not arrived to anybody and I have not got in
> my inbox too though it appeared in online archives.
>
> I am running Tomcat 8 on Ubuntu. After a few days of running
> successfully my Tomcat's JVM crashes or becomes absolutely
> unresponsive because of OOME errors similar to this in catalina.out:
> --- cut ---
> SEVERE:Memory usage is low, parachute is non existent, your system may
> start failing.
> java.lang.OutOfMemoryError: Java heap space
> SEVERE:Memory usage is low, parachute is non existent, your system may
> start failing.
> java.lang.OutOfMemoryError: Java heap space
> SEVERE:Memory usage is low, parachute is non existent, your system may
> start failing.
> java.lang.OutOfMemoryError: Java heap space
> SEVERE:Memory usage is low, parachute is non existent, your system may
> start failing.
> java.lang.OutOfMemoryError: Java heap space
> SEVERE:Memory usage is low, parachute is non existent, your system may
> start failing.
> java.lang.OutOfMemoryError: Java heap space
> SEVERE:Memory usage is low, parachute is non existent, your system may
> start failing.
> java.lang.OutOfMemoryError: Java heap space: failed reallocation of
> scalar replaced objects
>
> Exception: java.lang.OutOfMemoryError thrown from the
> UncaughtExceptionHandler in thread "http-nio-80-exec-2"
> --- cut ---
>
> I was using a VM with 600MB of RAM. Soon I started having problems
> when Linux was killing Tomcat (OOM killer) as it considered its JVM as
> a candidate for killing or if I specify a lower maximum heap size I
> would eventually get OOME from JVM. I moved to a VM with 1.7GB of RAM
> but the problem still occurs eventually though a bit later than on the
> 600MB VM because I somewhat increased the maximum heap size.
>
> It is strange because my webapp used to run fine on Openshift 512MB
> memory VM though it was JRE 1.7 and Tomcat 7. I did not make any
> considerable changes to server.xml (with the exception of those
> necessary to use Tomcat 8) or app itself, I attach my server.xml too.
>
> I have read Tomcat's FAQ, Wiki pages and searched mailing lists but
> nothing seems to fit my case.
>
> I would accept the fact that my application has a memory leak problem
> but it is absolutely unclear from this picture that it has one.
>
> I have tried using Eclipse MAT to analyze memory dumps and to see what
> might be causing this and it turned out that the majority of heap >
> 75% is occupied by RequestInfo instances. I would be really grateful
> if someone could tell me what might be happening there.
>
> Looking forward to your reply.
>
> Best regards,
> Ivan
>
> 
> =
> My current environment:
> VM with 1 CPU which has 1.7GB RAM
>
> Tomcat 8.0.32 (8.0.32-1ubuntu1.5) on Ubuntu 16.04.1.
>
> $ java -version
> openjdk version "1.8.0_151"
> OpenJDK Runtime Environment (build 1.8.0_151-8u151-b12-0ubuntu0.
> 16.04.2-b12)
> OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)
>
> $ less /var/lib/tomcat8/bin/setenv.sh
> JAVA_OPTS="-Dlog4j.logging.dir=$CATALINA_BASE/logs \
> -Dlogging.dir=$CATALINA_BASE/logs \
> -Djava.awt.headless=true \
> -Xms220M \
> -Xmx220M \
> -Dsun.java2d.cmm=sun.java2d.cmm.kcms.KcmsServiceProvider \
> -Djava.security.egd=file:/dev/./urandom \
> -XX:+HeapDumpOnOutOfMemoryError \
> -XX:HeapDumpPath=$CATALINA_BASE/logs/java_pid%p.hprof"
>
>
> Eclipse MAT report
> --
> 1,121 instances of "org.apache.coyote.RequestInfo", loaded by
> "java.net.URLClassLoader @ 0xf7202b18" occupy 170,358,720 (77.39%)
> bytes. These instances are referenced from one instance of
> "java.lang.Object[]", loaded by "" - 199MB out of
> 209MB
>
> server.xml
> --
> [server.xml was attached here; the XML markup was stripped by the list archive.]

Re: Instances of org.apache.coyote.RequestInfo accumulate in RequestGroupInfo.processors and eventually result in OOME

2018-03-01 Thread Industrious
Thanks.

2018-03-01 13:18 GMT+02:00 Mark Thomas :
> On 01/03/18 11:10, Industrious wrote:
>> Dear All,
>>
>> In case it is a duplicate I am sorry I had to send this message again
>
> Please don't do that.
>
>> because it seems it has not arrived to anybody
>
> If you haven't received a reply, the most likely reasons are:
> - no-one knew the answer;
> - the question was poorly written;
> - the person that knows the answer hasn't had time to reply yet.
>
> Re-sending it won't help.
>
>> and I have not got in my inbox too
>
> That is a 'feature' of GMail.
>
>> though it appeared in online archives.
>
> Which means it arrived.
>
> Mark
>
>
>>
>> I am running Tomcat 8 on Ubuntu. After a few days of running
>> successfully my Tomcat's JVM crashes or becomes absolutely
>> unresponsive because of OOME errors similar to this in catalina.out:
>> --- cut ---
>> SEVERE:Memory usage is low, parachute is non existent, your system may
>> start failing.
>> java.lang.OutOfMemoryError: Java heap space
>> SEVERE:Memory usage is low, parachute is non existent, your system may
>> start failing.
>> java.lang.OutOfMemoryError: Java heap space
>> SEVERE:Memory usage is low, parachute is non existent, your system may
>> start failing.
>> java.lang.OutOfMemoryError: Java heap space
>> SEVERE:Memory usage is low, parachute is non existent, your system may
>> start failing.
>> java.lang.OutOfMemoryError: Java heap space
>> SEVERE:Memory usage is low, parachute is non existent, your system may
>> start failing.
>> java.lang.OutOfMemoryError: Java heap space
>> SEVERE:Memory usage is low, parachute is non existent, your system may
>> start failing.
>> java.lang.OutOfMemoryError: Java heap space: failed reallocation of
>> scalar replaced objects
>>
>> Exception: java.lang.OutOfMemoryError thrown from the
>> UncaughtExceptionHandler in thread "http-nio-80-exec-2"
>> --- cut ---
>>
>> I was using a VM with 600MB of RAM. Soon I started having problems
>> when Linux was killing Tomcat (OOM killer) as it considered its JVM as
>> a candidate for killing or if I specify a lower maximum heap size I
>> would eventually get OOME from JVM. I moved to a VM with 1.7GB of RAM
>> but the problem still occurs eventually though a bit later than on the
>> 600MB VM because I somewhat increased the maximum heap size.
>>
>> It is strange because my webapp used to run fine on Openshift 512MB
>> memory VM though it was JRE 1.7 and Tomcat 7. I did not make any
>> considerable changes to server.xml (with the exception of those
>> necessary to use Tomcat 8) or app itself, I attach my server.xml too.
>>
>> I have read Tomcat's FAQ, Wiki pages and searched mailing lists but
>> nothing seems to fit my case.
>>
>> I would accept the fact that my application has a memory leak problem
>> but it is absolutely unclear from this picture that it has one.
>>
>> I have tried using Eclipse MAT to analyze memory dumps and to see what
>> might be causing this and it turned out that the majority of heap >
>> 75% is occupied by RequestInfo instances. I would be really grateful
>> if someone could tell me what might be happening there.
>>
>> Looking forward to your reply.
>>
>> Best regards,
>> Ivan
>>
>> =
>> My current environment:
>> VM with 1 CPU which has 1.7GB RAM
>>
>> Tomcat 8.0.32 (8.0.32-1ubuntu1.5) on Ubuntu 16.04.1.
>>
>> $ java -version
>> openjdk version "1.8.0_151"
>> OpenJDK Runtime Environment (build 1.8.0_151-8u151-b12-0ubuntu0.16.04.2-b12)
>> OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)
>>
>> $ less /var/lib/tomcat8/bin/setenv.sh
>> JAVA_OPTS="-Dlog4j.logging.dir=$CATALINA_BASE/logs \
>> -Dlogging.dir=$CATALINA_BASE/logs \
>> -Djava.awt.headless=true \
>> -Xms220M \
>> -Xmx220M \
>> -Dsun.java2d.cmm=sun.java2d.cmm.kcms.KcmsServiceProvider \
>> -Djava.security.egd=file:/dev/./urandom \
>> -XX:+HeapDumpOnOutOfMemoryError \
>> -XX:HeapDumpPath=$CATALINA_BASE/logs/java_pid%p.hprof"
>>
>>
>> Eclipse MAT report
>> --
>> 1,121 instances of "org.apache.coyote.RequestInfo", loaded by
>> "java.net.URLClassLoader @ 0xf7202b18" occupy 170,358,720 (77.39%)
>> bytes. These instances are referenced from one instance of
>> "java.lang.Object[]", loaded by "" - 199MB out of
>> 209MB
>>
>> server.xml
>> --
>> [server.xml was attached here; the XML markup was stripped by the list archive.]

Re: Instances of org.apache.coyote.RequestInfo accumulate in RequestGroupInfo.processors and eventually result in OOME

2018-03-01 Thread Mark Thomas
On 01/03/18 11:10, Industrious wrote:
> Dear All,
> 
> In case it is a duplicate I am sorry I had to send this message again

Please don't do that.

> because it seems it has not arrived to anybody

If you haven't received a reply, the most likely reasons are:
- no-one knew the answer;
- the question was poorly written;
- the person that knows the answer hasn't had time to reply yet.

Re-sending it won't help.

> and I have not got in my inbox too

That is a 'feature' of GMail.

> though it appeared in online archives.

Which means it arrived.

Mark


> 
> I am running Tomcat 8 on Ubuntu. After a few days of running
> successfully my Tomcat's JVM crashes or becomes absolutely
> unresponsive because of OOME errors similar to this in catalina.out:
> --- cut ---
> SEVERE:Memory usage is low, parachute is non existent, your system may
> start failing.
> java.lang.OutOfMemoryError: Java heap space
> SEVERE:Memory usage is low, parachute is non existent, your system may
> start failing.
> java.lang.OutOfMemoryError: Java heap space
> SEVERE:Memory usage is low, parachute is non existent, your system may
> start failing.
> java.lang.OutOfMemoryError: Java heap space
> SEVERE:Memory usage is low, parachute is non existent, your system may
> start failing.
> java.lang.OutOfMemoryError: Java heap space
> SEVERE:Memory usage is low, parachute is non existent, your system may
> start failing.
> java.lang.OutOfMemoryError: Java heap space
> SEVERE:Memory usage is low, parachute is non existent, your system may
> start failing.
> java.lang.OutOfMemoryError: Java heap space: failed reallocation of
> scalar replaced objects
> 
> Exception: java.lang.OutOfMemoryError thrown from the
> UncaughtExceptionHandler in thread "http-nio-80-exec-2"
> --- cut ---
> 
> I was using a VM with 600MB of RAM. Soon I started having problems
> when Linux was killing Tomcat (OOM killer) as it considered its JVM as
> a candidate for killing or if I specify a lower maximum heap size I
> would eventually get OOME from JVM. I moved to a VM with 1.7GB of RAM
> but the problem still occurs eventually though a bit later than on the
> 600MB VM because I somewhat increased the maximum heap size.
> 
> It is strange because my webapp used to run fine on Openshift 512MB
> memory VM though it was JRE 1.7 and Tomcat 7. I did not make any
> considerable changes to server.xml (with the exception of those
> necessary to use Tomcat 8) or app itself, I attach my server.xml too.
> 
> I have read Tomcat's FAQ, Wiki pages and searched mailing lists but
> nothing seems to fit my case.
> 
> I would accept the fact that my application has a memory leak problem
> but it is absolutely unclear from this picture that it has one.
> 
> I have tried using Eclipse MAT to analyze memory dumps and to see what
> might be causing this and it turned out that the majority of heap >
> 75% is occupied by RequestInfo instances. I would be really grateful
> if someone could tell me what might be happening there.
> 
> Looking forward to your reply.
> 
> Best regards,
> Ivan
> 
> =
> My current environment:
> VM with 1 CPU which has 1.7GB RAM
> 
> Tomcat 8.0.32 (8.0.32-1ubuntu1.5) on Ubuntu 16.04.1.
> 
> $ java -version
> openjdk version "1.8.0_151"
> OpenJDK Runtime Environment (build 1.8.0_151-8u151-b12-0ubuntu0.16.04.2-b12)
> OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)
> 
> $ less /var/lib/tomcat8/bin/setenv.sh
> JAVA_OPTS="-Dlog4j.logging.dir=$CATALINA_BASE/logs \
> -Dlogging.dir=$CATALINA_BASE/logs \
> -Djava.awt.headless=true \
> -Xms220M \
> -Xmx220M \
> -Dsun.java2d.cmm=sun.java2d.cmm.kcms.KcmsServiceProvider \
> -Djava.security.egd=file:/dev/./urandom \
> -XX:+HeapDumpOnOutOfMemoryError \
> -XX:HeapDumpPath=$CATALINA_BASE/logs/java_pid%p.hprof"
> 
> 
> Eclipse MAT report
> --
> 1,121 instances of "org.apache.coyote.RequestInfo", loaded by
> "java.net.URLClassLoader @ 0xf7202b18" occupy 170,358,720 (77.39%)
> bytes. These instances are referenced from one instance of
> "java.lang.Object[]", loaded by "" - 199MB out of
> 209MB
> 
> server.xml
> --
> [server.xml was attached here; the XML markup was stripped by the list archive.]

Instances of org.apache.coyote.RequestInfo accumulate in RequestGroupInfo.processors and eventually result in OOME

2018-03-01 Thread Industrious
Dear All,

In case this is a duplicate, I am sorry; I had to send this message again
because it seems it did not reach anybody, and I did not receive it in my
inbox either, though it appeared in the online archives.

I am running Tomcat 8 on Ubuntu. After a few days of running
successfully, my Tomcat's JVM crashes or becomes absolutely
unresponsive because of OOME errors similar to these in catalina.out:
--- cut ---
SEVERE:Memory usage is low, parachute is non existent, your system may
start failing.
java.lang.OutOfMemoryError: Java heap space
SEVERE:Memory usage is low, parachute is non existent, your system may
start failing.
java.lang.OutOfMemoryError: Java heap space
SEVERE:Memory usage is low, parachute is non existent, your system may
start failing.
java.lang.OutOfMemoryError: Java heap space
SEVERE:Memory usage is low, parachute is non existent, your system may
start failing.
java.lang.OutOfMemoryError: Java heap space
SEVERE:Memory usage is low, parachute is non existent, your system may
start failing.
java.lang.OutOfMemoryError: Java heap space
SEVERE:Memory usage is low, parachute is non existent, your system may
start failing.
java.lang.OutOfMemoryError: Java heap space: failed reallocation of
scalar replaced objects

Exception: java.lang.OutOfMemoryError thrown from the
UncaughtExceptionHandler in thread "http-nio-80-exec-2"
--- cut ---

I was using a VM with 600MB of RAM. Soon I started having problems:
either Linux killed Tomcat (the OOM killer considered its JVM a good
candidate), or, if I specified a lower maximum heap size, I would
eventually get an OOME from the JVM. I moved to a VM with 1.7GB of RAM,
but the problem still occurs eventually, though a bit later than on the
600MB VM, because I somewhat increased the maximum heap size.

It is strange because my webapp used to run fine on an OpenShift VM with
512MB of memory, though that was JRE 1.7 and Tomcat 7. I did not make any
considerable changes to server.xml (with the exception of those necessary
to use Tomcat 8) or to the app itself; I attach my server.xml too.

I have read Tomcat's FAQ and Wiki pages and searched the mailing lists,
but nothing seems to fit my case.

I would accept that my application has a memory leak, but it is
absolutely unclear from this picture that it does.

I have tried using Eclipse MAT to analyze memory dumps and see what
might be causing this, and it turned out that the majority of the heap
(more than 75%) is occupied by RequestInfo instances. I would be really
grateful if someone could tell me what might be happening there.

Looking forward to your reply.

Best regards,
Ivan

=
My current environment:
VM with 1 CPU which has 1.7GB RAM

Tomcat 8.0.32 (8.0.32-1ubuntu1.5) on Ubuntu 16.04.1.

$ java -version
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-8u151-b12-0ubuntu0.16.04.2-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)

$ less /var/lib/tomcat8/bin/setenv.sh
JAVA_OPTS="-Dlog4j.logging.dir=$CATALINA_BASE/logs \
-Dlogging.dir=$CATALINA_BASE/logs \
-Djava.awt.headless=true \
-Xms220M \
-Xmx220M \
-Dsun.java2d.cmm=sun.java2d.cmm.kcms.KcmsServiceProvider \
-Djava.security.egd=file:/dev/./urandom \
-XX:+HeapDumpOnOutOfMemoryError \
-XX:HeapDumpPath=$CATALINA_BASE/logs/java_pid%p.hprof"


Eclipse MAT report
--
1,121 instances of "org.apache.coyote.RequestInfo", loaded by
"java.net.URLClassLoader @ 0xf7202b18" occupy 170,358,720 (77.39%)
bytes. These instances are referenced from one instance of
"java.lang.Object[]", loaded by "" - 199MB out of
209MB

server.xml
--
[The server.xml content was lost here: the list archive stripped the XML
markup. The attribute fragments that survive elsewhere in the thread show a
fairly standard configuration: the usual lifecycle listeners (including
JreMemoryLeakPreventionListener, GlobalResourcesLifecycleListener and
ThreadLocalLeakPreventionListener, plus what appears to be an APR listener
with SSLEngine="on"), an Executor "tomcatThreadPool" with maxThreads="20"
and minSpareThreads="4", an HTTP connector with connectionTimeout="3000",
redirectPort="443" and executor="tomcatThreadPool", an HTTPS connector using
org.apache.coyote.http11.Http11NioProtocol with maxThreads="150",
SSLEnabled="true", scheme="https", secure="true", clientAuth="false",
sslProtocol="TLS", executor="tomcatThreadPool" and keystore settings, a Host
with unpackWARs="true" and autoDeploy="true", a valve with
protocolHeader="x-forwarded-proto", and an access log valve with
requestAttributesEnabled="true".]

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Instances of org.apache.coyote.RequestInfo accumulate in RequestGroupInfo.processors and eventually result in OOME

2018-02-24 Thread Industrious
Dear All,

I am running Tomcat 8 on Ubuntu. After a few days of running successfully,
my Tomcat's JVM crashes or becomes absolutely unresponsive because of OOME
errors similar to these in catalina.out:
--- cut ---
SEVERE:Memory usage is low, parachute is non existent, your system may
start failing.
java.lang.OutOfMemoryError: Java heap space
SEVERE:Memory usage is low, parachute is non existent, your system may
start failing.
java.lang.OutOfMemoryError: Java heap space
SEVERE:Memory usage is low, parachute is non existent, your system may
start failing.
java.lang.OutOfMemoryError: Java heap space
SEVERE:Memory usage is low, parachute is non existent, your system may
start failing.
java.lang.OutOfMemoryError: Java heap space
SEVERE:Memory usage is low, parachute is non existent, your system may
start failing.
java.lang.OutOfMemoryError: Java heap space
SEVERE:Memory usage is low, parachute is non existent, your system may
start failing.
java.lang.OutOfMemoryError: Java heap space: failed reallocation of scalar
replaced objects

Exception: java.lang.OutOfMemoryError thrown from the
UncaughtExceptionHandler in thread "http-nio-80-exec-2"
--- cut ---

My webapp used to run on OpenShift v2, which only supported JRE 1.7 and some
Tomcat 7 version. OpenShift v2 closed, I moved to Google Compute Engine, and
I have been struggling to run Tomcat 8 there ever since.

At first I was using a GCE micro instance with 600MB of RAM. Soon I started
having problems: either Linux killed Tomcat (the OOM killer considered its
JVM a good candidate), or, if I specified a lower maximum heap size, I would
eventually get an OOME from the JVM.

I moved to a GCE small instance, which has 1.7GB of RAM, but the problem
still occurs eventually, though a bit later than on the 600MB VM because I
somewhat increased the maximum heap size.

It is strange because my webapp was running fine on OpenShift, though that
was Tomcat 7. I did not make any considerable changes to server.xml (with
the exception of those necessary to use Tomcat 8); I attach my server.xml
too.

I have read Tomcat's FAQ and Wiki pages and searched the mailing lists, but
nothing seems to fit my case.

I would accept that my application has a memory leak, but it is absolutely
unclear from this picture that it does.

I have tried using Eclipse MAT to see what might be causing this, and I
attach two screenshots showing that the majority of the heap is occupied by
RequestInfo instances. I would be really grateful if someone could take a
look and tell me what might be happening there.

Looking forward to your reply.

Best regards,
Ivan

=
My current environment:
g1-small 1 CPU which has 1.7GB RAM

Tomcat 8.0.32 (8.0.32-1ubuntu1.5) on Ubuntu 16.04.1.

$ java -version
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-8u151-b12-0ubuntu0.16.04.2-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)

$ less /var/lib/tomcat8/bin/setenv.sh
JAVA_OPTS="-Dlog4j.logging.dir=$CATALINA_BASE/logs \
-Dlogging.dir=$CATALINA_BASE/logs \
-Djava.awt.headless=true \
-Xms220M \
-Xmx220M \
-Dsun.java2d.cmm=sun.java2d.cmm.kcms.KcmsServiceProvider \
-Djava.security.egd=file:/dev/./urandom \
-XX:+HeapDumpOnOutOfMemoryError \
-XX:HeapDumpPath=$CATALINA_BASE/logs/java_pid%p.hprof"
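
Alongside -XX:+HeapDumpOnOutOfMemoryError it can help to get a warning before
the heap fills up. A small sketch using the standard memory MXBean; the 80%
threshold and the one-minute interval are assumptions:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: logs heap usage once a minute and flags when it crosses 80% of -Xmx,
// giving an early warning well before the "parachute" messages and the OOME.
// Threshold and interval are assumptions; wire the output into your own logging.
public class HeapWatcher {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            MemoryUsage heap = memory.getHeapMemoryUsage();
            long usedMb = heap.getUsed() / (1024 * 1024);
            long maxMb = heap.getMax() / (1024 * 1024);
            String warning = heap.getUsed() > heap.getMax() * 0.8 ? " <-- above 80% of max" : "";
            System.out.println("Heap used: " + usedMb + "MB of " + maxMb + "MB" + warning);
        }, 0, 1, TimeUnit.MINUTES);
    }
}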



-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org