Hello, Mark,
Thanks for your attention.
Could you take a look at the class histogram from today's OOME heap dump?
It may provide some useful clues.
I see a spike in CPU usage around the time the dump was generated, but
that might just be the garbage collector's futile attempt to free up
memory.
Class Name                                               |   Objects | Shallow Heap | Retained Heap
----------------------------------------------------------------------------------------------------
org.apache.coyote.RequestInfo                            |     1 118 |       98 384 | >= 169 690 688
org.apache.coyote.Request                                |     1 118 |      187 824 | >= 169 564 168
byte[]                                                   |    70 337 |  120 141 712 | >= 120 141 712
org.apache.tomcat.util.net.SecureNioChannel              |       985 |       47 280 | >=  59 649 592
char[]                                                   |   128 612 |   55 092 504 | >=  55 092 504
org.apache.tomcat.util.buf.CharChunk                     |    89 026 |    4 273 248 | >=  41 134 168
java.nio.HeapByteBuffer                                  |     4 256 |      204 288 | >=  33 834 864
org.apache.tomcat.util.net.NioEndpoint$NioBufferHandler  |       985 |       23 640 | >=  33 482 120
org.apache.catalina.connector.Request                    |     1 118 |      187 824 | >=  33 452 000
org.apache.catalina.connector.Response                   |     1 118 |       71 552 | >=  28 233 912
org.apache.catalina.connector.OutputBuffer               |     1 118 |       89 440 | >=  27 898 496
org.apache.catalina.connector.InputBuffer                |     1 118 |       80 496 | >=  27 270 448
sun.security.ssl.SSLEngineImpl                           |       985 |      133 960 | >=  25 596 024
sun.security.ssl.EngineInputRecord                       |       985 |       63 040 | >=  20 648 288
org.apache.tomcat.util.buf.ByteChunk                     |    99 093 |    4 756 464 | >=  15 422 384
java.lang.String                                         |   108 196 |    2 596 704 | >=  14 737 456
org.apache.tomcat.util.buf.MessageBytes                  |    84 554 |    4 058 592 | >=  12 960 440
java.util.HashMap$Node[]                                 |     9 139 |    1 156 352 | >=  12 864 216
java.util.HashMap                                        |    10 997 |      527 856 | >=  12 817 352
java.util.HashMap$Node                                   |    56 583 |    1 810 656 | >=  11 484 248
org.apache.catalina.loader.WebappClassLoader             |         2 |          272 | >=  10 199 128
org.apache.coyote.http11.InternalNioOutputBuffer         |     1 118 |       89 440 | >=   9 811 568
java.util.concurrent.ConcurrentHashMap                   |     3 823 |      244 672 | >=   9 646 384
java.util.concurrent.ConcurrentHashMap$Node[]            |     1 295 |      260 664 | >=   9 404 616
java.lang.Class                                          |     9 901 |       85 176 | >=   9 233 664
java.util.concurrent.ConcurrentHashMap$Node              |    15 554 |      497 728 | >=   9 111 176
org.apache.tomcat.util.http.MimeHeaders                  |     2 236 |       53 664 | >=   7 119 880
org.apache.tomcat.util.http.MimeHeaderField[]            |     2 236 |      141 248 | >=   7 066 208
org.apache.tomcat.util.http.MimeHeaderField              |    20 201 |      484 824 | >=   6 924 960
org.apache.catalina.loader.ResourceEntry                 |     5 133 |      246 384 | >=   5 414 616
java.lang.Object[]                                       |    17 925 |    1 249 176 | >=   5 046 960
java.io.ByteArrayOutputStream                            |     3 857 |       92 568 | >=   4 603 096
org.apache.catalina.webresources.JarResourceSet          |        46 |        3 312 | >=   4 576 680
java.text.SimpleDateFormat                               |     3 413 |      218 432 | >=   4 236 656
java.text.SimpleDateFormat[]                             |     1 126 |       36 032 | >=   4 227 008
sun.security.ssl.HandshakeHash                           |       985 |       39 400 | >=   3 895 560
----------------------------------------------------------------------------------------------------
Total: 36 of 9 892 entries; 9 856 more                   | 1 227 954 |  219 995 336 |
----------------------------------------------------------------------------------------------------
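For what it is worth, if further dumps would help, the standard JVM options for
capturing one automatically on the next OOME are (shown generically here, not
our exact command line):

    -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/path/to/dumps

and, for a quick comparison of the live heap between incidents, a plain class
histogram can be taken with:

    jmap -histo:live <pid>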
2018-03-05 16:59 GMT+02:00 Mark Thomas <[email protected]>:
> That strikes me as odd. The RequestInfo objects are fairly small. They
> are usually part of a much larger group of objects, but they should appear
> separately in any analysis.
>
> The number of RequestInfo is limited by maxConnections.
Do you think I should try lowering maxConnections from the default of 10000?
I have configured the executor with maxThreads="20", but that did not help.
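To make sure we are talking about the same attributes, this is roughly the kind
of change I have in mind (a sketch only; the executor name, port, and protocol
are placeholders, and maxConnections="2000" is just an example value to
illustrate lowering it from the default, not something I have tested):

    <!-- shared thread pool; maxThreads="20" is the value currently in use -->
    <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
              maxThreads="20"/>

    <!-- NIO + SSL connector; maxConnections lowered from the 10000 default -->
    <Connector port="8443"
               protocol="org.apache.coyote.http11.Http11NioProtocol"
               executor="tomcatThreadPool"
               maxConnections="2000"
               SSLEnabled="true" scheme="https" secure="true"/>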
Thank you.
>
>> Could you tell me what these objects are for?
>
> Take a look at the Javadoc for the version you are using.
>
>> Shouldn't they be recycled or disposed of?
>
> No. They are re-used across multiple requests allocated to the same
> Processor.
>
> Mark
>