Re: GC allocation failure

2018-01-11 Thread Christopher Schultz

Suvendu,

On 1/11/18 10:06 AM, Suvendu Sekhar Mondal wrote:
> On Tue, Jan 9, 2018 at 10:53 AM, Leon Rosenberg wrote:
>> On Mon, Jan 8, 2018 at 10:26 AM, Mark Thomas wrote:
>> 
>>> On 08/01/18 15:16, Christopher Schultz wrote:
>>> 
>>> 
>>> 
> Therefore, the first time that the GC runs, the process can
> take longer. Also, the heap is more likely to be fragmented
> and require a heap compaction. To avoid that, till now my
> strategy is to: - Start application with the minimum heap
> size that application requires - When the GC starts up, it
> runs frequently and efficiently because the heap is small
 
 I think this is a reasonable expectation for someone who
 doesn't understand the Black Art of Garbage Collection, but
 I'm not sure it's actually true. I'm not claiming that I know
 any better than you do, but I suspect that the collector
 takes its parameters very seriously, and when you introduce
 artificial constraints (such as a smaller minimum heap size),
 the GC will attempt to respect those constraints. The reality
 is that those constraints are completely unnecessary; you 
 have only imposed them because you think you know better than
 the GC algorithm.
> 
> Thank you for all your response.
> 
> Well, most of our clients are running on IBM J9 JVM and that is
> what IBM recommends :): 
> https://www.ibm.com/support/knowledgecenter/SSYKE2_9.0.0/com.ibm.java.multiplatform.90.doc/diag/understanding/mm_heapsizing_initial.html
>
>  We have started moving our clients from WAS to Tomcat + HotSpot
> JDK8 platform - that's why I am here, learning about it and
> throwing questions :).
> 
> One thing about memory allocation by the OS: if I set up different
> values for initial and max, then after starting up the JVM,
> Windows *reserves* the max amount of memory exclusively for the
> JVM. I get that using the Private Bytes counter. So that's why I
> believe there is no chance of OOM at the OS level. What I am more
> interested in is the cost of heap expansion in the HotSpot JVM.

The choice to pre-allocate / reserve the maximum possible heap space
is up to both the JVM and the OS. On Linux, for example, requesting
100GiB on a 64-bit platform will work right away, because Linux
doesn't actually allocate memory to the process until the process
actually tries to use it. That's why the "Linux OOM killer" is such a
big deal: the process can try to avoid it by "pre-allocating" memory,
but if it doesn't do it right, the process can still fail later when
it tries to actually use that memory.
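As an illustration (a sketch; defaults vary by distribution), you can
check which overcommit policy a Linux kernel is using:

    # 0 = heuristic overcommit (the usual default), 1 = always grant,
    # 2 = strict accounting
    cat /proc/sys/vm/overcommit_memory

Under the default heuristic policy, a huge -Xmx is granted immediately
but only backed by physical pages when the JVM first touches them.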

It seems that Windows may be a little more respectful of the process's
memory requests.

In either case, pre-allocation of memory (from the OS) is not the same
thing as sizing the heap to fully populate that memory. So the Oracle JVM
may pre-allocate the "max heap size" from the OS on startup, but then
will only size the heap to the "min heap size". Any time the JVM
decides that the heap has to grow, it will have to do a lot of work to
perform that resizing operation.
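If you actually want the whole heap committed up front rather than
merely reserved, HotSpot has a flag for that (a sketch; myapp.jar and
the 4g figure are placeholders, and behavior can vary by JVM version):

    # fix the heap size and touch every page during startup
    java -Xms4g -Xmx4g -XX:+AlwaysPreTouch -jar myapp.jar

-XX:+AlwaysPreTouch zero-fills each heap page during JVM initialization,
so the cost is paid once at launch instead of during the first GC cycles.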

>>> Generally, the more memory available, the more efficient GC is.
>>> The general rule is you can optimise for any two of the
>>> following at the expense of the third: - low pause time - high
>>> throughput - low memory usage
>>> 
>>> It has been a few years since I listened to the experts talk
>>> about it but a good rule of thumb used to be that you should
>>> size your heap 3-5 times bigger than the minimum heap used once
>>> the application memory usage reaches steady state (i.e. the
>>> minimum value of the sawtooth on the heap usage graph)
>>> 
>>> 
>> Actually G1, which is very usable with Java 8 and default in JDK 9,
>> doesn't produce the sawtooth graph anymore. I also think the
>> amount of memory has less influence on GC performance in G1 or
>> Shenandoah, but instead influences whether they perform a STW
>> phase (which of course is also performance related, but
>> differently). But I am not an expert either, so I might be wrong
>> here.
>> 
>> As for OP's original statement: "When the GC starts up, it runs
>> frequently and efficiently because the heap is small", I don't
>> think it is correct anymore, especially not for G1, as long as
>> the object size is reasonable (not Humongous).
>> 
>> 
>> Leon
> 
> Yes Leon, we are seeing that G1 works best for our app. We have
> some large objects and we can't reduce their size immediately. So we
> have decided to increase the G1 region size for the time being and
> collect dead Humongous objects during Young collections.

-chris

Re: GC allocation failure

2018-01-11 Thread Suvendu Sekhar Mondal
On Tue, Jan 9, 2018 at 10:53 AM, Leon Rosenberg wrote:
> On Mon, Jan 8, 2018 at 10:26 AM, Mark Thomas wrote:
>
>> On 08/01/18 15:16, Christopher Schultz wrote:
>>
>> 
>>
>> >> Therefore, the first time that the GC runs, the process can take
>> >> longer. Also, the heap is more likely to be fragmented and require
>> >> a heap compaction. To avoid that, till now my strategy is to: -
>> >> Start application with the minimum heap size that application
>> >> requires - When the GC starts up, it runs frequently and
>> >> efficiently because the heap is small
>> >
>> > I think this is a reasonable expectation for someone who doesn't
>> > understand the Black Art of Garbage Collection, but I'm not sure it's
>> > actually true. I'm not claiming that I know any better than you do,
>> > but I suspect that the collector takes its parameters very seriously,
>> > and when you introduce artificial constraints (such as a smaller
>> > minimum heap size), the GC will attempt to respect those constraints.
>> > The reality is that those constraints are completely unnecessary; you
>> > have only imposed them because you think you know better than the GC
>> > algorithm.

Thank you for all your response.

Well, most of our clients are running on IBM J9 JVM and that is what
IBM recommends :):
https://www.ibm.com/support/knowledgecenter/SSYKE2_9.0.0/com.ibm.java.multiplatform.90.doc/diag/understanding/mm_heapsizing_initial.html

We have started moving our clients from WAS to Tomcat + HotSpot JDK8
platform - that's why I am here, learning about it and throwing
questions :).

One thing about memory allocation by the OS: if I set up different values
for initial and max, then after starting up the JVM, Windows
*reserves* the max amount of memory exclusively for the JVM. I get
that using the Private Bytes counter. So that's why I believe there is no
chance of OOM at the OS level. What I am more interested in is the cost
of heap expansion in the HotSpot JVM.

>> Generally, the more memory available, the more efficient GC is. The
>> general rule is you can optimise for any two of the following at the
>> expense of the third:
>> - low pause time
>> - high throughput
>> - low memory usage
>>
>> It has been a few years since I listened to the experts talk about it
>> but a good rule of thumb used to be that you should size your heap 3-5
>> times bigger than the minimum heap used once the application memory
>> usage reaches steady state (i.e. the minimum value of the sawtooth on
>> the heap usage graph)
>>
>>
> Actually G1, which is very usable with Java 8 and default in JDK 9, doesn't
> produce the sawtooth graph anymore.
> I also think the amount of memory has less influence on GC performance in
> G1 or Shenandoah, but instead influences whether they perform a STW phase
> (which of course is also performance related, but differently).
> But I am not an expert either, so I might be wrong here.
>
> As for OP's original statement: "When the GC starts up, it runs frequently
> and efficiently because the heap is small", I don't think it is correct
> anymore, especially not for G1, as long as the object size is reasonable
> (not Humongous).
>
>
> Leon

Yes Leon, we are seeing that G1 works best for our app. We have
some large objects and we can't reduce their size immediately. So we
have decided to increase the G1 region size for the time being and
collect dead Humongous objects during Young collections.
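For reference, the relevant HotSpot flags look roughly like this (a
sketch, with placeholder values; the region size must be a power of two
between 1m and 32m, and eager reclaim of Humongous objects is on by
default since 8u60):

    java -XX:+UseG1GC -XX:G1HeapRegionSize=16m \
         -XX:+G1EagerReclaimHumongousObjects ...

Since G1 treats an object as Humongous once it is half a region or
larger, doubling the region size shrinks the set of objects that
qualify.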


Thanks!
Suvendu




Re: GC allocation failure

2018-01-08 Thread Leon Rosenberg
On Mon, Jan 8, 2018 at 10:26 AM, Mark Thomas wrote:

> On 08/01/18 15:16, Christopher Schultz wrote:
>
> 
>
> >> Therefore, the first time that the GC runs, the process can take
> >> longer. Also, the heap is more likely to be fragmented and require
> >> a heap compaction. To avoid that, till now my strategy is to: -
> >> Start application with the minimum heap size that application
> >> requires - When the GC starts up, it runs frequently and
> >> efficiently because the heap is small
> >
> > I think this is a reasonable expectation for someone who doesn't
> > understand the Black Art of Garbage Collection, but I'm not sure it's
> > actually true. I'm not claiming that I know any better than you do,
> > but I suspect that the collector takes its parameters very seriously,
> > and when you introduce artificial constraints (such as a smaller
> > minimum heap size), the GC will attempt to respect those constraints.
> > The reality is that those constraints are completely unnecessary; you
> > have only imposed them because you think you know better than the GC
> > algorithm.
>
> Generally, the more memory available, the more efficient GC is. The
> general rule is you can optimise for any two of the following at the
> expense of the third:
> - low pause time
> - high throughput
> - low memory usage
>
> It has been a few years since I listened to the experts talk about it
> but a good rule of thumb used to be that you should size your heap 3-5
> times bigger than the minimum heap used once the application memory
> usage reaches steady state (i.e. the minimum value of the sawtooth on
> the heap usage graph)
>
>
Actually G1, which is very usable with Java 8 and default in JDK 9, doesn't
produce the sawtooth graph anymore.
I also think the amount of memory has less influence on GC performance in
G1 or Shenandoah, but instead influences whether they perform a STW phase
(which of course is also performance related, but differently).
But I am not an expert either, so I might be wrong here.

As for OP's original statement: "When the GC starts up, it runs frequently
and efficiently because the heap is small", I don't think it is correct
anymore, especially not for G1, as long as the object size is reasonable
(not Humongous).


Leon


Re: GC allocation failure

2018-01-08 Thread Leon Rosenberg
On Mon, Jan 8, 2018 at 10:16 AM, Christopher Schultz
<ch...@christopherschultz.net> wrote:

>
> Suvendu,
>
> On 1/5/18 6:46 AM, Suvendu Sekhar Mondal wrote:
> > I really never found any explanation behind this "initial=max" heap
> > size theory until I saw your mail, although I see this type of
> > configuration in most places. It will be awesome if you can
> > tell me more about the benefits of this configuration.
>
> It's really just about saving the time it takes to resize the heap.
> Because the JVM will never shrink the heap (at least not in any JVMs
> I'm familiar with), a long-running server-side process will (likely)
> eventually use all of the heap you allow it to use. Basically, memory
> exists to be used, so why not use all of it immediately?
>
> > I usually do not set initial and max heap size to the same value
> > because garbage collection is delayed until the heap is full.
>
> The heap is divided into sections. The first section to be GC'd after JVM
> launch is likely to be the eden space which is relatively small, and
> few objects will survive the GC operation (lots of temporary String
> objects, etc. will die without being tenured). The only spaces that
> take a "long" time to clean are the tenured generation and the (until
> recently named/replaced) permanent generation (which isn't actually
> permanent). Cleaning those spaces is long, but a GC to clean those
> spaces should not happen for a long time after the JVM starts.
>
> Also, most collector algorithms/strategies have two separate types of
> operation: a short/minor GC and a long/full GC. As long as short/minor
> GC operations take place regularly, you should not experience
> application-pauses while the heap is reorganized.
>
> Finally, application pauses are likely to be long if the entire heap
> must be re-sized because then *everything* must be re-located.
>
> > Therefore, the first time that the GC runs, the process can take
> > longer. Also, the heap is more likely to be fragmented and require
> > a heap compaction. To avoid that, till now my strategy is to: -
> > Start application with the minimum heap size that application
> > requires - When the GC starts up, it runs frequently and
> > efficiently because the heap is small
>
> I think this is a reasonable expectation for someone who doesn't
> understand the Black Art of Garbage Collection, but I'm not sure it's
> actually true. I'm not claiming that I know any better than you do,
> but I suspect that the collector takes its parameters very seriously,
> and when you introduce artificial constraints (such as a smaller
> minimum heap size), the GC will attempt to respect those constraints.
> The reality is that those constraints are completely unnecessary; you
> have only imposed them because you think you know better than the GC
> algorithm.
>
> > - When the heap is full of live objects, the GC compacts the heap.
> > If sufficient garbage is still not recovered or any of the other
> > conditions for heap expansion are met, the GC expands the heap.
> >
> > Another thing, what if I know the server load varies a lot (from 10s
> > at night time to 1s during day time) across different time
> > frames; does "initial=max heap" apply to that situation also?
>
> My position is that initial == max heap is always the right recipe for a
> server-side JVM, regardless of the load profile. Setting initial < max
> may even cause an OOM at the OS level in the future if the memory is
> over-committed (or, rather, WILL BE over-committed if/when the heap
> must expand).
>
>
>
To add my 2 cents to what Christopher said (which was a very correct
explanation already): the only valid exception to the initial=max rule in
my eyes is when you are actually not sure how much memory your process
will need. And if you have a bunch of microservices on one machine, you
may not want to spend all the memory without need.
So start a little bit lower but give room for expansion in case the process
needs it.
For example, I have a VM with 13 'small' JVMs on it. The difference between
ms and mx would be about 5GB. In this specific case I suppose it is OK to
provide different values, at least for some time, and adjust later.

However, reading GC logs or using tools like jClarity can help you find the
proper pool size for your collector/JVM version/application - unless you
release and change your memory usage pattern every week or so, in which
case using Xms != Xmx seems OK to me, as a safety net.

regards
Leon


Re: GC allocation failure

2018-01-08 Thread Mark Thomas
On 08/01/18 15:16, Christopher Schultz wrote:



>> Therefore, the first time that the GC runs, the process can take
>> longer. Also, the heap is more likely to be fragmented and require
>> a heap compaction. To avoid that, till now my strategy is to: -
>> Start application with the minimum heap size that application
>> requires - When the GC starts up, it runs frequently and
>> efficiently because the heap is small
> 
> I think this is a reasonable expectation for someone who doesn't
> understand the Black Art of Garbage Collection, but I'm not sure it's
> actually true. I'm not claiming that I know any better than you do,
> but I suspect that the collector takes its parameters very seriously,
> and when you introduce artificial constraints (such as a smaller
> minimum heap size), the GC will attempt to respect those constraints.
> The reality is that those constraints are completely unnecessary; you
> have only imposed them because you think you know better than the GC
> algorithm.

Generally, the more memory available, the more efficient GC is. The
general rule is you can optimise for any two of the following at the
expense of the third:
- low pause time
- high throughput
- low memory usage

It has been a few years since I listened to the experts talk about it
but a good rule of thumb used to be that you should size your heap 3-5
times bigger than the minimum heap used once the application memory
usage reaches steady state (i.e. the minimum value of the sawtooth on
the heap usage graph)
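As a worked example (numbers invented purely to illustrate the rule of
thumb): if the sawtooth bottoms out around 800MB at steady state, the
suggested range is

    3 x 800MB = 2.4GB   up to   5 x 800MB = 4GB

so something like -Xms4g -Xmx4g would be in range.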

Mark




Re: GC allocation failure

2018-01-08 Thread Christopher Schultz

Suvendu,

On 1/5/18 6:46 AM, Suvendu Sekhar Mondal wrote:
> I really never found any explanation behind this "initial=max" heap
> size theory until I saw your mail, although I see this type of
> configuration in most places. It will be awesome if you can
> tell me more about the benefits of this configuration.

It's really just about saving the time it takes to resize the heap.
Because the JVM will never shrink the heap (at least not in any JVMs
I'm familiar with), a long-running server-side process will (likely)
eventually use all of the heap you allow it to use. Basically, memory
exists to be used, so why not use all of it immediately?

> I usually do not set initial and max heap size to the same value
> because garbage collection is delayed until the heap is full.

The heap is divided into sections. The first section to be GC'd after JVM
launch is likely to be the eden space which is relatively small, and
few objects will survive the GC operation (lots of temporary String
objects, etc. will die without being tenured). The only spaces that
take a "long" time to clean are the tenured generation and the (until
recently named/replaced) permanent generation (which isn't actually
permanent). Cleaning those spaces is long, but a GC to clean those
spaces should not happen for a long time after the JVM starts.

Also, most collector algorithms/strategies have two separate types of
operation: a short/minor GC and a long/full GC. As long as short/minor
GC operations take place regularly, you should not experience
application-pauses while the heap is reorganized.

Finally, application pauses are likely to be long if the entire heap
must be re-sized because then *everything* must be re-located.
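If you want to see which kind of collection you are actually getting,
JDK 8's GC logging will label them (a sketch; in JDK 9+ these flags were
replaced by -Xlog:gc*):

    java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps ...

Minor collections are logged as [GC ...] and major ones as [Full GC ...],
with per-generation occupancy before and after each run.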

> Therefore, the first time that the GC runs, the process can take
> longer. Also, the heap is more likely to be fragmented and require
> a heap compaction. To avoid that, till now my strategy is to: -
> Start application with the minimum heap size that application
> requires - When the GC starts up, it runs frequently and
> efficiently because the heap is small

I think this is a reasonable expectation for someone who doesn't
understand the Black Art of Garbage Collection, but I'm not sure it's
actually true. I'm not claiming that I know any better than you do,
but I suspect that the collector takes its parameters very seriously,
and when you introduce artificial constraints (such as a smaller
minimum heap size), the GC will attempt to respect those constraints.
The reality is that those constraints are completely unnecessary; you
have only imposed them because you think you know better than the GC
algorithm.

> - When the heap is full of live objects, the GC compacts the heap.
> If sufficient garbage is still not recovered or any of the other
> conditions for heap expansion are met, the GC expands the heap.
> 
> Another thing, what if I know the server load varies a lot (from 10s
> at night time to 1s during day time) across different time
> frames; does "initial=max heap" apply to that situation also?

My position is that initial == max heap is always the right recipe for a
server-side JVM, regardless of the load profile. Setting initial < max
may even cause an OOM at the OS level in the future if the memory is
over-committed (or, rather, WILL BE over-committed if/when the heap
must expand).
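For a Tomcat instance, the usual place to put this is
CATALINA_BASE/bin/setenv.sh (a sketch; the sizes are placeholders, not
recommendations):

    # setenv.sh -- fixed-size heap for a long-running server JVM
    CATALINA_OPTS="$CATALINA_OPTS -Xms4g -Xmx4g"
    export CATALINA_OPTS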

-chris



Re: GC allocation failure

2018-01-07 Thread Olaf Kock



On 05.01.2018 12:46, Suvendu Sekhar Mondal wrote:

> I really never found any explanation behind this "initial=max" heap size
> theory until I saw your mail, although I see this type of configuration in
> most places. It will be awesome if you can tell me more about the benefits
> of this configuration.
The thing is: you're allowing your application to sooner or later
allocate *max* heap, no matter what. Granted, when the server just
starts up, it might require less, but in the end (say, after an hour, a
day or a couple of days) it might end up at max anyway. Server processes
typically run long. And if your OS runs out of memory, not being able to
grant those last gigs of memory that you're asking for /at some random
time/, it will trigger an out-of-memory condition. Instead of detecting
such a condition (to add drama) Sunday night at 3am, why not catch it
when your process just starts up? Java will never return heap memory to
the OS unless you stop the process, so you can't plan to use that RAM for
anything else anyway.
Imagine having 4G of available memory in your server. Now run your Java 
process with "initial=2G, max=8G". Try to predict when it'll fail.


For your desktop development machine, where you potentially constantly
restart the server: feel free to set different initial/max values. For a
server that's supposed to be running a long time: admit that you're
allocating that memory to Tomcat anyway.

> I usually do not set initial and max heap size to the same value because
> garbage collection is delayed until the heap is full. Therefore, the first
> time that the GC runs, the process can take longer. Also, the heap is more
> likely to be fragmented and require a heap compaction. To avoid that, till
> now my strategy is to:
> - Start the application with the minimum heap size that the application requires
> - When the GC starts up, it runs frequently and efficiently because the
> heap is small
> - When the heap is full of live objects, the GC compacts the heap. If
> sufficient garbage is still not recovered or any of the other conditions
> for heap expansion are met, the GC expands the heap.
Are you sure (extra extra /extra/ sure) that this is indeed the 
specified condition under which the JVM allocates new memory from the 
OS? And that this condition is stable between different 
versions/releases/implementations?

> Another thing, what if I know the server load varies a lot (from 10s at
> night time to 1s during day time) across different time frames; does
> "initial=max heap" apply to that situation also?
After running for a day, you might end up with max allocation anyway.
If this allocation is followed by a low-load phase, you haven't gained
anything: Java is not returning unused memory to the OS.


And if you're relying on your application never exceeding a certain 
amount of memory, there's another thing to consider for memory allocation:


Measure your application's memory requirement. Allocate enough memory 
for your application's highest demand (plus some security margin) and 
cap it there. You basically want the lowest amount of memory allocated 
that suits your application in the long run, to keep GC frequent and 
quick, rather than infrequent but slow.
In testing, setting different initial/max values might help you get
closer to such a value. In production I wouldn't rely on it; I'd rather
know immediately (e.g. when starting the process) whether enough memory
is available - rather than Sunday night at 3am.
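One low-tech way to take that measurement on a running JVM (a sketch;
jstat ships with the JDK, and <pid> stands for your Tomcat process id):

    # print GC statistics once per second
    jstat -gc <pid> 1000

Watch the OU column (old-generation usage, in KB) right after each full
GC: its floor over time approximates the application's live set, which is
the number to base the heap size on.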


There's an argument that this was particularly necessary in the 32bit 
times, with the JVM demanding contiguous space. With 64bit address 
space, this particular aspect shouldn't be a problem any more - however 
the generally available memory (note: available to the process, not the 
hardware memory you stuck into your server) still can be an issue. You 
might want to conserve it.


Olaf




Re: GC allocation failure

2018-01-05 Thread Suvendu Sekhar Mondal
On Jan 4, 2018 11:14 PM, "Rainer Jung" wrote:

On 04.01.2018 at 18:20, Christopher Schultz wrote:

>
> Ambica,
>
> On 1/4/18 11:17 AM, Sanka, Ambica wrote:
>
>> I am seeing below highlighted errors in native_err logs in all my
>> tomcat applications. I also increased memory for the VM from 4GB to
>> 8GB. Still seeing those. When do we get that errors? I am reading
>> online that when program asks for memory and java cannot give,
>> that's when we see them. Please suggest. Java HotSpot(TM) 64-Bit
>> Server VM (25.20-b23) for linux-amd64 JRE (1.8.0_20-b26), built on
>> Jul 30 2014 13:13:52 by "java_re" with gcc 4.3.0 20080428 (Red Hat
>> 4.3.0-8) Memory: 4k page, physical 8061572k(2564740k free), swap
>> 4063228k(4063228k free)
>>
>> CommandLine flags: -XX:+HeapDumpOnOutOfMemoryError
>> -XX:HeapDumpPath=/opt/apache/ancillariesmonitoring/logs/
>> -XX:InitialHeapSize=128985152 -XX:MaxHeapSize=268435456
>> -XX:+PrintGC -XX:+PrintGCTimeStamps -XX:+UseCompressedClassPointers
>> -XX:+UseCompressedOops -XX:+UseParallelGC
>>
>
> Others have commented on those messages you received, but nobody
> mentioned your heap configuration. In the above command-line
> arguments, you have specified both the minimum and maximum heap
> memory. You have expressed those values in bytes which makes it
> somewhat hard to read what they actually are, but this is what you
>

I *think* the JVM top line in GC output always shows bytes, even if you
were using other units in the original switches.


I agree.


have in readable units:
>
> -XX:InitialHeapSize=128M -XX:MaxHeapSize=256M
>

but yes, that is a valid point!


> So you aren't using an 8GiB heap. You aren't even using a 4GiB heap.
> You are using a 256 *megabyte* heap. If you really want an 8GiB heap,
> you'll need to set it properly in your command-line arguments.
>
> Note that setting the initial heap size to anything other than the
> maximum heap size just makes the JVM take longer to get the heap
> generations sized appropriately. For a long-running server process, I
> think it never makes any sense to set initial < max heap size. Always
> set them to the same value so that the heap itself does not have to be
> expanded/resized during heap allocations.


Christopher,

I really never found any explanation behind this "initial=max" heap size
theory until I saw your mail, although I see this type of configuration in
most places. It will be awesome if you can tell me more about the benefits
of this configuration.

I usually do not set initial and max heap size to the same value because
garbage collection is delayed until the heap is full. Therefore, the first
time that the GC runs, the process can take longer. Also, the heap is more
likely to be fragmented and require a heap compaction. To avoid that, till
now my strategy is to:
- Start the application with the minimum heap size that the application requires
- When the GC starts up, it runs frequently and efficiently because the
heap is small
- When the heap is full of live objects, the GC compacts the heap. If
sufficient garbage is still not recovered or any of the other conditions
for heap expansion are met, the GC expands the heap.

Another thing, what if I know the server load varies a lot (from 10s at
night time to 1s during day time) across different time frames; does
"initial=max heap" apply to that situation also?

Please let me know what you think about it.

Thanks!
Suvendu


RE: GC allocation failure

2018-01-04 Thread Sanka, Ambica
Thank you. I will make initial and max heap to be same value.

Ambica Sanka
Sr J2EE IV Developer
office  703.661.7928

atpco.net
linkedIn  /  twitter @atpconews

45005 Aviation Drive
Dulles, VA 20166




-Original Message-
From: Christopher Schultz [mailto:ch...@christopherschultz.net] 
Sent: Thursday, January 04, 2018 12:20 PM
To: users@tomcat.apache.org
Subject: Re: GC allocation failure


Ambica,

On 1/4/18 11:17 AM, Sanka, Ambica wrote:
> I am seeing below highlighted errors in native_err logs in all my 
> tomcat applications. I also increased memory for the VM from 4GB to 
> 8GB. Still seeing those. When do we get that errors? I am reading 
> online that when program asks for memory and java cannot give, that's 
> when we see them. Please suggest. Java HotSpot(TM) 64-Bit Server VM 
> (25.20-b23) for linux-amd64 JRE (1.8.0_20-b26), built on Jul 30 2014 
> 13:13:52 by "java_re" with gcc 4.3.0 20080428 (Red Hat
> 4.3.0-8) Memory: 4k page, physical 8061572k(2564740k free), swap 
> 4063228k(4063228k free)
> 
> CommandLine flags: -XX:+HeapDumpOnOutOfMemoryError 
> -XX:HeapDumpPath=/opt/apache/ancillariesmonitoring/logs/
> -XX:InitialHeapSize=128985152 -XX:MaxHeapSize=268435456 -XX:+PrintGC 
> -XX:+PrintGCTimeStamps -XX:+UseCompressedClassPointers 
> -XX:+UseCompressedOops -XX:+UseParallelGC

Others have commented on those messages you received, but nobody mentioned your 
heap configuration. In the above command-line arguments, you have specified 
both the minimum and maximum heap memory. You have expressed those values in 
bytes which makes it somewhat hard to read what they actually are, but this is 
what you have in readable units:

-XX:InitialHeapSize=128M -XX:MaxHeapSize=256M

So you aren't using an 8GiB heap. You aren't even using a 4GiB heap.
You are using a 256 *megabyte* heap. If you really want an 8GiB heap, you'll 
need to set it properly in your command-line arguments.

Note that setting the initial heap size to anything other than the maximum heap 
size just makes the JVM take longer to get the heap generations sized 
appropriately. For a long-running server process, I think it never makes any 
sense to set initial < max heap size. Always set them to the same value so that 
the heap itself does not have to be expanded/resized during heap allocations.

Hope that helps,
-chris



Re: GC allocation failure

2018-01-04 Thread Rainer Jung

On 04.01.2018 at 18:20, Christopher Schultz wrote:


Ambica,

On 1/4/18 11:17 AM, Sanka, Ambica wrote:

I am seeing below highlighted errors in native_err logs in all my
tomcat applications. I also increased memory for the VM from 4GB to
8GB. Still seeing those. When do we get that errors? I am reading
online that when program asks for memory and java cannot give,
that's when we see them. Please suggest. Java HotSpot(TM) 64-Bit
Server VM (25.20-b23) for linux-amd64 JRE (1.8.0_20-b26), built on
Jul 30 2014 13:13:52 by "java_re" with gcc 4.3.0 20080428 (Red Hat
4.3.0-8) Memory: 4k page, physical 8061572k(2564740k free), swap
4063228k(4063228k free)

CommandLine flags: -XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/opt/apache/ancillariesmonitoring/logs/
-XX:InitialHeapSize=128985152 -XX:MaxHeapSize=268435456
-XX:+PrintGC -XX:+PrintGCTimeStamps -XX:+UseCompressedClassPointers
-XX:+UseCompressedOops -XX:+UseParallelGC


Others have commented on those messages you received, but nobody
mentioned your heap configuration. In the above command-line
arguments, you have specified both the minimum and maximum heap
memory. You have expressed those values in bytes which makes it
somewhat hard to read what they actually are, but this is what you


I *think* the JVM top line in GC output always shows bytes, even if you 
were using other units in the original switches.



have in readable units:

-XX:InitialHeapSize=128M -XX:MaxHeapSize=256M


but yes, that is a valid point!


So you aren't using an 8GiB heap. You aren't even using a 4GiB heap.
You are using a 256 *megabyte* heap. If you really want an 8GiB heap,
you'll need to set it properly in your command-line arguments.

Note that setting the initial heap size to anything other than the
maximum heap size just makes the JVM take longer to get the heap
generations sized appropriately. For a long-running server process, I
think it never makes any sense to set initial < max heap size. Always
set them to the same value so that the heap itself does not have to be
expanded/resized during heap allocations.


Regards,

Rainer





Re: GC allocation failure

2018-01-04 Thread Christopher Schultz

Ambica,

On 1/4/18 11:17 AM, Sanka, Ambica wrote:
> I am seeing below highlighted errors in native_err logs in all my
> tomcat applications. I also increased memory for the VM from 4GB to
> 8GB. Still seeing those. When do we get that errors? I am reading
> online that when program asks for memory and java cannot give,
> that's when we see them. Please suggest. Java HotSpot(TM) 64-Bit
> Server VM (25.20-b23) for linux-amd64 JRE (1.8.0_20-b26), built on
> Jul 30 2014 13:13:52 by "java_re" with gcc 4.3.0 20080428 (Red Hat
> 4.3.0-8) Memory: 4k page, physical 8061572k(2564740k free), swap
> 4063228k(4063228k free)
> 
> CommandLine flags: -XX:+HeapDumpOnOutOfMemoryError
> -XX:HeapDumpPath=/opt/apache/ancillariesmonitoring/logs/
> -XX:InitialHeapSize=128985152 -XX:MaxHeapSize=268435456
> -XX:+PrintGC -XX:+PrintGCTimeStamps -XX:+UseCompressedClassPointers
> -XX:+UseCompressedOops -XX:+UseParallelGC

Others have commented on those messages you received, but nobody
mentioned your heap configuration. In the above command-line
arguments, you have specified both the minimum and maximum heap
memory. You have expressed those values in bytes which makes it
somewhat hard to read what they actually are, but this is what you
have in readable units:

-XX:InitialHeapSize=128M -XX:MaxHeapSize=256M

So you aren't using an 8GiB heap. You aren't even using a 4GiB heap.
You are using a 256 *megabyte* heap. If you really want an 8GiB heap,
you'll need to set it properly in your command-line arguments.

Note that setting the initial heap size to anything other than the
maximum heap size just makes the JVM take longer to get the heap
generations sized appropriately. For a long-running server process, I
think it never makes any sense to set initial < max heap size. Always
set them to the same value so that the heap itself does not have to be
expanded/resized during heap allocations.

Hope that helps,
-chris



Re: GC allocation failure

2018-01-04 Thread Rainer Jung

Hi Ambica,

On 04.01.2018 at 17:17, Sanka, Ambica wrote:

I am seeing below highlighted errors in native_err logs in all my tomcat 
applications. I also increased memory for the VM from 4GB to 8GB. Still seeing 
those. When do we get that errors?
I am reading online that when program asks for memory and java cannot give, 
that's when we see them. Please suggest.
Java HotSpot(TM) 64-Bit Server VM (25.20-b23) for linux-amd64 JRE (1.8.0_20-b26), built 
on Jul 30 2014 13:13:52 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 8061572k(2564740k free), swap 4063228k(4063228k free)
CommandLine flags: -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/opt/apache/ancillariesmonitoring/logs/ 
-XX:InitialHeapSize=128985152 -XX:MaxHeapSize=268435456 -XX:+PrintGC 
-XX:+PrintGCTimeStamps -XX:+UseCompressedClassPointers -XX:+UseCompressedOops 
-XX:+UseParallelGC
3.203: [GC (Allocation Failure)  31744K->6311K(121856K), 0.0097261 secs]
3.578: [GC (Allocation Failure)  38055K->12368K(121856K), 0.0089875 secs]
3.756: [GC (Allocation Failure)  44112K->19589K(121856K), 0.0100339 secs]
3.897: [GC (Allocation Failure)  51333K->25872K(153600K), 0.0092326 secs]
4.172: [GC (Allocation Failure)  89360K->38878K(153600K), 0.0152940 secs]
4.417: [GC (Allocation Failure)  102366K->50311K(148480K), 0.0148816 secs]
4.594: [GC (Allocation Failure)  95367K->49903K(151040K), 0.0197327 secs]
4.765: [GC (Allocation Failure)  94959K->50213K(148992K), 0.0149008 secs]
4.946: [GC (Allocation Failure)  96293K->52257K(150528K), 0.0172634 secs]
5.129: [GC (Allocation Failure)  98337K->53118K(151040K), 0.0139426 secs]
5.313: [GC (Allocation Failure)  102270K->53234K(152064K), 0.0122307 secs]
5.498: [GC (Allocation Failure)  102386K->53579K(153088K), 0.0166336 secs]
5.655: [GC (Allocation Failure)  104779K->54486K(153600K), 0.0161735 secs]
6.885: [GC (Allocation Failure)  105686K->51523K(153600K), 0.0123126 secs]


These messages are normal; as long as there are no other problems or
errors, they are nothing to worry about.


Java manages memory in regions of different sizes and meaning. 
Allocation for new objects is done in the so-called eden space. This 
memory region is managed in a very simple way. The JVM allocates from it 
until it is full (not enough free space left for the current 
allocation). Then it interrupts the application and runs a Garbage 
Collection (GC) for this memory region, copying any objects which are 
still alive from this region into another one (typically into one of the 
two survivor spaces). At the end of the GC run, eden will be fully 
cleared and the application can continue, again allocating from eden.


The above message is shown whenever a GC run for eden happens. The
reason for the GC run is shown, here "(Allocation Failure)". The GC for
eden in your case takes about 10-20 milliseconds and runs about 4-5
times per second. The string "Failure" is somewhat misleading: the
failed allocation will be retried and typically succeeds once the GC
finishes.
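Reading one of the lines field by field (annotation added for clarity):

    3.203: [GC (Allocation Failure)  31744K->6311K(121856K), 0.0097261 secs]

    3.203                - seconds since JVM start
    GC                   - a minor collection ("Full GC" would mark a major one)
    (Allocation Failure) - the trigger: eden had no room for the new allocation
    31744K->6311K        - heap used before -> after the collection
    (121856K)            - total heap currently committed
    0.0097261 secs       - pause duration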


Although you can adjust eden size with specific JVM flags, you probably
have only set the heap size, which is the combined size of several JVM
memory regions. In that case the JVM will try to auto-tune the eden size.
If you want to set the eden size explicitly, you might need to do more
measurements to deduce good settings. That would be a somewhat more
difficult and not Tomcat-specific topic.
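For completeness, these are roughly the flags involved (a sketch; the
values are placeholders and should come from measurement, not from this
mail):

    -Xmn512m             # fix the young generation (eden + survivors) size
    -XX:NewRatio=2       # or: make the old generation 2x the young one
    -XX:SurvivorRatio=8  # eden is 8x the size of each survivor space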


Unrelated: note that your JVM 8 patch level 20 is very old.

Regards,

Rainer




Re: GC allocation failure

2018-01-04 Thread Suvendu Sekhar Mondal
Ambica,

On Jan 4, 2018 9:47 PM, "Sanka, Ambica" <asa...@atpco.net> wrote:

I am seeing below highlighted errors in native_err logs in all my tomcat
applications. I also increased memory for the VM from 4GB to 8GB. Still
seeing those. When do we get that errors?


It is not an error. It is a very normal phenomenon for any Java-based
application.

I am reading online that when program asks for memory and java cannot give,
that's when we see them. Please suggest.


That's true. Imagine this scenario: you have a warehouse where you keep
different types of stuff, and you keep adding new stuff daily. One day
you'll eventually run out of space. On that day you have two options:
1. Get rid of some old stuff which is not needed and make room for the
new stuff
2. Extend your old warehouse

Same thing happens when you run Java programs. What you are seeing in
the log is called Garbage Collection (GC) and is similar to option #1.
What you did by increasing memory is like option #2.

Again, GC activity is normal unless the operation takes a long time and
affects your application's response time. I suggest reading about
Garbage Collection in Java; Google is your friend.

Thanks!
Suvendu

Java HotSpot(TM) 64-Bit Server VM (25.20-b23) for linux-amd64 JRE
(1.8.0_20-b26), built on Jul 30 2014 13:13:52 by "java_re" with gcc 4.3.0
20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 8061572k(2564740k free), swap 4063228k(4063228k
free)
CommandLine flags: -XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/opt/apache/ancillariesmonitoring/logs/
-XX:InitialHeapSize=128985152 -XX:MaxHeapSize=268435456 -XX:+PrintGC
-XX:+PrintGCTimeStamps -XX:+UseCompressedClassPointers
-XX:+UseCompressedOops -XX:+UseParallelGC
3.203: [GC (Allocation Failure)  31744K->6311K(121856K), 0.0097261 secs]
3.578: [GC (Allocation Failure)  38055K->12368K(121856K), 0.0089875 secs]
3.756: [GC (Allocation Failure)  44112K->19589K(121856K), 0.0100339 secs]
3.897: [GC (Allocation Failure)  51333K->25872K(153600K), 0.0092326 secs]
4.172: [GC (Allocation Failure)  89360K->38878K(153600K), 0.0152940 secs]
4.417: [GC (Allocation Failure)  102366K->50311K(148480K), 0.0148816 secs]
4.594: [GC (Allocation Failure)  95367K->49903K(151040K), 0.0197327 secs]
4.765: [GC (Allocation Failure)  94959K->50213K(148992K), 0.0149008 secs]
4.946: [GC (Allocation Failure)  96293K->52257K(150528K), 0.0172634 secs]
5.129: [GC (Allocation Failure)  98337K->53118K(151040K), 0.0139426 secs]
5.313: [GC (Allocation Failure)  102270K->53234K(152064K), 0.0122307 secs]
5.498: [GC (Allocation Failure)  102386K->53579K(153088K), 0.0166336 secs]
5.655: [GC (Allocation Failure)  104779K->54486K(153600K), 0.0161735 secs]
6.885: [GC (Allocation Failure)  105686K->51523K(153600K), 0.0123126 secs]

Thanks
Ambica.


GC allocation failure

2018-01-04 Thread Sanka, Ambica
I am seeing below highlighted errors in native_err logs in all my tomcat 
applications. I also increased memory for the VM from 4GB to 8GB. Still seeing 
those. When do we get that errors?
I am reading online that when program asks for memory and java cannot give, 
that's when we see them. Please suggest.
Java HotSpot(TM) 64-Bit Server VM (25.20-b23) for linux-amd64 JRE 
(1.8.0_20-b26), built on Jul 30 2014 13:13:52 by "java_re" with gcc 4.3.0 
20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 8061572k(2564740k free), swap 4063228k(4063228k free)
CommandLine flags: -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/opt/apache/ancillariesmonitoring/logs/ 
-XX:InitialHeapSize=128985152 -XX:MaxHeapSize=268435456 -XX:+PrintGC 
-XX:+PrintGCTimeStamps -XX:+UseCompressedClassPointers -XX:+UseCompressedOops 
-XX:+UseParallelGC
3.203: [GC (Allocation Failure)  31744K->6311K(121856K), 0.0097261 secs]
3.578: [GC (Allocation Failure)  38055K->12368K(121856K), 0.0089875 secs]
3.756: [GC (Allocation Failure)  44112K->19589K(121856K), 0.0100339 secs]
3.897: [GC (Allocation Failure)  51333K->25872K(153600K), 0.0092326 secs]
4.172: [GC (Allocation Failure)  89360K->38878K(153600K), 0.0152940 secs]
4.417: [GC (Allocation Failure)  102366K->50311K(148480K), 0.0148816 secs]
4.594: [GC (Allocation Failure)  95367K->49903K(151040K), 0.0197327 secs]
4.765: [GC (Allocation Failure)  94959K->50213K(148992K), 0.0149008 secs]
4.946: [GC (Allocation Failure)  96293K->52257K(150528K), 0.0172634 secs]
5.129: [GC (Allocation Failure)  98337K->53118K(151040K), 0.0139426 secs]
5.313: [GC (Allocation Failure)  102270K->53234K(152064K), 0.0122307 secs]
5.498: [GC (Allocation Failure)  102386K->53579K(153088K), 0.0166336 secs]
5.655: [GC (Allocation Failure)  104779K->54486K(153600K), 0.0161735 secs]
6.885: [GC (Allocation Failure)  105686K->51523K(153600K), 0.0123126 secs]

Thanks
Ambica.