Re: if the heap size exceeds 32GB..

2018-02-13 Thread Thakrar, Jayesh
Sure, here are our settings:

#
# Simpler, new generation G1GC settings.
#
JVM_OPTS="$JVM_OPTS -XX:+UseG1GC"
JVM_OPTS="$JVM_OPTS -XX:+UnlockExperimentalVMOptions"
JVM_OPTS="$JVM_OPTS -XX:+ParallelRefProcEnabled"
JVM_OPTS="$JVM_OPTS -XX:MaxGCPauseMillis=50"
JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=2"
#

# GC logging options -- enabled
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCTimeStamps"
JVM_OPTS="$JVM_OPTS -XX:+PrintHeapAtGC"
JVM_OPTS="$JVM_OPTS -XX:+PrintTenuringDistribution"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCApplicationStoppedTime"
JVM_OPTS="$JVM_OPTS -XX:+PrintPromotionFailure"
JVM_OPTS="$JVM_OPTS -Xloggc:/home/vchadoop/var/logs/cassandra/cassandra-gc.log"
JVM_OPTS="$JVM_OPTS -XX:+UseGCLogFileRotation"
JVM_OPTS="$JVM_OPTS -XX:NumberOfGCLogFiles=10"
JVM_OPTS="$JVM_OPTS -XX:GCLogFileSize=1M"

#
MAX_HEAP_SIZE="84G"
HEAP_NEWSIZE="2G"
#

The only issue that we currently have, and are looking to fix soon, is the
need to upgrade our old JDK version and to set the metaspace to a higher value.
We found that when the Java runtime reaches the metaspace high-water mark, it
induces a full GC even if there is plenty of memory available to expand the heap.

{Heap before GC invocations=1 (full 1):
 garbage-first heap   total 88080384K, used 655025K [0x7fdd6000, 0x7ff26000, 0x7ff26000)
  region size 32768K, 20 young (655360K), 0 survivors (0K)
 Metaspace       used 34166K, capacity 35325K, committed 35328K, reserved 36864K
2018-01-05T08:10:31.491+: 81.789: [Full GC (Metadata GC Threshold)  651M->30M(84G), 0.6598667 secs]
   [Eden: 640.0M(2048.0M)->0.0B(2048.0M) Survivors: 0.0B->0.0B Heap: 651.4M(84.0G)->30.4M(84.0G)], [Metaspace: 34166K->34162K(36864K)]
Heap after GC invocations=2 (full 2):
 garbage-first heap   total 88080384K, used 31140K [0x7fdd6000, 0x7ff26000, 0x7ff26000)
  region size 32768K, 0 young (0K), 0 survivors (0K)
 Metaspace       used 34162K, capacity 35315K, committed 35328K, reserved 36864K
}
 [Times: user=0.67 sys=0.00, real=0.66 secs]
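[Editor's sketch] The "Metadata GC Threshold" full GC above fires when metaspace usage first crosses the initial high-water mark. One mitigation on HotSpot is to raise -XX:MetaspaceSize so the threshold sits above what class loading actually needs; the flag is a real HotSpot option, but the 256M value below is an illustrative assumption (sized well above the ~35M the log shows), not something the thread prescribes.

```shell
# Hypothetical cassandra-env.sh-style addition: raise the initial metaspace
# high-water mark so the one-time "Metadata GC Threshold" full GC is avoided.
# 256M is an illustrative value, not a recommendation from this thread.
JVM_OPTS="$JVM_OPTS -XX:MetaspaceSize=256M"
echo "$JVM_OPTS"
```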




Re: if the heap size exceeds 32GB..

2018-02-13 Thread Jeff Jirsa
I'm not Jayesh, but I assume they're using G1GC (or one of the proprietary
collectors), which do a lot more of their marking concurrently, without
stop-the-world (STW) pauses.

If you ran 84G heaps with CMS, you'd either need to dramatically tune your
CMS initiating occupancy, or you'd probably see horrible, horrible pauses.
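[Editor's sketch] For reference, the "initiating occupancy" tuning mentioned above corresponds to real HotSpot CMS flags; the 60% value here is an illustrative assumption, not a recommendation from the thread.

```shell
# Hypothetical CMS tuning for a very large heap: force concurrent marking
# to start at a fixed 60% old-gen occupancy instead of letting JVM
# ergonomics decide (which can start too late and cause long pauses).
JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC"
JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=60"
JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"
echo "$JVM_OPTS"
```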







Re: if the heap size exceeds 32GB..

2018-02-13 Thread James Rothering
Wow, an 84GB heap! Would you mind disclosing the kind of data requirements
behind this choice? And what kind of STW GC pauses do you see?







Re: if the heap size exceeds 32GB..

2018-02-13 Thread Thakrar, Jayesh
In most cases, Cassandra is pretty efficient about memory usage.
However, if your use case does require/need/demand more memory for your 
workload, I would not hesitate to use heap > 32 GB.
FYI, we have configured our heap for 84 GB.
However, there's more tuning that we have done beyond just the heap, so make
sure you are aware of what else needs to be done.



RE: if the heap size exceeds 32GB..

2018-02-12 Thread Steinmaurer, Thomas
Stick with 31G in your case. Another article on compressed Oops: 
https://blog.codecentric.de/en/2014/02/35gb-heap-less-32gb-java-jvm-memory-oddities/

Thomas
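[Editor's sketch] The 31G-vs-32G threshold can be checked empirically on a HotSpot JVM by asking it whether compressed oops are in effect at a given -Xmx; `-XX:+PrintFlagsFinal` is a real HotSpot flag, and the snippet assumes a local `java` on the PATH.

```shell
# Print whether compressed oops are enabled at a given heap size (HotSpot).
# At -Xmx31g this typically reports UseCompressedOops = true; above ~32g
# it flips to false, which is why 31G is the suggested ceiling.
java -Xmx31g -XX:+PrintFlagsFinal -version 2>/dev/null |
  grep -w UseCompressedOops
```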

From: Eunsu Kim [mailto:eunsu.bil...@gmail.com]
Sent: Dienstag, 13. Februar 2018 08:09
To: user@cassandra.apache.org
Subject: if the heap size exceeds 32GB..

https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html#compressed_oops

According to the article above, once the JVM heap size reaches about 32 GB,
memory is wasted because the JVM can no longer use compressed object pointers.
(Of course, the article is talking about Elasticsearch.)

But if this is a general property of the JVM, does it apply to Cassandra as
well?

I am using a server with 64 GB of physical memory and I am concerned about
heap size allocation.

Thank you.
The contents of this e-mail are intended for the named addressee only. It 
contains information that may be confidential. Unless you are the named 
addressee or an authorized designee, you may not copy or use it, or disclose it 
to anyone else. If you received it in error please notify us immediately and 
then destroy it. Dynatrace Austria GmbH (registration number FN 91482h) is a 
company registered in Linz whose registered office is at 4040 Linz, Austria, 
Freistädterstraße 313


Re: if the heap size exceeds 32GB..

2018-02-12 Thread Ben Wood
Here is a useful guide on tuning the Cassandra heap:
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsTuneJVM.html#opsTuneJVM__tuning-the-java-heap

TL;DR: You wouldn't want to allocate more than half of physical memory to the
heap, so on a 64 GB machine you wouldn't exceed 32 GB anyway.
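[Editor's sketch] As I recall, the stock cassandra-env.sh expresses that rule as roughly max(min(1/2 RAM, 1024 MB), min(1/4 RAM, 8192 MB)); a simplified sketch under that assumption, with the RAM figure hardcoded for illustration:

```shell
# Simplified sketch of the default heap sizing in cassandra-env.sh:
#   MAX_HEAP = max(min(1/2 * RAM, 1024 MB), min(1/4 * RAM, 8192 MB))
# system_memory_in_mb is hardcoded here; the real script probes the OS.
system_memory_in_mb=65536                     # e.g. a 64 GB server
half_mb=$((system_memory_in_mb / 2))
quarter_mb=$((system_memory_in_mb / 4))
[ "$half_mb" -gt 1024 ] && half_mb=1024       # cap the 1/2-RAM branch
[ "$quarter_mb" -gt 8192 ] && quarter_mb=8192 # cap the 1/4-RAM branch
if [ "$half_mb" -gt "$quarter_mb" ]; then
  max_heap_size_in_mb=$half_mb
else
  max_heap_size_in_mb=$quarter_mb
fi
echo "MAX_HEAP_SIZE=${max_heap_size_in_mb}M"  # -> MAX_HEAP_SIZE=8192M
```

So with these defaults a 64 GB box gets only an 8 GB heap; the 84 GB heap discussed in this thread is an explicit override, not something the stock script would produce.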






-- 
Ben Wood
Software Engineer - Data Agility
Mesosphere