Not AFAIK; https://issues.apache.org/jira/browse/CASSANDRA-9472 is marked as 
resolved in 3.4, though we are not running that version so I can’t say much 
about it.

It looks like Zing is no longer priced per core, which was a show stopper 
for us; it is now priced per server, which may affect others differently.

Ironically, running 2.1.x with off-heap memtables, we had some of our JVMs 
running for over a year, which meant we hit 
https://issues.apache.org/jira/browse/CASSANDRA-10969 when we restarted some 
nodes for other reasons.

> On Nov 26, 2016, at 12:07 AM, Oleksandr Shulgin 
> <oleksandr.shul...@zalando.de> wrote:
> 
> On Nov 25, 2016 23:47, "Graham Sanderson" <gra...@vast.com> wrote:
> If you are seeing 25-30 second GC pauses then (unless you are very badly 
> configured) you are seeing full GCs under CMS (though G1 may have similar 
> problems).
> 
> With CMS, eventual fragmentation causing promotion failure is inevitable 
> (unless you cycle your nodes before it happens). Either your heap has way 
> too big an old gen, or too small a young gen - and you need pretty hefty 
> boxes to be able to run with a large young gen (say in the 4-8G range) 
> without young collections taking too long.
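> 
> Purely as an illustration - these are standard HotSpot options as they 
> would appear in cassandra-env.sh, but the sizes are made up and depend 
> entirely on your hardware - a CMS setup with a deliberately large young 
> gen might look like:
> 
>     # illustrative sizes only, not a recommendation
>     JVM_OPTS="$JVM_OPTS -Xms8G -Xmx8G"   # fixed total heap
>     JVM_OPTS="$JVM_OPTS -Xmn2G"          # large young gen, so fewer objects get promoted
>     JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC -XX:+UseParNewGC"
>     JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=75"
>     JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"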
> 
> Depending on your C* version I would highly recommend off-heap memtables. 
> With those we were able to considerably reduce our heap sizes, despite 
> having large throughput on a smallish number of nodes.
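> 
> For reference, a minimal sketch of what enabling that looks like in 
> cassandra.yaml on 2.1 (option names per the 2.1 docs; the size below is a 
> made-up example - if unset it defaults to a quarter of the heap):
> 
>     memtable_allocation_type: offheap_objects
>     memtable_offheap_space_in_mb: 2048   # example cap only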
> 
> Aren't offheap memtables discontinued in the most recent releases of 3.0 and 
> 3.x for a good reason? I thought using them could lead to segfaults?
> 
> --
> Alex
> 
> I recommend reading this if you use CMS:
> http://blog.ragozin.info/2011/10/java-cg-hotspots-cms-and-heap.html
> Also note that if you see a lot of objects of size 131074 in promotion 
> failures, then memtables are the problem - you can try to flush them 
> sooner, but moving them off heap works better, I think.
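> 
> (Hedged aside: assuming standard HotSpot behavior, ParNew reports 
> promotion failure sizes in heap words, so 131074 words is roughly 1MB - 
> the memtable slab allocation size.) To see those sizes in your own logs, a 
> sketch of the GC logging flags for cassandra-env.sh (standard HotSpot 
> options; the log path is just an example):
> 
>     JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails -XX:+PrintGCDateStamps"
>     JVM_OPTS="$JVM_OPTS -XX:+PrintPromotionFailure"   # prints the word size of each failed promotion
>     JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"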
> 
>> On Nov 25, 2016, at 4:38 PM, Kant Kodali <k...@peernova.com> wrote:
>> 
>> +1 Chris Lohfink response
>> 
>> I would also restate the sentence "java GC pauses are pretty much a fact 
>> of life" as "pauses in any GC-based system are pretty much a fact of 
>> life".
>> 
>> I would be more than happy to see someone prove otherwise.
>> 
>> 
>> 
>> On Fri, Nov 25, 2016 at 1:41 PM, Chris Lohfink <clohfin...@gmail.com> wrote:
>> No tuning will eliminate GCs.
>> 
>> 20-30 seconds is horrific and out of the ordinary. Most likely you are 
>> hitting antipatterns and/or are poorly configured. Sub-1s is realistic, 
>> but some workloads may still require tuning to maintain that. Some 
>> workloads are very unfriendly to GCs, though (i.e. heavy tombstones, very 
>> wide partitions).
>> 
>> Chris
>> 
>> On Fri, Nov 25, 2016 at 3:25 PM, S Ahmed <sahmed1...@gmail.com> wrote:
>> Hello!
>> 
>> From what I understand, java GC pauses are pretty much a fact of life, 
>> but you can tune the JVM to reduce the frequency and length of GC pauses.
>> 
>> When using Cassandra, how frequent or long have these pauses been known 
>> to be? Even with tuning, is it safe to assume they cannot be eliminated?
>> 
>> Would a 20-30 second pause be something out of the ordinary?
>> 
>> Thanks.
>> 
>> 
> 
> 
