Thanks Alain!
We are using TWCS compaction, and I read your blog multiple times - it was
very useful, thanks!
We are seeing a lot of overlapping SSTables, leading to a lot of problems:
(a) a large number of tombstones read by queries, (b) high CPU usage, and (c)
fairly long Young Gen GC pauses
Hi,
Is there a way to estimate the size of a table and its index?
I know we can estimate the size once the table and index have already been created,
using nodetool cfstats, but I want to know before loading data into the table.
Could you please help if there is such a formula to find this out.
Thanks and Regards
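A rough back-of-the-envelope answer does exist: the partition-sizing formula from the DataStax data-modeling docs can be applied before any data is loaded. A minimal sketch (all the byte sizes and column counts below are assumptions you would replace with your own schema's averages):

```python
# Rough pre-load partition-size estimate, based on the sizing formula from
# the DataStax data-modeling documentation. Sizes are uncompressed and
# ignore replication; treat the result as a ballpark, not a guarantee.

def estimate_partition_size(pk_bytes, static_bytes, rows_per_partition,
                            row_bytes, total_cols, pk_cols, static_cols):
    """Approximate on-disk size of one partition, in bytes."""
    # Number of stored cell values: N_v = N_r * (N_c - N_pk - N_s) + N_s
    n_values = rows_per_partition * (total_cols - pk_cols - static_cols) + static_cols
    # ~8 bytes of per-cell overhead (timestamp etc.) is the usual rule of thumb
    return pk_bytes + static_bytes + rows_per_partition * row_bytes + 8 * n_values

# Example: a table like the users table in this thread --
# PRIMARY KEY (username), one bigint column (last_seen).
# Assumed: username text averages 16 bytes; last_seen bigint is 8 bytes.
size = estimate_partition_size(pk_bytes=16, static_bytes=0,
                               rows_per_partition=1, row_bytes=8,
                               total_cols=2, pk_cols=1, static_cols=0)
```

Multiply the per-partition estimate by the expected number of partitions to get a table-level figure (32 bytes x 10 million partitions is roughly 320 MB before compression and replication); secondary index size adds on top of this.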
Not really, my suggested primary key is similar to the one you have in your
proposed MV. The only difference is that in MV it is Cassandra that takes
care of data synchronization, with manual denormalization you would need
to do it yourself. Example with MV: If you had username 'andreas1988' and
Thanks Crisan.
I understand what you're saying. But according to your suggestion I will
have a record for every entry, while I am interested only in the last entry.
So the proposed solution actually keeps much more data than needed.
On Oct 9, 2017 8:40 PM, "Valentina Crisan"
Hi,
my previously mentioned G1 bug does not seem to be related to your case
Thomas
From: Gustavo Scudeler [mailto:scudel...@gmail.com]
Sent: Monday, 09 October 2017 15:13
To: user@cassandra.apache.org
Subject: Re: Cassandra and G1 Garbage collector stop the world event (STW)
Hello,
@kurt
ALLOW FILTERING is almost never the answer, especially when you want to do
a full table scan (there may be some cases where the query is limited to
a partition and ALLOW FILTERING could be used). And you would like to run
this query every minute, so extremely good performance is required.
Can you share your schema and cfstats? This sounds like a wide-partition,
backed-up-compaction, or tombstone issue for it to generate so much garbage
and cause problems that quickly with those settings.
A heap dump would be most telling but they are rather large and hard to
share.
Chris
On
Hello,
@kurt greaves: Have you tried CMS with that sized heap?
Yes, for testing purposes I have 3 nodes with CMS and 3 with
G1. The behavior is basically the same.
*Using CMS suggested settings*
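For reference, the stock CMS settings shipped in Cassandra's cassandra-env.sh / jvm.options look roughly like the following (exact flags vary by Cassandra and JDK version, and the heap sizes here are placeholders, not recommendations):

```
-Xms8G
-Xmx8G
-Xmn2G
-XX:+UseParNewGC
-XX:+UseConcMarkSweepGC
-XX:+CMSParallelRemarkEnabled
-XX:SurvivorRatio=8
-XX:MaxTenuringThreshold=1
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
```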
Hi
I have the following table:
CREATE TABLE users (
username text,
last_seen bigint,
PRIMARY KEY (username)
);
where *last_seen* is basically the writetime. The number of records in the
table is approx. 10 million. The insert is pretty much straightforward: insert
into users (username,
Hi,
although not happening here with Cassandra (due to using CMS), we had some
weird problems with our server application, e.g. being hit by the following JVM/G1 bugs:
https://bugs.openjdk.java.net/browse/JDK-8140597
https://bugs.openjdk.java.net/browse/JDK-8141402 (more or less a duplicate of
above)
Have you tried CMS with a heap that size? G1 is only really worthwhile with
24 GB+ heap sizes, which wouldn't really make sense on machines with 28 GB of
RAM. In general CMS is found to work better for C*, leaving excess memory
to be utilised by the OS page cache.
Hi guys,
We have a 6-node Cassandra cluster under heavy utilization. We have been
dealing a lot with garbage collector stop-the-world events, which can take
up to 50 seconds on our nodes; in the meantime the Cassandra node is
unresponsive, not even accepting new logins.
Extra details:
- Cassandra