We're not all the way there yet with native, but the increased GC time is
temporary and only occurs during the deployment. After all nodes are on 2.1,
everything is smooth.
On Friday, February 19, 2016 1:47 PM, daemeon reiydelle wrote:
FYI, my observations were with native, not thrift.
Daemeon C.M. Reiydelle
USA (+1) 415.501.0198
London (+44) (0) 20 8144 9872
On Fri, Feb 19, 2016 at 10:12 AM, Sotirios Delimanolis wrote:
> Does your cluster contain 24+ nodes or fewer?
>
> We did the same
Hi Mike,
Using batches with many rows puts heavy load on the coordinator and is
generally not considered a good practice. With 1500 rows in a batch with
different partition keys, even on a large cluster, you will eventually end up
waiting for every node in the cluster. This increases the
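For what it's worth, here is a minimal sketch of the alternative, using the DataStax Python driver to issue the rows as individual prepared inserts with bounded concurrency instead of one large multi-partition batch. The keyspace, table, and column names are placeholders, not anything from this thread:

from cassandra.cluster import Cluster
from cassandra.concurrent import execute_concurrent_with_args

# Placeholder names for illustration only.
cluster = Cluster(['127.0.0.1'])
session = cluster.connect('my_keyspace')
insert = session.prepare(
    "INSERT INTO my_table (partition_key, clustering_key, value) VALUES (?, ?, ?)")

rows = [(i, 0, 'payload') for i in range(1500)]

# Each insert is routed to its own replicas, so no single coordinator ends up
# waiting on most of the cluster the way it would with a 1500-row
# multi-partition batch.
execute_concurrent_with_args(session, insert, rows, concurrency=50)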
To me, the following three look on the higher side:
SSTable count: 1289
In order to reduce the SSTable count, see whether you are compacting or not (if
using STCS). Is it possible to change this to LCS?
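If you do try LCS, a minimal sketch of the change via the Python driver (the keyspace and table names are made up; the same ALTER TABLE can of course be run from cqlsh):

from cassandra.cluster import Cluster

# Placeholder keyspace/table names for illustration only.
cluster = Cluster(['127.0.0.1'])
session = cluster.connect()
# Switch from size-tiered to leveled compaction; existing SSTables are
# gradually re-levelled by compaction after the change.
session.execute(
    "ALTER TABLE my_keyspace.my_table "
    "WITH compaction = {'class': 'LeveledCompactionStrategy'}")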
Number of keys (estimate): 345137664 (345M partition keys)
I don't have any suggestion for reducing this.
The biggest change which *might* explain your behavior has to do with the
changes in memtable flushing between 2.0 and 2.1:
https://issues.apache.org/jira/browse/CASSANDRA-5549
However, the tpstats output you posted shows no dropped mutations, which would
have made me more certain of this as the cause.
What
Does your cluster contain 24+ nodes or fewer?
We did the same upgrade on a smaller cluster of 5 nodes and we didn't see this
behavior. On the 24 node cluster, the timeouts only took effect once ~5-6-7+
nodes had been upgraded.
We're doing some more upgrades next week, trying different
This may be unrelated, but I found highly variable latency (latency max) on the
2.1 code tree when loading new data (and reading). Others found that G1 vs. CMS
does not make a difference, and there is some evidence that 8/12/16 GB heaps make
no difference either. These were latencies in the 10-30 SECOND range. It did cause
Anuj,
So we originally started testing with Java 8 + G1; however, we were able to
reproduce the same results with the default CMS settings that ship in
cassandra-env.sh from the Debian package. We didn't detect any large GC pauses
during the runs.
Query pattern during our testing was 100% writes,
I performed this exact upgrade a few days ago, except the clients were using the
native protocol, and it went smoothly. So I think this might be Thrift
related. No idea what is producing this though; just wanted to give the
info FWIW.
As a side note, unrelated to the issue, performances using native are a
>
> Alain, thanks for sharing! I'm confused why you do so many repetitive
> rsyncs. Just being cautious or is there another reason? Also, why do you
> have --delete-before when you're copying data to a temp (assumed empty)
> directory?
Since they are immutable I do a first sync while
Please find below the graph plotted from the cassandra-stress test output log.
While the columnar data took 36 minutes to insert 20M records, the JSON-format
data was loaded in under 10 minutes. The tests were carried out on a bare-metal
4-node cluster with 16-core CPUs and 120 GB memory (8 GB heap) backed by
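(Assuming both runs loaded the full 20M rows, that works out to roughly 20,000,000 / 36 min ≈ 9,300 rows/s for the columnar layout versus 20,000,000 / 10 min ≈ 33,000 rows/s or more for the JSON format.)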