> Wide rows? How wide? How many rows per partition, typically and at the
> extreme? how many clustering columns?
Yes, wide rows with deletions of old data.
Number of keys (estimate): 909428
How can I calculate rows per partition via nodetool/JMX?
~ From 100 to 5,000,000.
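To answer the nodetool question: `nodetool cfstats` and `nodetool cfhistograms` (2.x-era names) expose per-table key estimates and partition-size/cell-count percentiles. A minimal sketch; the keyspace/table names are placeholders, and every sample value below except the key estimate is made up to stand in for a real capture:

```shell
#!/bin/sh
# On a real node you would run:
#   nodetool cfstats my_keyspace.my_table > /tmp/cfstats.txt
#   nodetool cfhistograms my_keyspace my_table   # size/cell-count percentiles
# The here-doc below stands in for real cfstats output:
cat > /tmp/cfstats.txt <<'EOF'
    Number of keys (estimate): 909428
    Compacted partition minimum bytes: 180
    Compacted partition mean bytes: 52431
    Compacted partition maximum bytes: 386857368
EOF
# Pull out the largest partition seen by compaction:
awk -F': ' '/Compacted partition maximum bytes/ {print $2}' /tmp/cfstats.txt
```

For "rows per partition" specifically, `cfhistograms` is the better tool: its cell-count column gives percentiles rather than the single min/mean/max that cfstats reports.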
I know its
Hello!
We have a cluster of 25 c3.4xlarge nodes (16 cores, 32 GiB) with attached 1.5
TB 4000 PIOPS EBS drive.
Sometimes user CPU on one or two nodes spikes to 100%, load average to 20-30, and
read requests drop off.
Only a restart of the Cassandra service on that node helps.
Please advise.
One big table with wide rows.
Hey,
What about compactions count when that is happening?
J.
> On Feb 12, 2016, at 3:06 AM, Skvazh Roman wrote:
>
> Hello!
> We have a cluster of 25 c3.4xlarge nodes (16 cores, 32 GiB) with attached 1.5
> TB 4000 PIOPS EBS drive.
> Sometimes one or two nodes user cpu
After disabling binary, gossip, and thrift, the node blocks on 16 read stages and
[iadmin@ip-10-0-25-46 ~]$ nodetool tpstats
Pool Name       Active  Pending  Completed  Blocked  All time blocked
MutationStage        0        0   19587002        0                 0
There are 1-4 compactions at that moment.
We have many tombstones which are not being removed.
DroppableTombstoneRatio is 5-6 (greater than 1).
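As a side note, the per-SSTable droppable-tombstone estimate can also be read off with the bundled `sstablemetadata` tool. A sketch; the data-file path is a placeholder and the echoed sample line (mirroring the 5-6 ratio reported above) stands in for real output:

```shell
#!/bin/sh
# On a real node you would run something like:
#   sstablemetadata /var/lib/cassandra/data/<ks>/<table>/<file>-Data.db \
#     | grep droppable
# Parsing sketch over a captured line:
echo "Estimated droppable tombstones: 5.4" |
  awk -F': ' '/droppable tombstones/ {print $2}'
```

Tombstones only actually disappear once gc_grace_seconds has elapsed and a compaction includes every SSTable covering the deleted data, which is one way the ratio can stay high when autocompaction is disabled.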
> On 12 Feb 2016, at 15:53, Julien Anguenot wrote:
>
> Hey,
>
> What about compactions count when that is happening?
>
> J.
>
>
At the time when the load is high and you have to restart, do you see any
pending compactions when using `nodetool compactionstats`?
Possible to see a `nodetool compactionstats` taken *when* the load is too high?
Have you checked the size of your SSTables for that big table? Any large ones
in
> Does the load decrease and the node answers requests “normally” when you do
> disable auto-compaction? You actually see pending compactions on nodes having
> high load correct?
Nope.
> All seems legit here. Using G1 GC?
Yes
Problems also occurred on nodes without pending compactions.
>
If you are positive this is not compaction-related I would:
1. check disk IOPS and latency on the EBS volume (dstat)
2. turn GC logging on in cassandra-env.sh and use jstat to see what is
happening to your heap.
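Point 2 amounts to a cassandra-env.sh fragment along these lines (stock HotSpot Java 7/8 flags; the log path and PID are placeholders):

```shell
# cassandra-env.sh: enable GC logging
JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCApplicationStoppedTime"

# After a restart, watch heap occupancy and GC time live:
#   jstat -gcutil <cassandra-pid> 1000
```

If GC is the culprit, the stopped-time lines in the log will make it obvious during the next 100%-CPU episode.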
I have been asking about compactions initially because if you have one (1)
big
I have disabled autocompaction and stopped it on the high-load node.
It freezes all nodes sequentially, 2-6 simultaneously.
Heap is 8 GB; gc_grace_seconds is 86400.
All SSTables are about 200-300 MB.
$ nodetool compactionstats
pending tasks: 14
$ dstat -lvnr 10
---load-avg--- ---procs--- --memory-usage-
> On Feb 12, 2016, at 9:24 AM, Skvazh Roman wrote:
>
> I have disabled autocompaction and stop it on highload node.
Does the load decrease and the node answers requests “normally” when you do
disable auto-compaction? You actually see pending compactions on nodes having
high load correct?
Wide rows? How wide? How many rows per partition, typically and at the
extreme? how many clustering columns?
When you restart the node does it revert to completely normal response?
Which release of Cassandra?
Does every node eventually hit this problem?
After a restart, how long before the