Hi,
Any improvement on this?
Two ideas come to mind:
> Yes, we are storing timeseries-like binary blobs where data is heavily
> TTLed (essentially the entire column family is incrementally refreshed with
> completely new data every few days)
This looks to me like a good fit for TWCS (TimeWindowCompactionStrategy).
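In case it helps, a rough sketch of what moving that table to TWCS could look like. Note this is hedged: TWCS only ships with Cassandra 3.0.8+ / 3.8+, so on 2.1 it would have to be loaded as an external jar; the keyspace name and the 1-day window below are placeholders, only message_data1 comes from this thread.

    # Placeholder keyspace "ks" and a 1-day window; adjust to the actual schema and TTL.
    cqlsh -e "ALTER TABLE ks.message_data1
      WITH compaction = {'class': 'TimeWindowCompactionStrategy',
                         'compaction_window_unit': 'DAYS',
                         'compaction_window_size': 1};"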
Thank you for your reply. To address your points:
- We are not running repairs
- Yes, we are storing timeseries-like binary blobs where data is heavily
TTLed (essentially the entire column family is incrementally refreshed with
completely new data every few days)
- I have tried increasing
Are you running repairs?
You may try:
- increase concurrent_compactors to 8 (max in 2.1.x)
- increase compaction_throughput_mb_per_sec to more than 16 MB/s (48 may be a good start; see the sketch below)
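A minimal sketch of how those two changes are usually applied on a stock 2.1 node: the throughput cap can be raised at runtime with nodetool, while concurrent_compactors lives in cassandra.yaml and needs a restart.

    # Raise the compaction throughput cap at runtime (value in MB/s). This is not
    # persistent across restarts, so mirror it in cassandra.yaml as
    # compaction_throughput_mb_per_sec: 48
    nodetool setcompactionthroughput 48

    # concurrent_compactors is a cassandra.yaml setting (restart required):
    #   concurrent_compactors: 8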
What kind of data are you storing in these tables? Timeseries?
2016-03-21 23:37 GMT+01:00 Gianluca Borello
Thank you for your reply; I definitely appreciate the tip on the compressed size.
I understand your point; in fact, whenever we bootstrap a new node we see a
huge number of pending compactions (on the order of thousands), and they
usually decrease steadily until they reach 0 in just a few hours.
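For reference, this is roughly how we watch that backlog drain on each node (plain nodetool, nothing custom on our side):

    # Pending and currently running compactions on the node:
    nodetool compactionstats

    # CompactionExecutor pending/completed counts:
    nodetool tpstats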
> We added a bunch of new nodes to a cluster (2.1.13) and everything went fine,
> except for the number of pending compactions that is staying quite high on a
> subset of the new nodes. Over the past 3 days, the pending compactions have
> never been less than ~130 on such nodes, with peaks of
On Mon, Mar 21, 2016 at 12:50 PM, Gianluca Borello wrote:
>
> - It's also interesting to notice how the compaction in the previous
> example is trying to compact ~37 GB, which is essentially the whole size of
> the column family message_data1 as reported by cfstats:
>
Also
On Mon, Mar 21, 2016 at 2:15 PM, Alain RODRIGUEZ wrote:
>
> What hardware do you use? Can you see it running at the limits (CPU /
> disk IO)? Are there any errors in the system logs? Are the disks doing fine?
>
>
Nodes are c3.2xlarge instances on AWS. The nodes are relatively idle,
Hi, thanks for the detailed information; it is useful.
SSTables in each level: [43/4, 92/10, 125/100, 0, 0, 0, 0, 0, 0]
Looks like compaction is indeed not keeping up: each entry there is the SSTable count versus the per-level target, so L0, L1 and L2 are all over their targets (43/4, 92/10, 125/100).
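For reference, that line can be pulled per table with cfstats (keyspace name assumed here, only message_data1 comes from this thread):

    nodetool cfstats ks.message_data1 | grep "SSTables in each level"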
What hardware do you use? Can you see it running at the limits (CPU / disk
IO)? Are there any errors in the system logs? Are the disks doing fine?