>
>
> Not sure I follow you. 4 sstables is the minimum compaction looks for
> (by default).
> If there are 30 sstables of ~20MB sitting there because compaction is
> behind, you will compact those 30 sstables together (unless there is not
> enough space for that, and assuming you haven't changed the max
> compaction threshold, which is 32 by default). And you can increase the
> max threshold.
> Don't get me wrong, I'm not pretending this works better than it does,
> but let's not pretend either that it's worse than it is.
>
>
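
Right, and just to make sure we are talking about the same thing, the
selection you describe is roughly this (a toy sketch on my part, not
Cassandra's actual code; 4 and 32 are the defaults you mention):

# Toy model of size-tiered bucket selection as described above.
# Illustration only; the thresholds are the quoted defaults.
MIN_THRESHOLD = 4    # fewer sstables than this in a bucket: no compaction
MAX_THRESHOLD = 32   # at most this many sstables compacted in one pass

def pick(bucket):
    """Pick sstables to compact from one bucket of similar-sized sstables."""
    if len(bucket) < MIN_THRESHOLD:
        return []
    return bucket[:MAX_THRESHOLD]

bucket = [20] * 30           # 30 sstables of ~20MB, compaction behind
print(len(pick(bucket)))     # -> 30: all of them go in one pass
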
Sorry, I am not trying to pretend anything or blow it out of proportion.
Just reacting to what I see.

This is what I see after some stress testing on some pretty decent HW.

81     Up     Normal  181.6 GB        8.33%   Token(bytes[30])
82     Up     Normal  501.43 GB       8.33%   Token(bytes[313230])
83     Up     Normal  248.07 GB       8.33%   Token(bytes[313437])
84     Up     Normal  349.64 GB       8.33%   Token(bytes[313836])
85     Up     Normal  511.55 GB       8.33%   Token(bytes[323336])
86     Up     Normal  654.93 GB       8.33%   Token(bytes[333234])
87     Up     Normal  534.77 GB       8.33%   Token(bytes[333939])
88     Up     Normal  525.88 GB       8.33%   Token(bytes[343739])
89     Up     Normal  476.6 GB        8.33%   Token(bytes[353730])
90     Up     Normal  424.89 GB       8.33%   Token(bytes[363635])
91     Up     Normal  338.14 GB       8.33%   Token(bytes[383036])
92     Up     Normal  546.95 GB       8.33%   Token(bytes[6a])

Node 81 has been through a full compaction. It had ~370GB before that, and
the resulting sstable is 165GB.
The other nodes have only been doing minor compactions.

I think this is a problem.
You are of course free to disagree.

I do, however, recommend simulating potential worst-case scenarios where
many of the buckets end up with 3 sstables and don't compact for a while;
see the sketch below.
The disk space requirements get pretty bad even without getting into
theoretical worst cases.
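
Something along these lines could be a starting point (a toy model; the
base size, growth factor, and tier counts are made-up assumptions, not
measurements from the cluster above):

# Toy worst case: every size tier is stuck at 3 sstables, one below the
# default min threshold of 4, so nothing compacts and overwritten rows
# are never merged away.
MIN_THRESHOLD = 4

def stuck_usage_mb(tiers, base_mb=20, growth=4):
    """On-disk MB if each of `tiers` size tiers holds 3 sstables."""
    per_bucket = MIN_THRESHOLD - 1   # 3: just under the threshold
    return sum(per_bucket * base_mb * growth ** t for t in range(tiers))

for tiers in (3, 5, 7):
    print(tiers, "stuck tiers ->", stuck_usage_mb(tiers), "MB on disk")

# With an overwrite-heavy workload much of that can be stale copies of
# the same rows, so on-disk size ends up a multiple of the live data set.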

Regards,
Terje
