>
> I wouldn't trivialize it; scheduling can end up dealing with more than a
> single repair. If there are 1000 keyspaces/tables, with 400 nodes and 256
> vnodes on each, that's a lot of repairs to plan out and keep track of, and
> it can easily cause heap allocation spikes if opted in.
>
> Chris

The current proposal never keeps track of more than a few hundred range
splits for a single table at a time, and nothing ever keeps state for the
entire 400-node cluster. Compared to the load generated by actually
repairing the data, I do think the heap pressure is trivial.
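
To put a rough number on that (a back-of-envelope sketch; the per-split size
and table count below are my assumptions, not figures from the proposal):

    # Back-of-envelope only; sizes are assumed, not measured.
    splits_per_table = 500   # "a few hundred" range splits tracked at once
    bytes_per_split = 200    # assumed: two tokens plus scheduling bookkeeping
    tables_in_flight = 10    # assumed: tables being scheduled concurrently
    heap_bytes = splits_per_table * bytes_per_split * tables_in_flight
    print(f"{heap_bytes / 1024:.0f} KiB")  # ~1000 KiB, i.e. about a megabyte

Even with pessimistic numbers that is on the order of a megabyte, which is
noise next to the validation and streaming work of the repairs themselves.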


Somewhat beside the point, I wasn't aware there were any 100+ node clusters
running with vnodes; if my math is correct, they would be excessively
vulnerable to outages with that many vnodes and that many nodes. Most of the
large clusters I've heard of (100 nodes plus) are running with a single
token, or at most 4 tokens, per node.
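
For anyone who wants to poke at that math, here is a quick Monte Carlo
sketch (my own illustration, assuming SimpleStrategy-style placement, RF=3,
and quorum reads/writes) of how often losing two nodes at once drops some
token range below quorum:

    import random
    from collections import defaultdict

    def quorum_loss_probability(nodes=400, tokens_per_node=256, rf=3,
                                trials=10000):
        # Place tokens_per_node random tokens per node on the ring.
        ring = sorted((random.random(), n) for n in range(nodes)
                      for _ in range(tokens_per_node))
        # Replica set for each range: the owning node plus the next rf-1
        # distinct nodes clockwise.
        shares = defaultdict(set)  # node -> nodes sharing a replica set with it
        for i in range(len(ring)):
            replicas, j = [], i
            while len(replicas) < rf:
                node = ring[j % len(ring)][1]
                if node not in replicas:
                    replicas.append(node)
                j += 1
            for a in replicas:
                for b in replicas:
                    if a != b:
                        shares[a].add(b)
        # Probability that two simultaneous node failures take down 2 of
        # the 3 replicas for at least one range.
        losses = 0
        for _ in range(trials):
            a, b = random.sample(range(nodes), 2)
            if b in shares[a]:
                losses += 1
        return losses / trials

With 400 nodes and 256 vnodes each, nearly every pair of nodes shares some
replica set, so almost any two simultaneous failures lose quorum for some
range; with a single token per node the same event is on the order of 1%.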
