Did you change the RF, or have you had a node down since the last time you repaired?
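
A quick way to double-check both, in case it helps (the host and keyspace
names below are just placeholders):

    nodetool -h localhost ring      # node status and token ownership
    cassandra-cli -h localhost      # then run: describe keyspace MyKeyspace;
                                    # it prints the replication factor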

2012/11/8 Henrik Schröder <skro...@gmail.com>

> No, we're not using columns with TTL, and I performed a major compaction
> before the repair, so there shouldn't be vast amounts of tombstones moving
> around.
>
> And the increase happened during the repair; the nodes gained ~20-30 GB
> each.
>
>
> /Henrik
>
>
>
> On Thu, Nov 8, 2012 at 12:40 PM, horschi <hors...@gmail.com> wrote:
>
>> Hi,
>>
>> is it possible that your repair is over-repairing due to any of the issues
>> discussed here?
>> http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/repair-compaction-and-tombstone-rows-td7583481.html
>>
>>
>> I've seen repair increase the load on my cluster, but what you're
>> describing sounds like a lot to me.
>>
>> Is this increase entirely due to the repair? Or did the load perhaps grow
>> gradually over the week, and you only just checked for the first time?
>>
>> cheers,
>> Christian
>>
>>
>>
>> On Thu, Nov 8, 2012 at 11:55 AM, Henrik Schröder <skro...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> We recently ran a major compaction across our cluster, which reduced the
>>> storage used by about 50%. That's the expected result, since we do a lot
>>> of updates to existing data.
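>>>
>>> For reference, we triggered the compaction per node with nodetool; the
>>> keyspace and column family names below are just placeholders:
>>>
>>>     nodetool -h localhost compact MyKeyspace MyColumnFamily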
>>>
>>> The day after, we ran a full repair -pr across the cluster, and when
>>> that finished, each storage node was at about the same size as before the
>>> major compaction. Why does that happen? What gets transferred to other
>>> nodes, and why does it suddenly take up a lot of space again?
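>>>
>>> By "full repair -pr" I mean running the following on each node in turn
>>> (again, the keyspace name is just a placeholder):
>>>
>>>     nodetool -h localhost repair -pr MyKeyspace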
>>>
>>> We haven't been running repair -pr regularly, so is this just something
>>> that happens on the first weekly run, and can we expect a different result
>>> next week? Or does repair always cause the data on each node to grow? To
>>> me, the growth just doesn't seem proportional to what a repair should
>>> transfer.
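>>>
>>> If weekly repairs are the way to go, the plan would be something like this
>>> on each node, staggered so the runs don't overlap (just a sketch; the
>>> keyspace name is a placeholder):
>>>
>>>     # crontab entry: primary-range repair every Sunday at 03:00
>>>     0 3 * * 0  nodetool -h localhost repair -pr MyKeyspace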
>>>
>>>
>>> /Henrik
>>>
>>
>>
>
