Re: Full repair results in uneven data distribution

2021-03-16 Thread Bowen Song
That sounds like the combined result of anti-compaction and the size
amplification from the default SizeTieredCompactionStrategy. A full
repair anti-compacts SSTables, splitting them into repaired and
unrepaired sets, and STCS can then hold on to the overlapping copies
until enough similar-sized SSTables accumulate to trigger a compaction.
If you keep repeating those steps, the disk usage will eventually stop
growing. Of course, that's not an excuse to keep repeating it.
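
You can check whether that is what happened by looking at the "Repaired
at" field on the affected node's SSTables. A minimal sketch, assuming a
package install with the default data directory and a keyspace/table
named "my_keyspace"/"my_table" (adjust both to your own schema):

    # a non-zero "Repaired at" timestamp means the SSTable was
    # marked repaired by anti-compaction during a full repair
    for f in /var/lib/cassandra/data/my_keyspace/my_table-*/*Data.db; do
        echo "$f"
        sstablemetadata "$f" | grep "Repaired at"
    done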


To fix this (if you really need to reclaim that disk space), you can 
shut down a node, run "sstablerepairedset --really-set --is-unrepaired" 
on all of its SSTable files, then restart the node and run "nodetool 
compact -s". Repeat these steps on every node, including the seemingly 
unaffected one, one at a time.
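
As a rough per-node sequence (a sketch only; the keyspace name 
"my_keyspace", the service command, and the data directory path are 
assumptions for a package install, so adjust them to your setup):

    # flush and stop the node cleanly
    nodetool drain
    sudo service cassandra stop

    # mark every SSTable in the keyspace as unrepaired
    find /var/lib/cassandra/data/my_keyspace -name '*Data.db' > /tmp/sstables.txt
    sstablerepairedset --really-set --is-unrepaired -f /tmp/sstables.txt

    # restart, then run a major compaction with split output
    sudo service cassandra start
    nodetool compact -s my_keyspace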


To avoid this issue in the future, I'd recommend you avoid causing 
Cassandra to do anti-compaction during repairs. You can achieve that by 
specifying a DC in the "nodetool repair" command, such as "nodetool 
repair -full -dc DC1". This works even if you only have one DC, because 
a repair restricted to a single DC skips the anti-compaction step. You 
should also look into automation tools, such as Cassandra Reaper, for 
running repairs.
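
If you're unsure of your DC name, "nodetool status" prints it (the name 
"DC1" below is only an example; a default install usually reports 
"datacenter1"):

    # the "Datacenter:" header in the output is the name to use
    nodetool status

    # full repair restricted to that DC; no anti-compaction
    nodetool repair -full -dc DC1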



On 16/03/2021 07:11, Inquistive allen wrote:

Hello Team,

Sorry, this might be a simple question.

I was working with Cassandra 2.1.14:

Node1 -- 4.5 MB data
Node2 -- 5.3 MB data
Node3 -- 4.9 MB data

Node3 had been down for 90 days.
I brought it up and it joined the cluster.
To sync data I ran "nodetool repair --full".

Repair was successful. However, just to be sure that the data was in 
sync, I re-ran the repair process, expecting it to exit immediately and 
thereby prove that there was nothing to repair.


Each time I ran a full repair, it ran completely and successfully; it 
didn't exit immediately as I expected.



After running it 4 times, I suddenly saw this:

Node1 -- 43 MB
Node2 -- 42 MB
Node3 -- 6 MB

I was clueless about this data growth on Node1 and Node2.

Can anyone please help me understand why this happened?

To bring things back to normal, I tried running "nodetool repair -pr" 
on all the hosts one after another. The repair ran successfully...


There was still a difference in data size across the 3 nodes.

Hence I decided to decommission each node and re-add them one after 
another.


I did that.  The data size is now

I just wanted to understand: is there any way my data was lost? And why 
was there a difference in data size after I ran the full repair 
multiple times?


Thanks



