Hi. From the node at 12.5.13.125, I ran a full token-ranged repair over every token range with

nodetool repair -full -st ${start_tokens[i]} -et ${end_tokens[i]}
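
The surrounding loop looked roughly like this (a sketch, not my exact
script; start_tokens and end_tokens are arrays I filled with the ring's
range boundaries):

for i in "${!start_tokens[@]}"; do
    # repair exactly one token range per invocation
    nodetool repair -full -st "${start_tokens[i]}" -et "${end_tokens[i]}"
done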

After the last range had finished, I got this node load:

--  Address       Load       Tokens  Owns   Rack
UN  12.5.13.141   23.94 GB   256     32.3%  rack1
DN  12.5.13.125   34.71 GB   256     31.8%  rack1
UN  12.5.13.46    29.01 GB   512     58.1%  rack1
UN  12.5.13.228   41.17 GB   512     58.5%  rack1
UN  12.5.13.34    45.93 GB   512     59.8%  rack1
UN  12.5.13.82    42.05 GB   512     59.4%  rack1

Then I ran a partitioner-range full repair from the same node with

nodetool repair -full -pr

and, unexpectedly, I got a noticeably different load:

--  Address       Load       Tokens  Owns   Rack
UN  12.5.13.141   22.93 GB   256     32.3%  rack1
UN  12.5.13.125   30.94 GB   256     31.8%  rack1
UN  12.5.13.46    27.38 GB   512     58.1%  rack1
UN  12.5.13.228   39.51 GB   512     58.5%  rack1
UN  12.5.13.34    41.58 GB   512     59.8%  rack1
UN  12.5.13.82    33.9 GB    512     59.4%  rack1

What are the possible reasons for such a load decrease after the last
repair? Could it be compactions that had not run after the token-ranged
repairs? On 12.5.13.82 alone, about 8 GB is gone!
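
If it would help to rule compaction in or out, I suppose I could check
each node with something like the following (my_keyspace.my_table is
just a placeholder for one of my tables):

# What compactions actually ran, and when:
nodetool compactionhistory

# Anything still pending or running right now:
nodetool compactionstats

# Per-table SSTable counts, to see whether files were merged away:
nodetool tablestats my_keyspace.my_table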

Additional info:

   - There were no writes to the DB during these periods.
   - All repair operations completed without errors, exceptions, or
   failures.
   - Before the first repair I ran sstablescrub on every node (rough
   invocation below this list) -- maybe that gives a clue?
   - The Cassandra version is 3.0.8.
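
For reference, the scrub was the offline tool, run per table on each
node, roughly like this (keyspace/table names are placeholders):

sstablescrub my_keyspace my_table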

-- 

Oleg Krayushkin
