Hi Martin,

You can stop the anticompaction with a rolling restart of the nodes (I'm
not sure whether "nodetool stop COMPACTION" will actually stop an
anticompaction, I've never tried it).
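
If you'd rather try nodetool before restarting anything, the syntax would
be the following (untested on my side, as said, and I'm not certain
ANTICOMPACTION is accepted as a type by "nodetool stop" in 3.0.14):

nodetool stop ANTICOMPACTION

If it has no effect, the rolling restart remains the sure way to kill the
sessions.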

Note that this will leave your cluster with some SSTables marked as
repaired and others that are not. These two sets of SSTables will never be
compacted together, which can delay reclaiming disk space over time
because overwrites and tombstones spread across them won't get merged.
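
If you want to check which SSTables are currently marked as repaired, the
sstablemetadata tool (it also ships with Cassandra) prints the repairedAt
timestamp: 0 means unrepaired, any other value is the time the SSTable was
marked repaired. Something along these lines, assuming the default data
directory from your mail:

sstablemetadata /var/lib/cassandra/data/keyspace_event/table_event-*/*-Data.db | grep "Repaired at"
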
If you plan to stick with nodetool, leave the anticompaction running and
hope that it's only taking this long because it's your first repair (if it
is indeed your first repair).

Otherwise, if you choose to use Reaper (which I obviously recommend), you
can stop the running anticompactions right away and prepare for Reaper.
Since Reaper won't trigger anticompactions, you'll have to mark your
SSTables back to the unrepaired state so that all SSTables can be
compacted with each other in the future.
To that end, you'll need the sstablerepairedset
<https://docs.datastax.com/en/archived/cassandra/3.0/cassandra/tools/toolsSStabRepairedSet.html>
command line tool (it ships with Cassandra) and the following procedure:
in a nutshell, stop Cassandra, mark the SSTables as unrepaired, then
restart Cassandra.
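
Here's a minimal sketch of that procedure for one node, using the data
path from your mail (run it as the user that owns the data files, usually
cassandra, and go one node at a time so the cluster stays available):

nodetool drain
# stop Cassandra, e.g. sudo service cassandra stop
find /var/lib/cassandra/data/keyspace_event/ -name "*Data.db" > /tmp/sstables.txt
sstablerepairedset --really-set --is-unrepaired -f /tmp/sstables.txt
# start Cassandra again and check the logs before moving to the next node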

Cheers,

-----------------
Alexander Dejanovski
France
@alexanderdeja

Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com


On Wed, Jul 31, 2019 at 3:53 PM Martin Xue <[email protected]> wrote:

> Sorry ASAD, haven't had a chance yet, still bogged down with the
> production issue...
>
> On Wed, Jul 31, 2019 at 10:56 PM ZAIDI, ASAD A <[email protected]> wrote:
>
>> Did you get a chance to look at the TLP Reaper tool, i.e.
>> http://cassandra-reaper.io/ ?
>>
>> It is pretty awesome – thanks to the TLP team.
>>
>>
>> *From:* Martin Xue [mailto:[email protected]]
>> *Sent:* Wednesday, July 31, 2019 12:09 AM
>> *To:* [email protected]
>> *Subject:* Repair / compaction for 6 nodes, 2 DC cluster
>>
>>
>> Hello,
>>
>>
>> Good day. This is Martin.
>>
>>
>> Can someone help me with the following query regarding Cassandra repair
>> and compaction?
>>
>>
>> Currently we have a large keyspace (keyspace_event) with 1TB of data (in
>> /var/lib/cassandra/data/keyspace_event);
>> the cluster has Datacenter 1 containing 3 nodes and Datacenter 2
>> containing 3 nodes; 6 nodes altogether.
>>
>>
>> As part of maintenance, I ran the repair on this keyspace with the
>> following command:
>>
>>
>> nodetool repair -pr --full keyspace_event;
>>
>>
>> Now it has been running for 2 days. Yes, 2 days. When running nodetool
>> tpstats, it shows there is a compaction running:
>>
>>
>> CompactionExecutor                1         1        5783732         0                 0
>>
>> nodetool compactionstats shows:
>>
>>
>> pending tasks: 6
>>                                   id               compaction type         keyspace         table        completed           total    unit   progress
>> 249ec5f1-b225-11e9-82bd-5b36ef02cadd   Anticompaction after repair   keyspace_event   table_event   1916937740948   2048931045927   bytes     93.56%
>>
>>
>> Now my questions are:
>> 1. Why did running the repair (with the primary range option, -pr, as I
>> want to limit the repair node by node) trigger compaction on the other
>> nodes?
>> 2. When I run the repair on the second node with nodetool repair -pr
>> --full keyspace_event; will the subsequent compaction run again on all 6
>> nodes?
>>
>> I want to know the best option for running the repair (full repair), as
>> we have not run it before, and especially whether it can take less time
>> (at the current speed it will take 2 weeks to finish).
>>
>> I am running Cassandra 3.0.14.
>>
>> Any suggestions will be appreciated.
>>
>>
>>
>> Thanks
>>
>> Regards
>>
>> Martin
>>
>>
>>
>
