Re: Need help with incremental repair

2017-10-30 Thread Blake Eggleston
Ah cool, I didn't realize reaper did that.

On October 30, 2017 at 1:29:26 PM, Paulo Motta (pauloricard...@gmail.com) wrote:

> This is also the case for full repairs, if I'm not mistaken. Assuming I'm not 
> missing something here, that should mean that he shouldn't need to mark 
> sstables as unrepaired? 

That's right, but he mentioned that he is using Reaper, which if I'm not 
mistaken uses subrange repair, and subrange repair doesn't trigger 
anticompaction. So in that case he should probably mark his data as 
unrepaired once he is no longer using incremental repair. 

2017-10-31 3:52 GMT+11:00 Blake Eggleston: 
>> Once you run incremental repair, your data is permanently marked as 
>> repaired 
> 
> This is also the case for full repairs, if I'm not mistaken. I'll admit I'm 
> not as familiar with the quirks of repair in 2.2, but prior to 
> 4.0/CASSANDRA-9143, any global repair ends with an anticompaction that marks 
> sstables as repaired. Looking at the RepairRunnable class, this does seem to 
> be the case. Assuming I'm not missing something here, that should mean that 
> he shouldn't need to mark sstables as unrepaired? 




Re: Need help with incremental repair

2017-10-30 Thread Paulo Motta
> This is also the case for full repairs, if I'm not mistaken. Assuming I'm not 
> missing something here, that should mean that he shouldn't need to mark 
> sstables as unrepaired?

That's right, but he mentioned that he is using Reaper, which if I'm not
mistaken uses subrange repair, and subrange repair doesn't trigger
anticompaction. So in that case he should probably mark his data as
unrepaired once he is no longer using incremental repair.
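
For reference, a subrange repair invocation looks roughly like this (the token
values and keyspace name are purely illustrative; Reaper computes the segment
boundaries itself):

    # repair only the token range (start, end]; a subrange repair like this
    # skips the anticompaction step entirely
    nodetool repair -st -9223372036854775808 -et -4611686018427387904 my_keyspace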

2017-10-31 3:52 GMT+11:00 Blake Eggleston:
>> Once you run incremental repair, your data is permanently marked as
>> repaired
>
> This is also the case for full repairs, if I'm not mistaken. I'll admit I'm
> not as familiar with the quirks of repair in 2.2, but prior to
> 4.0/CASSANDRA-9143, any global repair ends with an anticompaction that marks
> sstables as repaired. Looking at the RepairRunnable class, this does seem to
> be the case. Assuming I'm not missing something here, that should mean that
> he shouldn't need to mark sstables as unrepaired?




Re: Need help with incremental repair

2017-10-30 Thread Blake Eggleston
> Once you run incremental repair, your data is permanently marked as repaired

This is also the case for full repairs, if I'm not mistaken. I'll admit I'm not 
as familiar with the quirks of repair in 2.2, but prior to 4.0/CASSANDRA-9143, 
any global repair ends with an anticompaction that marks sstables as repaired. 
Looking at the RepairRunnable class, this does seem to be the case. Assuming 
I'm not missing something here, that should mean that he shouldn't need to mark 
sstables as unrepaired?
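
One way to check this empirically is with the sstablemetadata tool that ships
with Cassandra; it prints the repairedAt timestamp of each sstable (the data
path below is illustrative):

    # "Repaired at: 0" means unrepaired; a non-zero value is the time the
    # sstable was marked repaired by anticompaction
    sstablemetadata /var/lib/cassandra/data/my_keyspace/my_table-*/*-Data.db | grep "Repaired at"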


Re: Need help with incremental repair

2017-10-30 Thread kurt greaves
Yes, mark them as unrepaired first. You can get sstablerepairedset from
source if you need to (just make sure you grab the correct branch/tag).
It's just a shell script, so as long as you have C* installed in a
default/canonical location it should work.
https://github.com/apache/cassandra/blob/trunk/tools/bin/sstablerepairedset
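
Something like this should do it; the tag below assumes a 2.2.8 install, so
treat the exact URL as an assumption and double-check it against the repo:

    # fetch the script from the release tag matching the installed version
    curl -O https://raw.githubusercontent.com/apache/cassandra/cassandra-2.2.8/tools/bin/sstablerepairedset
    chmod +x sstablerepairedset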


Re: Need help with incremental repair

2017-10-29 Thread Aiman Parvaiz
Thanks Blake and Paulo for the response.

Yes, the idea is to go back to non-incremental repairs. I am waiting for all 
the "anticompaction after repair" activities to complete, and in my 
understanding (thanks to Blake for the explanation), I can then run a full 
repair on that KS and get back to my non-incremental repair regimen.


I assume that I should mark the SSTables as unrepaired first and then run a 
full repair?

Also, although I installed Cassandra from the dsc22 package on CentOS 7, I 
couldn't find the sstable tools installed; I need to figure that out too.


From: Paulo Motta <pauloricard...@gmail.com>
Sent: Sunday, October 29, 2017 1:56:38 PM
To: user@cassandra.apache.org
Subject: Re: Need help with incremental repair

> Assuming the situation is just "we accidentally ran incremental repair", you 
> shouldn't have to do anything. It's not going to hurt anything

Once you run incremental repair, your data is permanently marked as
repaired, and is no longer compacted with new non-incrementally
repaired data. This can cause read fragmentation and prevent deleted
data from being purged. If you ever run incremental repair and want to
switch to non-incremental repair, you should manually mark your
repaired SSTables as not-repaired with the sstablerepairedset tool.

2017-10-29 3:05 GMT+11:00 Blake Eggleston <beggles...@apple.com>:
> Hey Aiman,
>
> Assuming the situation is just "we accidentally ran incremental repair", you
> shouldn't have to do anything. It's not going to hurt anything. Pre-4.0
> incremental repair has some issues that can cause a lot of extra streaming,
> and inconsistencies in some edge cases, but as long as you're running full
> repairs before gc grace expires, everything should be ok.
>
> Thanks,
>
> Blake
>
>
> On October 28, 2017 at 1:28:42 AM, Aiman Parvaiz (ai...@steelhouse.com)
> wrote:
>
> Hi everyone,
>
> We seek your help with an issue we are facing on version 2.2.8.
>
> We have a 24-node cluster spread over 3 DCs.
>
> Initially, when the cluster was in a single DC, we were using The Last Pickle
> Reaper 0.5 to repair it with incremental repair set to false. We then added 2
> more DCs. Now the problem is that on one of the newer DCs we accidentally ran
> nodetool repair without realizing that in 2.2 the default option is
> incremental.
>
> I am not seeing any errors in the logs so far, but wanted to know the best
> way to handle this situation. To make things a little more complicated, the
> node on which we triggered this repair is almost out of disk space, and we
> had to restart C* on it.
>
> I can see a bunch of "anticompaction after repair" under OpsCenter Activities
> across various nodes in the 3 DCs.
>
>
> Any help or suggestions would be appreciated.
>
> Thanks
>
>




Re: Need help with incremental repair

2017-10-29 Thread Paulo Motta
> Assuming the situation is just "we accidentally ran incremental repair", you 
> shouldn't have to do anything. It's not going to hurt anything

Once you run incremental repair, your data is permanently marked as
repaired, and is no longer compacted with new non-incrementally
repaired data. This can cause read fragmentation and prevent deleted
data from being purged. If you ever run incremental repair and want to
switch to non-incremental repair, you should manually mark your
repaired SSTables as not-repaired with the sstablerepairedset tool.
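
A minimal sketch of that step (stop the node first, since the tool rewrites
sstable metadata on disk; the data path below is illustrative):

    # list every sstable in the keyspace, then flip them all to unrepaired
    find /var/lib/cassandra/data/my_keyspace -name "*-Data.db" > sstables.txt
    sstablerepairedset --really-set --is-unrepaired -f sstables.txt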

2017-10-29 3:05 GMT+11:00 Blake Eggleston:
> Hey Aiman,
>
> Assuming the situation is just "we accidentally ran incremental repair", you
> shouldn't have to do anything. It's not going to hurt anything. Pre-4.0
> incremental repair has some issues that can cause a lot of extra streaming,
> and inconsistencies in some edge cases, but as long as you're running full
> repairs before gc grace expires, everything should be ok.
>
> Thanks,
>
> Blake
>
>
> On October 28, 2017 at 1:28:42 AM, Aiman Parvaiz (ai...@steelhouse.com)
> wrote:
>
> Hi everyone,
>
> We seek your help with an issue we are facing on version 2.2.8.
>
> We have a 24-node cluster spread over 3 DCs.
>
> Initially, when the cluster was in a single DC, we were using The Last Pickle
> Reaper 0.5 to repair it with incremental repair set to false. We then added 2
> more DCs. Now the problem is that on one of the newer DCs we accidentally ran
> nodetool repair without realizing that in 2.2 the default option is
> incremental.
>
> I am not seeing any errors in the logs so far, but wanted to know the best
> way to handle this situation. To make things a little more complicated, the
> node on which we triggered this repair is almost out of disk space, and we
> had to restart C* on it.
>
> I can see a bunch of "anticompaction after repair" under OpsCenter Activities
> across various nodes in the 3 DCs.
>
>
> Any help or suggestions would be appreciated.
>
> Thanks
>
>




Re: Need help with incremental repair

2017-10-28 Thread Blake Eggleston
Hey Aiman,

Assuming the situation is just "we accidentally ran incremental repair", you 
shouldn't have to do anything. It's not going to hurt anything. Pre-4.0 
incremental repair has some issues that can cause a lot of extra streaming, and 
inconsistencies in some edge cases, but as long as you're running full repairs 
before gc grace expires, everything should be ok.
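
In 2.2 that means passing -full explicitly, since incremental is the default
there (the keyspace name is illustrative):

    # force a full, non-incremental repair
    nodetool repair -full my_keyspace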

Thanks,

Blake


On October 28, 2017 at 1:28:42 AM, Aiman Parvaiz (ai...@steelhouse.com) wrote:

Hi everyone,

We seek your help with an issue we are facing on version 2.2.8.

We have a 24-node cluster spread over 3 DCs.

Initially, when the cluster was in a single DC, we were using The Last Pickle 
Reaper 0.5 to repair it with incremental repair set to false. We then added 2 
more DCs. Now the problem is that on one of the newer DCs we accidentally ran 
nodetool repair without realizing that in 2.2 the default option is 
incremental.

I am not seeing any errors in the logs so far, but wanted to know the best way 
to handle this situation. To make things a little more complicated, the node on 
which we triggered this repair is almost out of disk space, and we had to 
restart C* on it.

I can see a bunch of "anticompaction after repair" under OpsCenter Activities 
across various nodes in the 3 DCs.

Any help or suggestions would be appreciated.

Thanks




Need help with incremental repair

2017-10-28 Thread Aiman Parvaiz
Hi everyone,

We seek your help with an issue we are facing on version 2.2.8.

We have a 24-node cluster spread over 3 DCs.

Initially, when the cluster was in a single DC, we were using The Last Pickle 
Reaper 0.5 to repair it with incremental repair set to false. We then added 2 
more DCs. Now the problem is that on one of the newer DCs we accidentally ran 
nodetool repair without realizing that in 2.2 the default option is 
incremental.

I am not seeing any errors in the logs so far, but wanted to know the best way 
to handle this situation. To make things a little more complicated, the node on 
which we triggered this repair is almost out of disk space, and we had to 
restart C* on it.

I can see a bunch of "anticompaction after repair" under OpsCenter Activities 
across various nodes in the 3 DCs.

Any help or suggestions would be appreciated.

Thanks