Re: Cassandra cross dc replication row isolation

2019-05-07 Thread Avinash Mandava
Are you asking if writes are atomic at the partition level? If so, yes. If
you have N columns in a simple k/v schema, and you send a write with X of
those N columns set, all X will be updated at the same time wherever that
write goes.

The CL thing is more about how tolerant you are to stale data, i.e. if you
write in one DC and you absolutely can't tolerate reads from a remote DC
showing stale data, you would have to write at EACH_QUORUM and read at
LOCAL_QUORUM. While I'm not one for blanket advice, and certainly you can
make the decision on this tradeoff, this is a last resort situation, one of
those "supported features" that you ought to be wary of, as it's a bit off
from the intended design/usage of the system.
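For intuition, here is a small sketch of the replica arithmetic behind that tradeoff. This is illustrative only (not driver code), and the model is deliberately simplified: it only counts how many replicas in a given DC are guaranteed to have acknowledged a write before a LOCAL_QUORUM read there.

```python
# Illustrative sketch: why EACH_QUORUM writes + LOCAL_QUORUM reads avoid
# stale cross-DC reads, while LOCAL_QUORUM writes do not. Simplified model.

def quorum(rf: int) -> int:
    """Quorum for a replication factor: floor(rf/2) + 1."""
    return rf // 2 + 1

def written_in_dc(write_cl: str, rf: int, is_coordinator_dc: bool) -> int:
    """Replicas guaranteed written in one DC before the write is acked."""
    if write_cl == "EACH_QUORUM":
        return quorum(rf)          # a quorum in *every* DC must ack
    if write_cl == "LOCAL_QUORUM":
        # remote DCs receive the write asynchronously: zero guaranteed
        return quorum(rf) if is_coordinator_dc else 0
    raise ValueError(write_cl)

def stale_local_read_possible(write_cl: str, rf: int,
                              read_in_coordinator_dc: bool) -> bool:
    """A LOCAL_QUORUM read can miss the write iff the guaranteed-written
    set and the read quorum need not overlap: written + quorum(rf) <= rf."""
    w = written_in_dc(write_cl, rf, read_in_coordinator_dc)
    return w + quorum(rf) <= rf

print(stale_local_read_possible("LOCAL_QUORUM", 3, False))  # True: remote DC can be stale
print(stale_local_read_possible("EACH_QUORUM", 3, False))   # False: overlap guaranteed
```

As the thread notes, paying an EACH_QUORUM round trip on every write is usually the wrong default; this just shows why it is the level that closes the stale-read window.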

On Tue, May 7, 2019 at 2:58 PM Rahul Singh 
wrote:

> Depends on the consistency level you are setting on write and read.
>
> What CL are you writing at and what CL are you reading at?
>
> The consistency level tells the coordinator when to send acknowledgement
> of a write and whether to cross DCs to confirm a write. It also tells the
> coordinator how many replicas to read and whether or not to cross DCs to
> get consensus.
>
> E.g. LOCAL_QUORUM is different from QUORUM.
> LOCAL_QUORUM guarantees data was saved to a quorum of nodes in the DC in
> which the coordinator accepted the write. Similarly, it would only check
> nodes in that DC. QUORUM would check across DCs in the whole cluster.
> On May 7, 2019, 12:11 PM -0500, Alexey Knyshev ,
> wrote:
>
> Hi there!
>
> Could someone please explain how Column Family would be replicated and
> "visible / readable" in the following scenario? Having multiple
> geo-distributed datacenters with significant latency (up to 100ms RTT).
> Let's name two of them A and B and consider the following 2 cases:
>
>    1. Cassandra client X inserts a row into a Column Family (CF) with Primary
>    Key = PK (all cells are set - no nulls possible). The write coordinator is in
>    dc A. All cells in this write should have the same writetime. For
>    simplicity let's assume that the Cassandra coordinator node sets the writetime.
>    After some amount of time (< RTT) client Y reads the whole row (select * ...)
>    from the same CF with the same PK, talking to a coordinator node from another
>    dc (B). Is it possible that client Y will get some cells as NULLs? I.e., is
>    it possible to read some already replicated cells and for others get NULLs,
>    or does Cassandra guarantee row-level isolation / an atomic write for that
>    insert? Assume that the row (all cells for the same PK) will never be updated /
>    deleted afterwards.
>    2. Same as in p.1, but after the first write at PK the same client (X) updates
>    some columns for the same PK. Will this update be isolated / atomically
>    written and eventually visible in another dc? Will a client see the isolated
>    state as it was before the write or after it?
>
> Thanks in advance!
>
>
> --
> linkedin.com/profile
> 
>
> github.com/alexeyknyshev
> bitbucket.org/alexeyknyshev
>
>

-- 
www.vorstella.com
408 691 8402


Re: Cassandra cross dc replication row isolation

2019-05-07 Thread Rahul Singh
Depends on the consistency level you are setting on write and read.

What CL are you writing at and what CL are you reading at?

The consistency level tells the coordinator when to send acknowledgement of a
write and whether to cross DCs to confirm a write. It also tells the
coordinator how many replicas to read and whether or not to cross DCs to get
consensus.

E.g. LOCAL_QUORUM is different from QUORUM.
LOCAL_QUORUM guarantees data was saved to a quorum of nodes in the DC in which
the coordinator accepted the write. Similarly, it would only check nodes in that
DC. QUORUM would check across DCs in the whole cluster.
On May 7, 2019, 12:11 PM -0500, Alexey Knyshev , 
wrote:
> Hi there!
>
> Could someone please explain how Column Family would be replicated and 
> "visible / readable" in the following scenario? Having multiple 
> geo-distributed datacenters with significant latency (up to 100ms RTT). Let's 
> name two of them A and B and consider the following 2 cases:
>
> 1. Cassandra client X inserts a row into a Column Family (CF) with Primary Key =
> PK (all cells are set - no nulls possible). The write coordinator is in dc A. All
> cells in this write should have the same writetime. For simplicity let's
> assume that the Cassandra coordinator node sets the writetime. After some amount
> of time (< RTT) client Y reads the whole row (select * ...) from the same CF with
> the same PK, talking to a coordinator node from another dc (B). Is it possible
> that client Y will get some cells as NULLs? I.e., is it possible to read some
> already replicated cells and for others get NULLs, or does Cassandra
> guarantee row-level isolation / an atomic write for that insert? Assume that the
> row (all cells for the same PK) will never be updated / deleted afterwards.
> 2. Same as in p.1, but after the first write at PK the same client (X) updates
> some columns for the same PK. Will this update be isolated / atomically written
> and eventually visible in another dc? Will a client see the isolated state as it
> was before the write or after it?
>
> Thanks in advance!
>
>
> --
> linkedin.com/profile
>
> github.com/alexeyknyshev
> bitbucket.org/alexeyknyshev


Re: Cassandra taking very long to start and server under heavy load

2019-05-07 Thread Carl Mueller
You may have encountered the same behavior we have encountered going from
2.1 --> 2.2 a week or so ago.

We also have multiple data dirs. Hm.

In our case, we will purge the data of the big offending table.

How big are your nodes?

On Tue, May 7, 2019 at 1:40 AM Evgeny Inberg  wrote:

> Still no resolution for this. Did anyone else encounter the same behavior?
>
> On Thu, May 2, 2019 at 1:54 PM Evgeny Inberg  wrote:
>
>> Yes, sstable upgraded on each node.
>>
>> On Thu, 2 May 2019, 13:39 Nick Hatfield 
>> wrote:
>>
>>> Just curious but, did you make sure to run the sstable upgrade after you
>>> completed the move from 2.x to 3.x ?
>>>
>>>
>>>
>>> *From:* Evgeny Inberg [mailto:evg...@gmail.com]
>>> *Sent:* Thursday, May 02, 2019 1:31 AM
>>> *To:* user@cassandra.apache.org
>>> *Subject:* Re: Cassandra taking very long to start and server under
>>> heavy load
>>>
>>>
>>>
>>> Using a single data disk.
>>>
>>> Also, it is performing mostly heavy read operations according to the
>>> metrics collected.
>>>
>>> On Wed, 1 May 2019, 20:14 Jeff Jirsa  wrote:
>>>
>>> Do you have multiple data disks?
>>>
>>> Cassandra 6696 changed behavior with multiple data disks to make it
>>> safer in the situation that one disk fails . It may be copying data to the
>>> right places on startup, can you see if sstables are being moved on disk?
>>>
>>> --
>>>
>>> Jeff Jirsa
>>>
>>>
>>>
>>>
>>> On May 1, 2019, at 6:04 AM, Evgeny Inberg  wrote:
>>>
>>> I have upgraded a Cassandra cluster from version 2.0.x to 3.11.4, going
>>> through 2.1.14.
>>>
>>> After the upgrade, noticed that each node is taking about 10-15 minutes
>>> to start, and server is under a very heavy load.
>>>
>>> Did some digging around and got a few leads from the debug log.
>>>
>>> Messages like:
>>>
>>> *Keyspace.java:351 - New replication settings for keyspace system_auth -
>>> invalidating disk boundary caches *
>>>
>>> *CompactionStrategyManager.java:380 - Recreating compaction strategy -
>>> disk boundaries are out of date for system_auth.roles.*
>>>
>>>
>>>
>>> This is repeating for all keyspaces.
>>>
>>>
>>>
>>> Any suggestions on what to check, and what might cause this to happen on
>>> every start?
>>>
>>>
>>>
>>> Thanks!
>>>
>>>
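The multi-disk behavior Jeff mentions (CASSANDRA-6696) can be sketched very roughly: each data directory is pinned to a contiguous slice of the node's token range, and sstables whose tokens fall outside their directory's slice get moved at startup. The model below is an assumption-laden simplification (real boundaries come from the ring and the node's local ranges, not an even split), meant only to illustrate why startup can involve a lot of data movement.

```python
# Simplified sketch of disk-boundary placement (CASSANDRA-6696-style).
# Illustrative only: boundaries here are an even split of a toy token range.

def disk_boundaries(token_min, token_max, num_disks):
    """Split the node's token range into contiguous slices, one per data dir."""
    span = token_max - token_min
    return [(token_min + span * i // num_disks,
             token_min + span * (i + 1) // num_disks)
            for i in range(num_disks)]

def needs_move(sstable_tokens, disk_range):
    """An sstable is misplaced if any of its tokens fall outside its disk's slice."""
    lo, hi = disk_range
    return any(not (lo <= t < hi) for t in sstable_tokens)

bounds = disk_boundaries(0, 100, 2)     # [(0, 50), (50, 100)]
print(needs_move([10, 60], bounds[0]))  # True: token 60 belongs on the second disk
```

If many sstables straddle the boundaries (e.g. after an upgrade or a replication change that invalidated the cached boundaries), startup has to shuffle them to the right directories, which would explain both the slow start and the heavy load.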


Re: 2019 manual deletion of sstables

2019-05-07 Thread Carl Mueller
(repair would be done after the obviously deletable sstables were removed on
all the nodes)
(we may then do a purge program anyway)
(this would seem to get rid of 60-90% of the purgeable data without
incurring a big round of tombstones and compaction)

On Tue, May 7, 2019 at 12:05 PM Carl Mueller 
wrote:

> My last round of googling found some people doing this back in 2.0.x days:
> you could do it if you brought a node down, removed the desired sstable
> #'s artifacts (Data/Index/etc), and then started up. Probably also with a
> clearing of the saved caches.
>
> A decent-ish amount of data (256G) in a 2.1 cluster we are trying to
> upgrade has about 60-70% of the data that could be purged.
>
> The data has only partition keys (no column key) and is only written once.
> So the sstables that are expired don't have data leaking across other
> sstables.
>
> So can we do this:
>
> 1) bring down node
> 2) remove an sstable with obviously old data (we use the sstablemetadata
> tool to double-check)
> 3) clear saved caches
> 4) start back up
>
> And then repair afterward?
>
> The table is STCS. We are trying to avoid writing a purge program and
> prompting a full compaction.
>


Cassandra cross dc replication row isolation

2019-05-07 Thread Alexey Knyshev
Hi there!

Could someone please explain how Column Family would be replicated and
"visible / readable" in the following scenario? Having multiple
geo-distributed datacenters with significant latency (up to 100ms RTT).
Let's name two of them A and B and consider the following 2 cases:

   1. Cassandra client X inserts a row into a Column Family (CF) with Primary
   Key = PK (all cells are set - no nulls possible). The write coordinator is in
   dc A. All cells in this write should have the same writetime. For
   simplicity let's assume that the Cassandra coordinator node sets the writetime.
   After some amount of time (< RTT) client Y reads the whole row (select * ...)
   from the same CF with the same PK, talking to a coordinator node from another
   dc (B). Is it possible that client Y will get some cells as NULLs? I.e., is
   it possible to read some already replicated cells and for others get NULLs,
   or does Cassandra guarantee row-level isolation / an atomic write for that
   insert? Assume that the row (all cells for the same PK) will never be updated /
   deleted afterwards.
   2. Same as in p.1, but after the first write at PK the same client (X) updates
   some columns for the same PK. Will this update be isolated / atomically
   written and eventually visible in another dc? Will a client see the isolated
   state as it was before the write or after it?

Thanks in advance!


-- 
linkedin.com/profile


github.com/alexeyknyshev
bitbucket.org/alexeyknyshev


2019 manual deletion of sstables

2019-05-07 Thread Carl Mueller
My last round of googling found some people doing this back in 2.0.x days:
you could do it if you brought a node down, removed the desired sstable
#'s artifacts (Data/Index/etc), and then started up. Probably also with a
clearing of the saved caches.

A decent-ish amount of data (256G) in a 2.1 cluster we are trying to
upgrade has about 60-70% of the data that could be purged.

The data has only partition keys (no column key) and is only written once.
So the sstables that are expired don't have data leaking across other
sstables.

So can we do this:

1) bring down node
2) remove an sstable with obviously old data (we use the sstablemetadata
tool to double-check)
3) clear saved caches
4) start back up

And then repair afterward?

The table is STCS. We are trying to avoid writing a purge program and
prompting a full compaction.
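The selection step in 2) could be sketched like this. Everything here is a hypothetical stand-in: the field names mimic what sstablemetadata reports (maximum cell timestamp), the cutoff is arbitrary, and the approach is only safe under the write-once assumption stated above, where a partition never leaks across sstables.

```python
# Hypothetical sketch of picking deletable sstables by age, simulating
# values one might read from sstablemetadata. Not a Cassandra API.
import time

def deletable(sstables, max_age_days, now=None):
    """Return names of sstables whose newest cell is older than the cutoff.
    Safe only for write-once data: old rows must never appear in newer sstables."""
    now = now if now is not None else time.time()
    cutoff = now - max_age_days * 86400
    return [s["name"] for s in sstables if s["max_timestamp"] < cutoff]

tables = [
    {"name": "mc-101-big", "max_timestamp": 1_500_000_000},  # years old
    {"name": "mc-202-big", "max_timestamp": 1_700_000_000},  # recent
]
print(deletable(tables, 365, now=1_700_000_100))  # ['mc-101-big']
```

The caveat in the thread still applies: remove the whole component set (Data/Index/etc) for each chosen generation while the node is down, clear saved caches, and repair afterwards.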


Re: Is There a Way To Proactively Monitor Reads Returning No Data Due to Consistency Level?

2019-05-07 Thread Jeff Jirsa

Short answer is no, because missing consistency isn’t an error and there’s no 
way to know you’ve missed data without reading at ALL, and if it were ok to 
read at ALL you’d already be doing it (it’s not ok for most apps).
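The replica-overlap arithmetic behind this answer can be sketched briefly; the RF and the small set of consistency levels below are illustrative, not a complete model of Cassandra's CL semantics.

```python
# Sketch: a successful read can silently miss the latest successful write
# whenever the write set and read set are not guaranteed to overlap,
# i.e. replicas(W) + replicas(R) <= RF. Illustrative, single-DC model.

def replicas(cl: str, rf: int) -> int:
    return {"ONE": 1, "QUORUM": rf // 2 + 1, "ALL": rf}[cl]

def can_miss_data(write_cl: str, read_cl: str, rf: int = 3) -> bool:
    """True if a successful read may not include the latest successful write."""
    return replicas(write_cl, rf) + replicas(read_cl, rf) <= rf

print(can_miss_data("QUORUM", "ONE"))     # True: 2 + 1 == 3, overlap not guaranteed
print(can_miss_data("QUORUM", "QUORUM"))  # False: 2 + 2 > 3
print(can_miss_data("ONE", "ALL"))        # False: reading ALL always overlaps
```

This is why scenario 3 below is invisible to the server: a read that satisfied its CL but happened to hit only stale replicas completes successfully, so no error metric fires.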

> On May 7, 2019, at 8:05 AM, Fd Habash  wrote:
> 
> Typically, when a read is submitted to C*, it may complete with …
> 1. No errors & returns expected data
> 2. Errors out with UnavailableException

Can also error with timeout 

> 3. No error & returns zero rows on first attempt, but returned on subsequent
> runs.

Can also return stale or incomplete data, not just no data

>  
> The third scenario happens as a result of cluster entropy, especially during
> unexpected outages affecting on-premise or cloud infrastructures.
>  
> Typical scenario …
> a) Multiple nodes fail in the cluster
> b) Node replaced via bootstrapping

You must run repair among the remaining replicas before replacement if you want 
to maintain consistency guarantees

> c) Row is in Cassandra, but the client hits nodes that do not have the data yet.
> Gets zero rows. The row is retrieved on the third or fourth attempt and read
> repair takes care of it.
> d) Eventually, repair is run and the issue is fixed.
>  
> Digging in Cassandra metrics, I’ve found ‘cassandra.unavailables.count’.
> It looks like this metric only captures the UnavailableException scenario (2),
> however.
>  
> I have also read the Yelp article describing a metric they called 
> ‘underreplicated keyspaces’. These are keyspace ranges that will fail to 
> satisfy reads/write at a certain CL due to insufficient endpoints. If my 
> understanding is correct, this is also measuring scenario 2.
>  
> Trying to find a metric to capture scenario 3 above. Is this possible at all?
>  
>  
>  
> 
> Thank you
>  


Is There a Way To Proactively Monitor Reads Returning No Data Due to Consistency Level?

2019-05-07 Thread Fd Habash
Typically, when a read is submitted to C*, it may complete with …
1. No errors & returns expected data
2. Errors out with UnavailableException
3. No error & returns zero rows on first attempt, but returned on subsequent 
runs.

The third scenario happens as a result of cluster entropy, especially during
unexpected outages affecting on-premise or cloud infrastructures.

Typical scenario …
a) Multiple nodes fail in the cluster
b) Node replaced via bootstrapping
c) Row is in Cassandra, but the client hits nodes that do not have the data yet.
Gets zero rows. The row is retrieved on the third or fourth attempt and read
repair takes care of it.
d) Eventually, repair is run and the issue is fixed.

Digging in Cassandra metrics, I’ve found ‘cassandra.unavailables.count’. It looks
like this metric only captures the UnavailableException scenario (2), however.

I have also read the Yelp article describing a metric they called 
‘underreplicated keyspaces’. These are keyspace ranges that will fail to 
satisfy reads/write at a certain CL due to insufficient endpoints. If my 
understanding is correct, this is also measuring scenario 2. 

Trying to find a metric to capture scenario 3 above. Is this possible at all?




Thank you



Re: TWCS sstables not dropping even though all data is expired

2019-05-07 Thread Mike Torra
Thx for the tips Jeff, I'm definitely going to start using table level TTLs
(not sure why I didn't before), and I'll take a look at the tombstone
compaction subproperties
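The blocking behavior discussed in this thread can be modeled very roughly. This is a deliberately simplified sketch, not real TWCS logic (actual droppability depends on sstable overlaps, which is what sstableexpiredblockers inspects): one sstable holding a cell without a TTL never fully expires, and every later, fully-expired sstable stays stuck behind it.

```python
# Simplified model of TWCS expiration blocking: sstables sorted oldest-first
# by time window; a fully-expired sstable is dropped only if every older
# sstable has been dropped. A non-TTL'd cell makes its sstable immortal.

def droppable(sstables, now):
    """Yield names of sstables that can be dropped, stopping at the first
    sstable that still holds live (non-expired) data."""
    for s in sorted(sstables, key=lambda s: s["window"]):
        if s["max_deletion_time"] > now:   # still-live data: blocks the rest
            return
        yield s["name"]

tables = [
    {"name": "w1", "window": 1, "max_deletion_time": 100},
    {"name": "w2", "window": 2, "max_deletion_time": float("inf")},  # no-TTL row
    {"name": "w3", "window": 3, "max_deletion_time": 300},
]
print(list(droppable(tables, now=1000)))  # ['w1'] — w2 blocks w3 indefinitely
```

Because TWCS never compacts different time windows together, only a manual major compaction (or removing the rogue row) frees w3, matching what was observed in this thread.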

On Mon, May 6, 2019 at 10:43 AM Jeff Jirsa  wrote:

> Fwiw if you enable the tombstone compaction subproperties, you’ll compact
> away most of the other data in those old sstables (but not the partition
> that’s been manually updated)
>
> Also table level TTLs help catch this type of manual manipulation -
> consider adding it if appropriate.
>
> --
> Jeff Jirsa
>
>
> On May 6, 2019, at 7:29 AM, Mike Torra 
> wrote:
>
> Compaction settings:
> ```
> compaction = {'class':
> 'org.apache.cassandra.db.compaction.TimeWindowCompactionStrategy',
> 'compaction_window_size': '6', 'compaction_window_unit': 'HOURS',
> 'max_threshold': '32', 'min_threshold': '4'}
> ```
> read_repair_chance is 0, and I don't do any repairs because (normally)
> everything has a ttl. It does seem like Jeff is right that a manual
> insert/update without a ttl is what caused this, so I know how to resolve
> it and prevent it from happening again.
>
> Thx again for all the help guys, I appreciate it!
>
>
> On Fri, May 3, 2019 at 11:21 PM Jeff Jirsa  wrote:
>
>> Repairs work fine with TWCS, but having a non-expiring row will prevent
>> tombstones in newer sstables from being purged
>>
>> I suspect someone did a manual insert/update without a ttl and that
>> effectively blocks all other expiring cells from being purged.
>>
>> --
>> Jeff Jirsa
>>
>>
>> On May 3, 2019, at 7:57 PM, Nick Hatfield 
>> wrote:
>>
>> Hi Mike,
>>
>>
>>
>> If you will, share your compaction settings. More than likely, your issue
>> is from 1 of 2 reasons:
>> 1. You have read repair chance set to anything other than 0
>>
>> 2. You’re running repairs on the TWCS CF
>>
>>
>>
>> Or both….
>>
>>
>>
>> *From:* Mike Torra [mailto:mto...@salesforce.com.INVALID
>> ]
>> *Sent:* Friday, May 03, 2019 3:00 PM
>> *To:* user@cassandra.apache.org
>> *Subject:* Re: TWCS sstables not dropping even though all data is expired
>>
>>
>>
>> Thx for the help Paul - there are definitely some details here I still
>> don't fully understand, but this helped me resolve the problem and know
>> what to look for in the future :)
>>
>>
>>
>> On Fri, May 3, 2019 at 12:44 PM Paul Chandler  wrote:
>>
>> Hi Mike,
>>
>>
>>
>> For TWCS the sstable can only be deleted when all the data has expired in
>> that sstable, but you had a record without a ttl in it, so that sstable
>> could never be deleted.
>>
>>
>>
>> That bit is straight forward, the next bit I remember reading somewhere
>> but can’t find it at the moment to confirm my thinking.
>>
>>
>>
>> An sstable can only be deleted if it is the earliest sstable. I think
>> this is due to the fact that deleting later sstables may expose old
>> versions of the data stored in the stuck sstable which had been superseded.
>> For example, if there was a tombstone in a later sstable for the non TTLed
>> record causing the problem in this instance. Then deleting that sstable
>> would cause that deleted data to reappear. (Someone please correct me if I
>> have this wrong)
>>
>>
>>
>> Because sstables in different time buckets are never compacted together,
>> this problem only goes away when you did the major compaction.
>>
>>
>>
>> This would happen on all replicas of the data, hence the reason you see this
>> problem on 3 nodes.
>>
>>
>>
>> Thanks
>>
>>
>>
>> Paul
>>
>> www.redshots.com
>>
>>
>>
>> On 3 May 2019, at 15:35, Mike Torra 
>> wrote:
>>
>>
>>
>> This does indeed seem to be a problem of overlapping sstables, but I
>> don't understand why the data (and number of sstables) just continues to
>> grow indefinitely. I also don't understand why this problem is only
>> appearing on some nodes. Is it just a coincidence that the one rogue test
>> row without a ttl is at the 'root' sstable causing the problem (ie, from
>> the output of `sstableexpiredblockers`)?
>>
>>
>>
>> Running a full compaction via `nodetool compact` reclaims the disk space,
>> but I'd like to figure out why this happened and prevent it. Understanding
>> why this problem would be isolated the way it is (ie only one CF even
>> though I have a few others that share a very similar schema, and only some
>> nodes) seems like it will help me prevent it.
>>
>>
>>
>>
>>
>> On Thu, May 2, 2019 at 1:00 PM Paul Chandler  wrote:
>>
>> Hi Mike,
>>
>>
>>
>> It sounds like that record may have been deleted, if that is the case
>> then it would still be shown in this sstable, but the deleted tombstone
>> record would be in a later sstable. You can use nodetool getsstables to
>> work out which sstables contain the data.
>>
>>
>>
>> I recommend reading The Last Pickle post on this:
>> http://thelastpickle.com/blog/2016/12/08/TWCS-part1.html the sections
>> towards the bottom of this post may well explain why the sstable is not
>> being deleted.
>>
>>
>>
>> Thanks
>>
>>
>>
>> Paul
>>
>> www.redshots.com
>>
>>
>>
>> On 2 May 2019, at 16:08, Mike 

Re: Corrupted sstables

2019-05-07 Thread Paul Chandler
Roy, we spent a long time trying to fix it but didn’t find a solution; it was a
test cluster, so we ended up rebuilding the cluster rather than spending
any more time trying to fix the corruption. We have worked out what had caused
it, so we were happy it wasn’t going to occur in production. Sorry that is not
much help, but I am not even sure it is the same issue you have.

Paul



> On 7 May 2019, at 07:14, Roy Burstein  wrote:
> 
> I can say that it happens now as well; currently no node has been
> added/removed.
> Corrupted sstables are usually the index files, and on some machines the
> sstable does not even exist on the filesystem.
> On one machine I was able to dump the sstable to a dump file without any
> issue. Any idea how to tackle this issue?
>  
> 
> On Tue, May 7, 2019 at 12:32 AM Paul Chandler  > wrote:
> Roy,
> 
> I have seen this exception before when a column had been dropped then re 
> added with the same name but a different type. In particular we dropped a 
> column and re created it as static, then had this exception from the old 
> sstables created prior to the ddl change.
> 
> Not sure if this applies in your case.
> 
> Thanks 
> 
> Paul
> 
>> On 6 May 2019, at 21:52, Nitan Kainth > > wrote:
>> 
>> Can the disk have bad sectors? fsck or something similar can help.
>> 
>> Long shot: repair or any other operation conflicting. Would leave that to 
>> others.
>> 
>> On Mon, May 6, 2019 at 3:50 PM Roy Burstein > > wrote:
>> It happens on the same column families and they have the same ddl (as
>> already posted). I did not check it after cleanup.
>> 
>> On Mon, May 6, 2019, 23:43 Nitan Kainth > > wrote:
>> This is strange, never saw this. does it happen to same column family?
>> 
>> Does it happen after cleanup?
>> 
>> On Mon, May 6, 2019 at 3:41 PM Roy Burstein > > wrote:
>> Yes.
>> 
>> On Mon, May 6, 2019, 23:23 Nitan Kainth > > wrote:
>> Roy,
>> 
>> You mean all nodes show corruption when you add a node to cluster??
>> 
>> 
>> Regards,
>> Nitan
>> Cell: 510 449 9629 
>> 
>> On May 6, 2019, at 2:48 PM, Roy Burstein > > wrote:
>> 
>>> It happened on all the servers in the cluster every time I have added a
>>> node.
>>> This is a new cluster, nothing was upgraded here; we have a similar cluster
>>> running on C* 2.1.15 with no issues.
>>> We are aware of the scrub utility; it just reproduces every time we add a
>>> node to the cluster.
>>> 
>>> We have many tables there
> 



Re: Cassandra taking very long to start and server under heavy load

2019-05-07 Thread Evgeny Inberg
Still no resolution for this. Did anyone else encounter the same behavior?

On Thu, May 2, 2019 at 1:54 PM Evgeny Inberg  wrote:

> Yes, sstable upgraded on each node.
>
> On Thu, 2 May 2019, 13:39 Nick Hatfield 
> wrote:
>
>> Just curious but, did you make sure to run the sstable upgrade after you
>> completed the move from 2.x to 3.x ?
>>
>>
>>
>> *From:* Evgeny Inberg [mailto:evg...@gmail.com]
>> *Sent:* Thursday, May 02, 2019 1:31 AM
>> *To:* user@cassandra.apache.org
>> *Subject:* Re: Cassandra taking very long to start and server under
>> heavy load
>>
>>
>>
>> Using a single data disk.
>>
>> Also, it is performing mostly heavy read operations according to the
>> metrics collected.
>>
>> On Wed, 1 May 2019, 20:14 Jeff Jirsa  wrote:
>>
>> Do you have multiple data disks?
>>
>> Cassandra 6696 changed behavior with multiple data disks to make it safer
>> in the situation that one disk fails . It may be copying data to the right
>> places on startup, can you see if sstables are being moved on disk?
>>
>> --
>>
>> Jeff Jirsa
>>
>>
>>
>>
>> On May 1, 2019, at 6:04 AM, Evgeny Inberg  wrote:
>>
>> I have upgraded a Cassandra cluster from version 2.0.x to 3.11.4, going
>> through 2.1.14.
>>
>> After the upgrade, noticed that each node is taking about 10-15 minutes
>> to start, and server is under a very heavy load.
>>
>> Did some digging around and got a few leads from the debug log.
>>
>> Messages like:
>>
>> *Keyspace.java:351 - New replication settings for keyspace system_auth -
>> invalidating disk boundary caches *
>>
>> *CompactionStrategyManager.java:380 - Recreating compaction strategy -
>> disk boundaries are out of date for system_auth.roles.*
>>
>>
>>
>> This is repeating for all keyspaces.
>>
>>
>>
>> Any suggestions on what to check, and what might cause this to happen on
>> every start?
>>
>>
>>
>> Thanks!
>>
>>


Re: Corrupted sstables

2019-05-07 Thread Roy Burstein
I can say that it happens now as well; currently no node has been
added/removed.
Corrupted sstables are usually the index files, and on some machines the
sstable does not even exist on the filesystem.
On one machine I was able to dump the sstable to a dump file without any
issue. Any idea how to tackle this issue?


On Tue, May 7, 2019 at 12:32 AM Paul Chandler  wrote:

> Roy,
>
> I have seen this exception before when a column had been dropped then re
> added with the same name but a different type. In particular we dropped a
> column and re created it as static, then had this exception from the old
> sstables created prior to the ddl change.
>
> Not sure if this applies in your case.
>
> Thanks
>
> Paul
>
> On 6 May 2019, at 21:52, Nitan Kainth  wrote:
>
> Can the disk have bad sectors? fsck or something similar can help.
>
> Long shot: repair or any other operation conflicting. Would leave that to
> others.
>
> On Mon, May 6, 2019 at 3:50 PM Roy Burstein 
> wrote:
>
>> It happens on the same column families and they have the same ddl (as
>> already posted). I did not check it after cleanup.
>>
>> On Mon, May 6, 2019, 23:43 Nitan Kainth  wrote:
>>
>>> This is strange, never saw this. does it happen to same column family?
>>>
>>> Does it happen after cleanup?
>>>
>>> On Mon, May 6, 2019 at 3:41 PM Roy Burstein 
>>> wrote:
>>>
 Yes.

 On Mon, May 6, 2019, 23:23 Nitan Kainth  wrote:

> Roy,
>
> You mean all nodes show corruption when you add a node to cluster??
>
>
> Regards,
> Nitan
> Cell: 510 449 9629
>
> On May 6, 2019, at 2:48 PM, Roy Burstein 
> wrote:
>
> It happened on all the servers in the cluster every time I have added a
> node.
> This is a new cluster, nothing was upgraded here; we have a similar
> cluster running on C* 2.1.15 with no issues.
> We are aware of the scrub utility; it just reproduces every time we add a
> node to the cluster.
>
> We have many tables there
>
>
>