Re: Assassinate or decommission?

2019-07-30 Thread Rhys Campbell
Are you sure it says to use assassinate as the first resort? Definitely not
the case.

Rahul Reddy  wrote on Tue., 30 Jul 2019, 12:05:

> Thanks Rhys,
>
> I have always used nodetool decommission when removing a single node so far. I'm
> not sure why the DataStax doc mentions assassinate as the first attempt. Have
> you seen any issues, such as gossip information lingering for a long time, when
> using decommission to remove a DC?
>
> On Tue, Jul 30, 2019, 5:43 AM Rhys Campbell
>  wrote:
>
>> The advice is to only use assassinate when all else fails. Decommission
>> will make sure any data that needs to be streamed elsewhere will be.
>>
>> Generally, decommission > removenode > assassinate is the recommended
>> order of attempts.
>>
>> https://thelastpickle.com/blog/2018/09/18/assassinate.html
>>
>> Rahul Reddy  wrote on Tue., 30 Jul 2019, 11:18:
>>
>>> Hello,
>>>
>>> While removing an old data center, is there any specific reason to use
>>> assassinate instead of decommission?
>>>
>>> https://docs.datastax.com/en/dse/5.1/dse-admin/datastax_enterprise/operations/opsDecomissionDC.html


Re: Assassinate or decommission?

2019-07-30 Thread Rhys Campbell
The advice is to only use assassinate when all else fails. Decommission
will make sure any data that needs to be streamed elsewhere will be.

Generally, decommission > removenode > assassinate is the recommended
order of attempts.

https://thelastpickle.com/blog/2018/09/18/assassinate.html
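
For reference, a rough sketch of that order in practice (the host ID and IP
below are placeholders; on older 2.x releases assassinate is only exposed as
the unsafeAssassinateEndpoint JMX operation rather than a nodetool command):

    # 1. Preferred: run on the node that is leaving, so it streams its data away first
    nodetool decommission

    # 2. If the node is already down and cannot be brought back, run from any live node
    #    (host ID comes from `nodetool status`)
    nodetool removenode <host-id>

    # 3. Last resort, when removenode hangs or fails: forcibly remove the endpoint
    #    from gossip; no data is streamed
    nodetool assassinate <ip-address>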

Rahul Reddy  wrote on Tue., 30 Jul 2019, 11:18:

> Hello,
>
> While removing an old data center, is there any specific reason to use
> assassinate instead of decommission?
>
> https://docs.datastax.com/en/dse/5.1/dse-admin/datastax_enterprise/operations/opsDecomissionDC.html
>


Re: Breaking up major compacted Sstable with TWCS

2019-07-12 Thread Rhys Campbell
https://docs.datastax.com/en/dse/5.1/dse-admin/datastax_enterprise/tools/toolsSStables/toolsSSTableSplit.html
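
For reference, a rough sketch of how sstablesplit might be invoked on that one
large file (Cassandra must be stopped on the node first, and the path below is
only illustrative; note that splitting by size does not re-bucket the data by
time window, so TWCS will still treat the pieces as old data until they expire):

    # Stop Cassandra on the node; sstablesplit only works on offline sstables
    sudo systemctl stop cassandra

    # Split the large sstable into ~50 MB pieces (-s sets the target size in MB)
    sstablesplit -s 50 /var/lib/cassandra/data/<keyspace>/<table>-*/<big-sstable>-Data.db

    # Start Cassandra again
    sudo systemctl start cassandra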

Leon Zaruvinsky  wrote on Fri., 12 Jul 2019, 00:06:

> Hi,
>
> We are switching a table to run using TWCS. However, after running the
> alter statement, we ran a major compaction without understanding the
> implications.
>
> Now, while new sstables are properly being created according to the time
> window, there is a giant sstable sitting around waiting for expiration.
>
> Is there a way we can break it up again?  Running the alter statement
> again doesn’t seem to be touching it.
>
> Thanks,
> Leon
>


Re: Need help on dealing with Cassandra robustness and zombie data

2019-07-01 Thread Rhys Campbell
#1 Set the cassandra service to not auto-start.
#2 A longer gc_grace_seconds would help.
#3 Re-bootstrap?

If the node doesn't come back within gc_grace_seconds, remove the node,
wipe it, and bootstrap it again.

https://docs.datastax.com/en/archived/cassandra/2.0/cassandra/dml/dml_about_deletes_c.html
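
A rough sketch of that remove/wipe/re-bootstrap sequence (the host ID and data
paths are placeholders; adjust to your installation):

    # Point 1: stop the node from silently rejoining after a reboot
    sudo systemctl disable cassandra

    # If the node was down longer than gc_grace_seconds, remove it from the ring
    # from any live node (host ID from `nodetool status`)
    nodetool removenode <host-id>

    # On the removed node, wipe its local state before letting it back in
    sudo rm -rf /var/lib/cassandra/data/* \
                /var/lib/cassandra/commitlog/* \
                /var/lib/cassandra/saved_caches/*

    # Start Cassandra again so the node bootstraps fresh from its peers
    sudo systemctl start cassandra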



yuping wang  wrote on Mon., 1 Jul 2019, 13:33:

> Hi all,
>
>   Sorry for the interruption, but I need help.
>
>
>    Due to the specific requirements of our use case, we have gc_grace_seconds on
> the order of 10 minutes instead of the default 10 days. Since we have a large
> number of nodes in our Cassandra fleet, we unsurprisingly see nodes occasionally
> go from up to down and back up again. The problem is that when a down node
> rejoins the cluster after 15 minutes, it automatically adds already-deleted data
> back, causing zombie data.
>
> Our questions:
>
>    1. Is there a way to prevent a down node from rejoining the cluster?
>    2. Or is there a way to configure a rejoining node not to add stale data
>    back, regardless of how long the node was down before rejoining?
>    3. Or is there a way to automatically clean up the data when rejoining?
>
> We know adding that data back is a conservative approach to avoid data
> loss, but in our specific case we are not worried about deleted data being
> revived; we don't have such a use case. We really need a non-default option
> to never add back deleted data on rejoining nodes.
>
> This functionality will ultimately be a deciding factor in whether we can
> continue with Cassandra.
>
>
> Thanks again,
>


Re: Restore from EBS onto different cluster

2019-06-28 Thread Rhys Campbell
Sstableloader is probably your best option
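
A minimal sketch of that approach, assuming the snapshot volume is mounted on a
machine that can reach cluster2 (hostname and paths are placeholders, and the
keyspace/table must already exist in cluster2 before loading):

    # Point sstableloader at the table directory from the snapshot volume;
    # -d takes one or more contact points in the target cluster (cluster2)
    sstableloader -d cluster2-node1.example.com \
        /mnt/snapshot/cassandra/data/<keyspace>/<table>/

Because sstableloader streams each partition to the nodes that own it in
cluster2's own ring, the old cluster's token ranges do not have to match the
new one.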

Ayub M  wrote on Fri., 28 Jun 2019, 08:37:

> Hello, I have a cluster with 3 nodes - say cluster1 - on AWS EC2 instances.
> The cluster is up and running, and I took a snapshot of the keyspace volumes.
>
> Now I want to restore a few tables/keyspaces from the snapshot volumes, so I
> created another cluster, say cluster2, and attached the snapshot volumes to
> the new cluster's EC2 nodes. Cluster2 is not starting because the system
> keyspace in the snapshot has the cluster name cluster1, while the cluster
> onto which it is being restored is cluster2. How do I do a restore in this
> case? I do not want to make any modifications to the existing cluster.
>
> Also, when I restore, do I need to think about the mapping of token ranges
> between the old and new clusters?
>
> Regards,
> Ayub
>


Re: Cluster schema version choosing

2019-05-21 Thread Rhys Campbell
I'd hazard a guess that the UUID contains a datetime component.

Aleksey Korolkov  wrote on Tue., 21 May 2019, 09:36:

> Thanks for the feedback.
> I also think that the node chooses along the lines of "last wins", but I could
> not find any timestamp of schema creation in the system tables.
> Hopefully it is not just the order of an element in a Map or List.
>
>
> On Tue, 21 May 2019 at 02:58, Stefan Miklosovic <
> stefan.mikloso...@instaclustr.com> wrote:
>
>> My guess is that the "latest" schema would be chosen, but I am
>> definitely interested in an in-depth explanation.
>>
>> On Tue, 21 May 2019 at 00:28, Alexey Korolkov 
>> wrote:
>> >
>> > Hello team,
>> > In some circumstances, my cluster was split into two schema versions
>> > (half of the nodes on one version, and the rest on another).
>> > In the process of resolving this issue, I restarted some nodes.
>> > Eventually, the nodes migrated to one schema, but it was not clear why
>> > they chose exactly that version of the schema.
>> > I haven't found any explanation of how the schema version is picked, so
>> > please help me find the algorithm for choosing the schema, or the
>> > classes in the source code responsible for this.
>> >
>> > --
>> > Sincerely yours,  Korolkov Aleksey
>>
>
> --
> Sincerely yours,  Korolkov Aleksey
>


Re: nodetool repair failing with "Validation failed in /X.X.X.X

2019-05-06 Thread Rhys Campbell
Hello Shalom,

Someone already tried a rolling restart of Cassandra. I will probably try
rebooting the OS.

Repair seems to work if you do it a keyspace at a time.
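
For what it's worth, a small sketch of the keyspace-at-a-time approach (just a
shell loop on one node; adjust hosts/credentials as needed):

    # Repair one keyspace at a time instead of one big cluster-wide repair
    for ks in $(cqlsh -e "DESCRIBE KEYSPACES"); do
        nodetool repair "$ks"
    done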

Thanks for your input.

Rhys

On Sun, May 5, 2019 at 2:14 PM shalom sagges  wrote:

> Hi Rhys,
>
> I encountered this error after adding new SSTables to a cluster and
> running nodetool refresh (v3.0.12).
> The refresh worked, but after starting repairs on the cluster, I got the
> "Validation failed in /X.X.X.X" error on the remote DC.
> A rolling restart solved the issue for me.
>
> Hope this helps!
>
>
>
> On Sat, May 4, 2019 at 3:58 PM Rhys Campbell
>  wrote:
>
>>
>> > Hello,
>> >
>> > I’m having issues running repair on an Apache Cassandra Cluster. I’m
>> getting "Failed creating a merkle tree“ errors on the replication partner
>> nodes. Anyone have any experience of this? I am running 2.2.13.
>> >
>> > Further details here…
>> https://issues.apache.org/jira/projects/CASSANDRA/issues/CASSANDRA-15109?filter=allopenissues
>> >
>> > Best,
>> >
>> > Rhys
>>
>>


nodetool repair failing with "Validation failed in /X.X.X.X

2019-05-04 Thread Rhys Campbell


> Hello,
> 
> I’m having issues running repair on an Apache Cassandra Cluster. I’m getting 
> "Failed creating a merkle tree“ errors on the replication partner nodes. 
> Anyone have any experience of this? I am running 2.2.13.
> 
> Further details here… 
> https://issues.apache.org/jira/projects/CASSANDRA/issues/CASSANDRA-15109?filter=allopenissues
> 
> Best,
> 
> Rhys

