Thank you very much, Jean. Since I don't have any constraints, as you said, I
will try copying the complete system keyspace node by node first, then run
nodetool refresh and see if it works.
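
For what it's worth, a rough sketch of what I plan to do per node (the rsync
approach, the default data directory path, and the keyspace/table names are
my assumptions):

    # with Cassandra stopped on the destination node, copy the data
    # directory (including the system keyspace) from the matching source node
    rsync -a src-node:/var/lib/cassandra/data/ /var/lib/cassandra/data/
    # after starting Cassandra, reload the copied SSTables table by table
    nodetool refresh my_keyspace my_table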



On Thu, Jan 11, 2018 at 3:21 PM, Jean Carlo <jean.jeancar...@gmail.com>
wrote:

> Hello,
>
> Basically, every node has to have the same token ranges. So yes, you have to
> play with initial_token, giving each node the same tokens as the
> corresponding node in the source cluster. To save time, and if you don't
> have any constraints about the cluster name etc., you can just copy and
> paste the complete system keyspace node by node.
>
> That way you will have the same cluster (cluster name, configuration, etc.).
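>
> In case it helps, a minimal sketch of matching the tokens node by node (the
> awk/paste pipeline is just my assumption about the nodetool output format):
>
>     # on each source node, print its tokens as a comma-separated list
>     nodetool info --tokens | awk '/^Token/ {print $3}' | paste -sd, -
>     # on the matching destination node, put that list into cassandra.yaml:
>     #   initial_token: <comma-separated list from above>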
>
>
> Saludos
>
> Jean Carlo
>
> "The best way to predict the future is to invent it" Alan Kay
>
> On Thu, Jan 11, 2018 at 10:28 AM, Pradeep Chhetri <prad...@stashaway.com>
> wrote:
>
>> Hello Jean,
>>
>> I am running Cassandra 3.11.1.
>>
>> Since I don't have much Cassandra operations experience yet, I have a
>> follow-up question: how can I ensure the same token range distribution?
>> Do I need to set the initial_token configuration for each Cassandra node?
>>
>> Thank you for the quick response.
>>
>> On Thu, Jan 11, 2018 at 3:04 PM, Jean Carlo <jean.jeancar...@gmail.com>
>> wrote:
>>
>>> Hello Pradeep,
>>>
>>> Actually, the key here is to know whether your clusters have the same
>>> token range distribution. It is not only about having the same size: the
>>> tokens must also match node by node, from the source cluster to the
>>> destination cluster. In that case, you can use nodetool refresh. So after
>>> copying all your SSTables node by node, it is enough to run nodetool
>>> refresh on every node to restore your data. You can also restart Cassandra
>>> instead of running nodetool refresh; that will help you avoid the
>>> compactions after refreshing.
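>>>
>>> For example, on each destination node after the copy (keyspace and table
>>> names here are placeholders):
>>>
>>>     # pick up the copied SSTables without restarting the node
>>>     nodetool refresh my_keyspace my_table
>>>     # or simply restart Cassandra: it loads the SSTables on startup
>>>     # and avoids the compactions triggered after a refresh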
>>>
>>>
>>> Saludos
>>>
>>> Jean Carlo
>>>
>>> "The best way to predict the future is to invent it" Alan Kay
>>>
>>> On Thu, Jan 11, 2018 at 9:58 AM, Pradeep Chhetri <prad...@stashaway.com>
>>> wrote:
>>>
>>>> Hello everyone,
>>>>
>>>> We are running a Cassandra cluster inside containers on Kubernetes. We
>>>> have a requirement to restore a fresh cluster from an existing snapshot
>>>> on a weekly basis.
>>>>
>>>> Currently, doing it manually, I need to copy the snapshot folder into the
>>>> container and then run the sstableloader utility to load those tables.
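>>>>
>>>> For reference, my manual steps look roughly like this (the pod name,
>>>> service address, and paths are just examples from my setup):
>>>>
>>>>     # copy the snapshot for one table into the container
>>>>     kubectl cp my_keyspace/my_table cassandra-0:/tmp/my_keyspace/my_table
>>>>     # stream the SSTables into the cluster
>>>>     sstableloader -d cassandra-0.cassandra.svc.cluster.local /tmp/my_keyspace/my_table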
>>>>
>>>> Since the source and destination cluster sizes are equal, I was wondering
>>>> whether there is an easier way to just copy the complete data directory,
>>>> mapping the nodes one to one, as sketched below.
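>>>>
>>>> Something like this is what I am imagining, assuming StatefulSet pods so
>>>> that the ordinal gives the one-to-one mapping (names are hypothetical):
>>>>
>>>>     for i in 0 1 2; do
>>>>         # pull node i's data directory from the source cluster ...
>>>>         kubectl cp src-cassandra-$i:/var/lib/cassandra/data ./data-$i
>>>>         # ... and push it to the matching node in the new cluster
>>>>         kubectl cp ./data-$i dst-cassandra-$i:/var/lib/cassandra/data
>>>>     done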
>>>>
>>>> Since I wasn't able to find documentation on backup/restore methods other
>>>> than nodetool snapshot and sstableloader, I haven't explored much. I
>>>> recently came across this project, https://github.com/Netflix/Priam, but
>>>> haven't tried it yet.
>>>>
>>>> I would be very happy to get some ideas about the various ways of doing
>>>> backup and restoration while running inside containers.
>>>>
>>>> Thank you
>>>>
>>>
>>>
>>
>
