Re: Unsuccessful back-up and restore with differing counts

2017-08-30 Thread Jai Bheemsen Rao Dhanwada
This link gives me a 404; can you please give me the correct link?

On Sat, May 13, 2017 at 10:51 AM, Surbhi Gupta wrote:

> The link below describes the method you are looking for:
> http://datascale.io/cloning-cassandra-clusters-fast-way/


Re: Unsuccessful back-up and restore with differing counts

2017-05-13 Thread Surbhi Gupta
The link below describes the method you are looking for:
http://datascale.io/cloning-cassandra-clusters-fast-way/


Re: Unsuccessful back-up and restore with differing counts

2017-05-13 Thread srinivasarao daruna
I am using vnodes. Is there any documentation you can suggest for
understanding how to assign the same tokens in the new cluster? I will try
it again.


Re: Unsuccessful back-up and restore with differing counts

2017-05-13 Thread Nitan Kainth
As Jonathan mentioned, if you are using vnodes, you should back up the
token assignments (from nodetool output), give each new node the same
tokens, and then copy over the corresponding sstables.

If you are using initial_token instead, assign the same value to the
corresponding new node.

sstableloader should work independently of token assignment; I am not sure
why you are getting wrong counts with that approach.
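
For the vnodes case, a rough sketch of capturing and reassigning tokens
(host names and file paths here are placeholders):

    # On each old node: record the tokens it owns as a comma-separated list
    nodetool info -T | grep '^Token' | awk '{print $3}' | paste -sd, - \
        > /tmp/tokens_node1.txt

    # On the matching new node, before its first start, set the same list
    # in cassandra.yaml (num_tokens should match the number of tokens):
    #   initial_token: <contents of tokens_node1.txt>
    # Start the node, then copy that old node's sstables into place and
    # run nodetool refresh.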


Re: Unsuccessful back-up and restore with differing counts

2017-05-13 Thread Jonathan Haddad
Did you create the nodes with the same tokens?


Unsuccessful back-up and restore with differing counts

2017-05-13 Thread srinivasarao daruna
Hi,

We have a Cassandra cluster built on Apache Cassandra 3.9 with 6 nodes and
RF = 3. As part of rebuilding the cluster, we are testing our backup and
restore strategy.

We took a snapshot on each node and uploaded the files to S3, saving the
data under folder names backup_folder1 - 6 for nodes 1 - 6.
We then created a new cluster with the same number of nodes, created the
schema, and copied the data back from S3.
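
For reference, the backup on each node was roughly along these lines (the
keyspace, table, and bucket names below are placeholders, not our real
ones):

    # On node N of the old cluster: flush and snapshot the keyspace
    nodetool flush my_ks
    nodetool snapshot -t backup my_ks

    # Upload only the snapshot directories to S3
    aws s3 sync /var/lib/cassandra/data/my_ks/ \
        s3://my-bucket/backup_folderN/ \
        --exclude "*" --include "*/snapshots/backup/*"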

*Strategy 1: (using nodetool refresh)*
1) Copied the data back from S3, one folder per machine (backup_folder1 - 6
onto the 6 nodes).
2) Ran nodetool refresh on every node, roughly as sketched below.
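
Per node, this looked something like the following (placeholder names
again; note that table directories embed a UUID that differs between
clusters, so the files must land in the new cluster's directory for that
table):

    # On node N of the new cluster: copy the snapshotted sstables into
    # the live table directory created by the new schema, flattening
    # the snapshots/backup/ sub-path
    aws s3 cp s3://my-bucket/backup_folderN/my_table-<old-uuid>/snapshots/backup/ \
        /var/lib/cassandra/data/my_ks/my_table-<new-uuid>/ --recursive

    # Make Cassandra load the new sstables without a restart
    nodetool refresh -- my_ks my_table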

Ran the count:

Count on previous cluster: 12125800
Count on new cluster: 10504780
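
(For what it is worth, the counts came from a plain CQL count, roughly as
below with placeholder names; running it at CONSISTENCY ALL avoids
under-counting from a stale replica:)

    # Count at the strongest consistency so a lagging replica cannot
    # under-report; large tables may need a longer request timeout
    cqlsh newhost1 -e "CONSISTENCY ALL; SELECT COUNT(*) FROM my_ks.my_table;" \
        --request-timeout=3600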

*Strategy 2: using sstableloader*

1) Copied the data back from S3, one folder per machine (backup_folder1 - 6
onto the 6 nodes).
2) Ran sstableloader on each node, roughly as sketched below.
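
Per node, something like this (placeholder names; the source directory
path must end in <keyspace>/<table>):

    # Stream the backed-up sstables into the new cluster; rows go to
    # whichever new nodes own them, regardless of token assignment
    sstableloader -d newhost1,newhost2,newhost3 \
        /restore/backup_folderN/my_ks/my_table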

Ran the count:

Count on previous cluster: 12125800
Count on new cluster: 11705084


Looking at the results, I am a bit disappointed that neither approach
resulted in a 100% restore.
If the backup itself were at fault, both restore methods should have
produced the same (wrong) count; since they differ, something seems to be
going wrong in the restore step.

Any ideas on a successful back-up and restore strategy, and what could
have gone wrong in my process?

Thank You,
Regards,
Srini