Re: Restore from EBS onto different cluster

2019-07-04 Thread Alain RODRIGUEZ
Hello

I would not use sstableloader for big data sets... It's too slow and, I
think, it makes things more complex.
I love EBS for restoring nodes: things are even much easier than with
instance stores, and incredibly efficient restores are possible.

> Also, when I do the restore, do I need to think about the token-range
> mapping between the old and new clusters?
>

In case you have doubts about the procedure (that is, keeping the cluster1
name, as in solutions 1, 2 and 3 below; option 4 is somewhat different),
you can check this out:
https://thelastpickle.com/blog/2018/04/03/cassandra-backup-and-restore-aws-ebs.html#the-copypaste-approach-on-aws-ec2-with-ebs-volumes

> Now I want to restore a few tables/keyspaces from the snapshot volumes, so
> I created another cluster, say cluster2, and attached the snapshot volumes
> to the new cluster's EC2 nodes.
>

For cluster2, what's getting in your way is the 'system.peers' and
'system.local' tables (the latter holds the name of the cluster). The
cluster name and the information about the other nodes live there.

> Cluster2 is not starting because the system keyspace in the snapshot has
> the cluster name cluster1, while the cluster it is being restored onto is
> cluster2. How do I do a restore in this case?
>

Solutions I can think of right now:
1 - Use a dedicated private network (VPC), so production is definitely not
impacted, keep the cluster1 name for this cluster, and apply the standard
method: attach the EBS volumes (making sure you map tokens, DCs, racks,
etc. correctly), start Cassandra, and check that the data is there :).
2 - As mentioned above, you can keep the 'cluster1' name and prevent the
new nodes from talking to the old nodes through security groups as well
(but make no mistake... that's less safe than a new VPC, I'd say).
3 - Same idea, but isolate the groups of machines with a firewall
(iptables, ufw, ...).
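For option 3, the kind of rules I mean can be sketched like this. The
subnet is a hypothetical stand-in for cluster1's network, and 7000/7001
are Cassandra's default inter-node storage ports; adjust both to your
setup before applying anything:

```python
OLD_CLUSTER_CIDR = "10.0.1.0/24"  # hypothetical subnet holding cluster1's nodes
STORAGE_PORTS = (7000, 7001)      # Cassandra default inter-node (storage) ports

def isolation_rules(cidr, ports=STORAGE_PORTS):
    """Build one iptables DROP rule per inter-node port toward the old cluster."""
    return [
        f"iptables -A OUTPUT -p tcp -d {cidr} --dport {port} -j DROP"
        for port in ports
    ]

# Print the rules for review; run them as root on the new nodes to apply.
for rule in isolation_rules(OLD_CLUSTER_CIDR):
    print(rule)
```

This only blocks gossip/streaming traffic toward the old subnet; as said
above, a separate VPC is the safer isolation.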

If you really need/prefer to use the 'cluster2' name:
4 - (Advanced solution, might require more work):
- Remove the two system tables mentioned above. You can decide what to do
with 'system_auth' as well...
- Create a new cluster (instances, configuration management, etc.),
configured with the same tokens as the original cluster.
- Attach each EBS volume you created to the RIGHT new node (tokens; DCs and
racks might have to mirror the original ones, not too sure here) and use
the cluster name you like.
- Start the seed nodes (with explicit tokens = old tokens).
- Start the remaining nodes.
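The "same tokens" step above can be sketched as follows. The idea is to
capture each old node's token list beforehand (e.g. with `nodetool info -T`
on every cluster1 node) and render it as the `initial_token` line for the
new node that receives that node's volume. All IPs and token values below
are made up for illustration:

```python
# Tokens captured from each cluster1 node beforehand; values are invented.
old_node_tokens = {
    "10.0.1.11": [-9100000000000000000, -3000000000000000000, 4200000000000000000],
    "10.0.1.12": [-6100000000000000000, 100000000000000000, 7300000000000000000],
}

# Hypothetical mapping: which new node receives which old node's EBS volume.
volume_mapping = {"10.0.2.21": "10.0.1.11", "10.0.2.22": "10.0.1.12"}

def initial_token_line(new_node):
    """Render the cassandra.yaml `initial_token` entry for a new node."""
    tokens = old_node_tokens[volume_mapping[new_node]]
    return "initial_token: " + ",".join(str(t) for t in tokens)

for node in sorted(volume_mapping):
    print(node, "->", initial_token_line(node))
```

Each rendered line goes into that node's cassandra.yaml before first start,
so the node claims exactly the ranges the data on its volume belongs to.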

I'm answering from memory and might be missing a step or two, but I have
done this successfully before, and these are the distinct solutions I
used :).

Hope any of these helps,
C*heers,
---
Alain Rodriguez - al...@thelastpickle.com
France / Spain

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com


On Fri, Jun 28, 2019 at 09:02, Oleksandr Shulgin <
oleksandr.shul...@zalando.de> wrote:

> On Fri, Jun 28, 2019 at 8:37 AM Ayub M  wrote:
>
>> Hello, I have a cluster with 3 nodes - say cluster1 - on AWS EC2
>> instances. The cluster is up and running, and I took a snapshot of the
>> keyspace volumes.
>>
>> Now I want to restore a few tables/keyspaces from the snapshot volumes,
>> so I created another cluster, say cluster2, and attached the snapshot
>> volumes to the new cluster's EC2 nodes. Cluster2 is not starting because
>> the system keyspace in the snapshot has the cluster name cluster1, while
>> the cluster it is being restored onto is cluster2. How do I do a restore
>> in this case? I do not want to make any modifications to the existing
>> cluster.
>>
>
> Hi,
>
> I would try to use the same cluster name just to be able to restore it,
> and ensure that the nodes of cluster2 cannot talk to cluster1 by means of
> Security Groups, for example.
>
>> Also, when I do the restore, do I need to think about the token-range
>> mapping between the old and new clusters?
>>
>
> Absolutely.  For a successful restore you must ensure that you restore a
> snapshot from a rack (Availability Zone, if you're using the EC2Snitch)
> into the same rack in the new cluster.
>
> Regards,
> --
> Alex
>
>


Re: Restore from EBS onto different cluster

2019-06-28 Thread Oleksandr Shulgin
On Fri, Jun 28, 2019 at 8:37 AM Ayub M  wrote:

> Hello, I have a cluster with 3 nodes - say cluster1 - on AWS EC2
> instances. The cluster is up and running, and I took a snapshot of the
> keyspace volumes.
>
> Now I want to restore a few tables/keyspaces from the snapshot volumes, so
> I created another cluster, say cluster2, and attached the snapshot volumes
> to the new cluster's EC2 nodes. Cluster2 is not starting because the system
> keyspace in the snapshot has the cluster name cluster1, while the cluster
> it is being restored onto is cluster2. How do I do a restore in this case?
> I do not want to make any modifications to the existing cluster.
>

Hi,

I would try to use the same cluster name just to be able to restore it, and
ensure that the nodes of cluster2 cannot talk to cluster1 by means of
Security Groups, for example.

> Also, when I do the restore, do I need to think about the token-range
> mapping between the old and new clusters?
>

Absolutely.  For a successful restore you must ensure that you restore a
snapshot from a rack (Availability Zone, if you're using the EC2Snitch) into
the same rack in the new cluster.
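For instance, a volume-to-node mapping can be sanity-checked for this
rack-preserving property like so (node names and Availability Zones here
are hypothetical):

```python
# Hypothetical rack (Availability Zone) labels for both clusters.
old_rack = {"old-a": "eu-west-1a", "old-b": "eu-west-1b", "old-c": "eu-west-1c"}
new_rack = {"new-a": "eu-west-1a", "new-b": "eu-west-1b", "new-c": "eu-west-1c"}

def rack_preserving(mapping):
    """True if every snapshot lands in the same rack it was taken from."""
    return all(old_rack[old] == new_rack[new] for new, old in mapping.items())

good = {"new-a": "old-a", "new-b": "old-b", "new-c": "old-c"}
bad = {"new-a": "old-b", "new-b": "old-a", "new-c": "old-c"}
print(rack_preserving(good), rack_preserving(bad))  # True False
```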

Regards,
--
Alex


Re: Restore from EBS onto different cluster

2019-06-28 Thread Rhys Campbell
Sstableloader is probably your best option

Ayub M  wrote on Fri., Jun 28, 2019, 08:37:

> Hello, I have a cluster with 3 nodes - say cluster1 - on AWS EC2
> instances. The cluster is up and running, and I took a snapshot of the
> keyspace volumes.
>
> Now I want to restore a few tables/keyspaces from the snapshot volumes, so
> I created another cluster, say cluster2, and attached the snapshot volumes
> to the new cluster's EC2 nodes. Cluster2 is not starting because the system
> keyspace in the snapshot has the cluster name cluster1, while the cluster
> it is being restored onto is cluster2. How do I do a restore in this case?
> I do not want to make any modifications to the existing cluster.
>
> Also, when I do the restore, do I need to think about the token-range
> mapping between the old and new clusters?
>
> Regards,
> Ayub
>


Restore from EBS onto different cluster

2019-06-28 Thread Ayub M
Hello, I have a cluster with 3 nodes - say cluster1 - on AWS EC2 instances.
The cluster is up and running, and I took a snapshot of the keyspace
volumes.

Now I want to restore a few tables/keyspaces from the snapshot volumes, so
I created another cluster, say cluster2, and attached the snapshot volumes
to the new cluster's EC2 nodes. Cluster2 is not starting because the system
keyspace in the snapshot has the cluster name cluster1, while the cluster
it is being restored onto is cluster2. How do I do a restore in this case?
I do not want to make any modifications to the existing cluster.

Also, when I do the restore, do I need to think about the token-range
mapping between the old and new clusters?

Regards,
Ayub