Unsubscribe

2021-05-06 Thread vishal kharjul
Hello,

I need help unsubscribing from this mailing list. I tried sending an email
to user-unsubscr...@cassandra.apache.com and it didn't work. Please advise.

Thanks and Regards,
Vishal


Re: What is the way to scale down Cassandra/Kubernetes cluster from 3 to 1 nodes using cass-operator

2020-07-07 Thread vishal kharjul
Thanks John. I wasn't aware of that.

@Manu,

As John said, it's listed in the operator's limitations at the link below:

https://docs.datastax.com/en/cass-operator/doc/cass-operator/cassOperatorReleaseNotes.html



On Tue, Jul 7, 2020, 10:48 AM John Sanda  wrote:

> Cass Operator currently does not support scaling down.
>
> Thanks
>
> John
>
> On Thu, Jul 2, 2020 at 1:02 PM Manu Chadha 
> wrote:
>
>> Hi
>>
>>
>>
>> I changed the file and applied it, but the new configuration hasn’t been
>> applied.
>>
>>
>>
>>
>>
>> metadata:
>>   name: dc1
>> spec:
>>   clusterName: cluster1
>>   serverType: cassandra
>>   serverVersion: "3.11.6"
>>   managementApiAuth:
>>     insecure: {}
>>   size: 1          # <-- made change here
>>   storageConfig:
>>     ...
>>
>>
>>
>> kubectl apply -n cass-operator -f ./cass-dc-2-nodes.yaml
>>
>>
>>
>> manuchadha25@cloudshell:~ (copper-frame-262317)$ kubectl get all -n cass-operator
>>
>> NAME                                 READY   STATUS    RESTARTS   AGE
>> pod/cass-operator-5f8cdf99fc-9c5g4   1/1     Running   0          2d20h
>> pod/cluster1-dc1-default-sts-0       2/2     Running   0          2d20h
>> pod/cluster1-dc1-default-sts-1       2/2     Running   0          9h
>> pod/cluster1-dc1-default-sts-2       2/2     Running   0          9h
>>
>> NAME                                          TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)             AGE
>> service/cass-operator-metrics                 ClusterIP      10.51.243.147   <none>          8383/TCP,8686/TCP   2d20h
>> service/cassandra-loadbalancer                LoadBalancer   10.51.240.24    34.91.214.233   9042:30870/TCP      2d
>> service/cassandradatacenter-webhook-service   ClusterIP      10.51.243.86    <none>          443/TCP             2d20h
>> service/cluster1-dc1-all-pods-service         ClusterIP      None            <none>          <none>              2d20h
>> service/cluster1-dc1-service                  ClusterIP      None            <none>          9042/TCP,8080/TCP   2d20h
>> service/cluster1-seed-service                 ClusterIP      None            <none>          <none>              2d20h
>>
>> NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
>> deployment.apps/cass-operator   1/1     1            1           2d20h
>>
>> NAME                                       DESIRED   CURRENT   READY   AGE
>> replicaset.apps/cass-operator-5f8cdf99fc   1         1         1       2d20h
>>
>> NAME                                        READY   AGE
>> statefulset.apps/cluster1-dc1-default-sts   3/3     2d20h   <-- still 3/3
>>
>> manuchadha25@cloudshell:~ (copper-frame-262317)$
>>
>>
>>
>> thanks
>>
>> Manu
>>
>>
>>
>>
>> *From: *vishal kharjul 
>> *Sent: *02 July 2020 12:46
>> *To: *user@cassandra.apache.org
>> *Subject: *Re: What is the way to scale down Cassandra/Kubernetes
>> cluster from 3 to 1 nodes using cass-operator
>>
>>
>>
>> Hello Manu,
>>
>>
>>
>> I tried scaling up, and it only needed a size-parameter change, so try the
>> same for scaling down: just change the size parameter of the
>> CassandraDatacenter CRD and apply it again. It is basically the same step
>> you took to spin up the 3 nodes, with only the size parameter changed. The
>> operator will bring down Cassandra nodes accordingly. No need to shut down
>> or restart.
>>
>>
>>
>> Thanks and Regards,
>>
>> Vishal
>>
>> On Thu, Jul 2, 2020, 3:41 AM Oleksandr Shulgin <
>> oleksandr.shul...@zalando.de> wrote:
>>
>> On Thu, Jul 2, 2020 at 9:29 AM Manu Chadha 
>> wrote:
>>
>> Thanks Alex. Will give this a try. So I just change the yaml file and
>> hot-patch it or would I need to stop the cluster, delete it and make a new
>> one?
>>
>>
>>
>> I've no experience with this specific operator, but I expect that editing
>> the file and applying it using kubectl is the way to go, especially if you
>> don't want to lose your data.
>>
>>
>>
>> --
>>
>> Alex
>>
>>
>>
>>
>>
>
>
> --
>
> - John
>


Re: Safely enabling internode TLS encryption on live cassandra cluster

2020-07-06 Thread vishal kharjul
Agreed. We were planning the same change and tested multiple scenarios,
concluding that it needs downtime to be on the safe side. With the right
automation in place the implementation can be made faster, but not without
downtime, at least in our case.
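For context, the 3.11 settings under discussion live in cassandra.yaml. A minimal sketch follows; the keystore paths and passwords are placeholders, not values from this thread:

```yaml
# cassandra.yaml (3.11.x) internode TLS sketch; paths and passwords are placeholders
server_encryption_options:
    internode_encryption: all        # one of: none, all, dc, rack ("optional" is 4.0+)
    keystore: /etc/cassandra/conf/keystore.jks
    keystore_password: changeit
    truststore: /etc/cassandra/conf/truststore.jks
    truststore_password: changeit
    protocol: TLS
    require_client_auth: false
```

Each node needs a restart to pick up this change, and a node with `internode_encryption: all` cannot talk to one still on `none`, which is why a rolling change on 3.11 leaves the cluster partitioned for some window.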


On Mon, Jul 6, 2020, 1:26 PM Durity, Sean R 
wrote:

> I plan downtime for changes to security settings like this. I could not
> come up with a way to avoid degraded access, inconsistent data, or
> something else bad. The foundational issue is that unencrypted nodes cannot
> communicate with encrypted ones.
>
>
>
> I depend on Cassandra’s high availability for many things, but I always
> caution my teams that security-related changes will usually require an
> outage. When I can have an outage window, this kind of change is very quick.
>
>
>
> Sean Durity
>
>
>
> *From:* Egan Neuhengen 
> *Sent:* Monday, July 6, 2020 12:50 PM
> *To:* user@cassandra.apache.org
> *Subject:* [EXTERNAL] Safely enabling internode TLS encryption on live
> cassandra cluster
>
>
>
> Hello,
>
>
>
> We are trying to come up with a safe way to turn on internode (NOT
> client-server) TLS encryption on a cassandra cluster with two datacenters,
> anywhere from 3 to 20 nodes in each DC, 3+ racks in each DC. Cassandra
> version is 3.11.6, OS is CentOS 7. We have full control over cassandra
> configuration and operation, and a decent amount of control over client
> driver configuration. We're looking for a way to enable internode TLS with
> no period of time in which clients cannot connect to the cluster or clients
> can connect but receive inconsistent or incorrect data results.
>
>
>
> Our understanding is that in 3.11, cassandra internode TLS encryption
> configuration (server_encryption_options::internode_encryption) can be set
> to none, all, dc, or rack, and "none" means the node will only send and
> receive unencrypted data, any other involves varying scope of only sending
> and receiving encrypted data; an "optional" setting only appears in the
> unreleased 4.0. The problem we run into is that no matter which scope we
> use, we end up with a period of time in which two different parts of the
> cluster won't be able to talk to each other, and so clients might get
> different answers depending on which part they talk to. In this scenario,
> clients can be shifted to talk to only one DC for a limited time, but
> cannot transition directly from only communicating with one DC to only
> communicating to the other; some period of time must be spent communicating
> to both, however small, between those two states.
>
>
>
> Is there a way to do this while avoiding downtime and wrong-answer
> problems?
>


Re: Question on cass-operator

2020-07-03 Thread vishal kharjul
Hello Manu,

It's actually a Kubernetes question, not a Cassandra one. AFAIK, READY = 2/2
reflects the status of the individual containers in each pod: the pod
consists of two containers, and both are ready. Try "kubectl describe" on
each pod and you will see the container spec. I would also recommend the
getting-started Kubernetes tutorial on kubernetes.io to refresh Kubernetes
concepts.
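A quick way to check this, using the pod names from the thread below (assumes kubectl is pointed at the right cluster; the exact container names may differ by operator version):

```shell
# Show full pod details, including the list of containers and their states
kubectl describe pod cluster1-dc1-default-sts-0 -n cass-operator

# Or print just the container names with jsonpath
kubectl get pod cluster1-dc1-default-sts-0 -n cass-operator \
  -o jsonpath='{.spec.containers[*].name}'
```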


On Fri, Jul 3, 2020, 5:16 AM Manu Chadha  wrote:

> Hi
>
>
>
> I have a 3 node Kubernetes cluster and I have set up Cassandra on it using
> Cass-Operator.
>
>
>
> What does the 2/2 mean in the output of the following command
>
>
>
> kubectl get all -n cass-operator
>
> NAMEREADY   STATUSRESTARTS   AGE
>
> pod/cass-operator-78c6469c6-6qhsb   1/1 Running   0  139m
>
> pod/cluster1-dc1-default-sts-0  2/2 Running   0  138m
>
> pod/cluster1-dc1-default-sts-1  2/2 Running   0  138m
>
> pod/cluster1-dc1-default-sts-2  2/2 Running   0  138m
>
>
>
> Does it mean that there are 3 data centres each running 2 cassandra nodes?
> It should be because my K8S cluster has only 3 nodes.
>
>
>
> manuchadha25@cloudshell:~ (copper-frame-262317)$ gcloud compute instances list
>
> NAME                                              ZONE            MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
> gke-cassandra-cluster-default-pool-92d544da-6fq8  europe-west4-a  n1-standard-1               10.164.0.26  34.91.214.233  RUNNING
> gke-cassandra-cluster-default-pool-92d544da-g0b5  europe-west4-a  n1-standard-1               10.164.0.25  34.91.101.218  RUNNING
> gke-cassandra-cluster-default-pool-92d544da-l87v  europe-west4-a  n1-standard-1               10.164.0.27  34.91.86.10    RUNNING
>
>
>
> Or is Cassandra-operator running two containers per K8S Node?
>
>
>
> thanks
>
> Manu
>
>
>


Re: What is the way to scale down Cassandra/Kubernetes cluster from 3 to 1 nodes using cass-operator

2020-07-02 Thread vishal kharjul
Hello Manu,

I tried scaling up, and it only needed a size-parameter change, so try the
same for scaling down: just change the size parameter of the
CassandraDatacenter CRD and apply it again. It is basically the same step
you took to spin up the 3 nodes, with only the size parameter changed. The
operator will bring down Cassandra nodes accordingly. No need to shut down
or restart.

Thanks and Regards,
Vishal
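Concretely, the change described above is a one-field edit to the manifest. A sketch follows; the apiVersion and names are what cass-operator used at the time, so treat them as assumptions:

```yaml
# cass-dc.yaml: only `size` changes; the operator reconciles the StatefulSet toward it
apiVersion: cassandra.datastax.com/v1beta1
kind: CassandraDatacenter
metadata:
  name: dc1
spec:
  clusterName: cluster1
  serverType: cassandra
  serverVersion: "3.11.6"
  size: 1          # was 3
```

Re-apply it with `kubectl apply -n cass-operator -f ./cass-dc.yaml`. Note, though, that the release notes linked elsewhere in this archive list scale-down as unsupported in Cass Operator at the time, which is why the StatefulSet in this thread stayed at 3/3.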

On Thu, Jul 2, 2020, 3:41 AM Oleksandr Shulgin 
wrote:

> On Thu, Jul 2, 2020 at 9:29 AM Manu Chadha 
> wrote:
>
>> Thanks Alex. Will give this a try. So I just change the yaml file and
>> hot-patch it or would I need to stop the cluster, delete it and make a new
>> one?
>>
>
> I've no experience with this specific operator, but I expect that editing
> the file and applying it using kubectl is the way to go, especially if you
> don't want to lose your data.
>
> --
> Alex
>
>