How to change a write's and a read's consistency level separately in cqlsh?

2019-06-28 Thread Nimbus Lin
To Sir Oleksandr: Thank you very much for your careful teaching. At the beginning, I copied the system_auth keyspace and tables' SQL grammar and misunderstood the HA function of Cassandra; now I understand Cassandra's HA, like Hadoop's or Greenplum's. And I will check the 3rd answer in JConsole
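On the original question: cqlsh keeps a single session-level consistency setting rather than separate read and write levels, so the usual approach is to switch it between statements; a minimal sketch (keyspace and table names are hypothetical):

    cqlsh> CONSISTENCY QUORUM;
    cqlsh> INSERT INTO demo.users (id, name) VALUES (1, 'a');
    cqlsh> CONSISTENCY ONE;
    cqlsh> SELECT * FROM demo.users WHERE id = 1;

Truly per-statement consistency levels are a driver feature rather than a cqlsh one.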

Re: How can I check cassandra cluster has a real working function of high availability?

2019-06-28 Thread Nimbus Lin
To Sir Oleksandr: Thank you! Sincerely, Nimbuslin (Lin JiaXin) Mobile: 0086 180 5986 1565 Mail: jiaxin...@live.com From: Oleksandr Shulgin Sent: Monday, June 17, 2019 7:19 AM To: User Subject: Re: How can I check cassandra cluster has a real working

Re: Running Node Repair After Changing RF or Replication Strategy for a Keyspace

2019-06-28 Thread Jeff Jirsa
If you’re at RF=3 and read/write at quorum, you’ll have full visibility of all data if you switch to RF=4 and continue reading at quorum, because quorum of 4 is 3, so you’re guaranteed to overlap with at least one of the two nodes that got all earlier writes. Going from 3 to 4 to 5 requires a
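To spell out the arithmetic:

    quorum(RF) = floor(RF/2) + 1, so quorum(3) = 2, quorum(4) = 3, quorum(5) = 3.
    Earlier writes at RF=3 quorum reached 2 replicas; a quorum read at RF=4 touches
    3 of the 4 replicas, and 3 + 2 = 5 > 4, so at least one replica must overlap.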

Re: Running Node Repair After Changing RF or Replication Strategy for a Keyspace

2019-06-28 Thread Oleksandr Shulgin
On Fri, Jun 28, 2019 at 11:29 PM Jeff Jirsa wrote: > you often have to run repair after each increment - going from 3 -> 5 > means 3 -> 4, repair, 4 -> 5 - just going 3 -> 5 will violate consistency > guarantees, and is technically unsafe. > Jeff, how is going from 3 -> 4 *not violating*

Re: Securing cluster communication

2019-06-28 Thread Oleksandr Shulgin
On Fri, Jun 28, 2019 at 3:57 PM Marc Richter wrote: > > How is this dealt with in Cassandra? Is setting up firewalls the only > way to allow only some nodes to connect to the ports 7000/7001? > Hi, You can set server_encryption_options: internode_encryption: all ... and distribute
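A minimal sketch of that cassandra.yaml fragment (keystore/truststore paths and passwords are placeholders):

    server_encryption_options:
        internode_encryption: all
        keystore: /path/to/server-keystore.jks
        keystore_password: <keystore password>
        truststore: /path/to/server-truststore.jks
        truststore_password: <truststore password>

With internode_encryption: all, inter-node traffic moves to the encrypted port (7001 by default), and only nodes presenting a certificate the truststore trusts can participate.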

Re: Running Node Repair After Changing RF or Replication Strategy for a Keyspace

2019-06-28 Thread Jon Haddad
Yep - not to mention the increased complexity and overhead of going from ONE to QUORUM, or the increased cost of QUORUM at RF=5 vs RF=3. If you're on a cloud provider, I've found you're almost always better off adding a new DC with a higher RF, assuming you're on NTS like Jeff mentioned. On Fri,
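For illustration, NetworkTopologyStrategy takes a per-DC RF, so a new, higher-RF DC can sit alongside the old one (keyspace and DC names are hypothetical):

    ALTER KEYSPACE my_ks WITH replication = {
        'class': 'NetworkTopologyStrategy',
        'old_dc': 3,
        'new_dc': 5
    };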

Re: Running Node Repair After Changing RF or Replication Strategy for a Keyspace

2019-06-28 Thread Jeff Jirsa
For just changing RF: You only need to repair the full token range - how you do that is up to you. Running `repair -pr -full` on each node will do that. Running `repair -full` will do it multiple times, so it's more work, but technically correct. The caveat that few people actually appreciate
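In concrete terms (the keyspace name is a placeholder):

    # on every node: repair only that node's primary ranges, full (non-incremental)
    nodetool repair -pr -full my_keyspace

    # on every node without -pr: each range is repaired once per replica, i.e. RF times
    nodetool repair -full my_keyspace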

RE: [EXTERNAL] Re: Bursts of Thrift threads make cluster unresponsive

2019-06-28 Thread Durity, Sean R
This sounds like a bad query or large partition. If a large partition is requested on multiple nodes (because of consistency level), it will pressure all those replica nodes. Then, as the cluster tries to adjust the rest of the load, the other nodes can get overwhelmed, too. Look at cfstats to
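One way to hunt for oversized partitions (keyspace and table names are placeholders):

    # per-table stats; check 'Compacted partition maximum bytes'
    nodetool cfstats my_keyspace.my_table

    # partition size and cell count histograms for one table
    nodetool cfhistograms my_keyspace my_table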

Running Node Repair After Changing RF or Replication Strategy for a Keyspace

2019-06-28 Thread Fd Habash
Hi all … The DataStax & Apache docs are clear: run ‘nodetool repair’ after you alter a keyspace to change its RF or replication strategy. However, the details are all over the place as to what type of repair to run and on what nodes it needs to run. None of the above doc authorities are clear, and what you find on the
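The sequence under discussion looks roughly like this (keyspace, strategy, and RF are illustrative):

    cqlsh> ALTER KEYSPACE my_ks WITH replication =
             {'class': 'NetworkTopologyStrategy', 'dc1': 5};
    $ nodetool repair ...   # which options, and on which nodes, is the question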

Re: Securing cluster communication

2019-06-28 Thread Hannu Kröger
I would start by checking this page: http://cassandra.apache.org/doc/latest/operating/security.html Then move on to this: https://thelastpickle.com/blog/2015/09/30/hardening-cassandra-step-by-step-part-1-server-to-server.html Cheers, Hannu > Marc Richter wrote on 28.6.2019 at 16.55: > > Hi

Securing cluster communication

2019-06-28 Thread Marc Richter
Hi everyone, I'm completely new to Cassandra DB, so please do not roast me for asking obvious stuff. I managed to set up one Cassandra node and enter some data into it, successfully. Next, I installed a second node, which connects to that first one via port 7000 and syncs all that data from it.

Re: Restore from EBS onto different cluster

2019-06-28 Thread Oleksandr Shulgin
On Fri, Jun 28, 2019 at 8:37 AM Ayub M wrote: > Hello, I have a cluster with 3 nodes - say cluster1 on AWS EC2 instances. > The cluster is up and running; I took a snapshot of the keyspaces volume. > > Now I want to restore a few tables/keyspaces from the snapshot volumes, so I > created another

Re: Restore from EBS onto different cluster

2019-06-28 Thread Rhys Campbell
Sstableloader is probably your best option. Ayub M wrote on Fri., 28 June 2019, 08:37: > Hello, I have a cluster with 3 nodes - say cluster1 on AWS EC2 instances. > The cluster is up and running; I took a snapshot of the keyspaces volume. > > Now I want to restore a few tables/keyspaces from the
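A typical invocation, pointed at a table directory from the snapshot volume (hosts and path are placeholders):

    sstableloader -d 10.0.0.1,10.0.0.2 /mnt/restore/data/my_keyspace/my_table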

Restore from EBS onto different cluster

2019-06-28 Thread Ayub M
Hello, I have a cluster with 3 nodes - say cluster1 on AWS EC2 instances. The cluster is up and running; I took a snapshot of the keyspaces volume. Now I want to restore a few tables/keyspaces from the snapshot volumes, so I created another cluster, say cluster2, and attached the snapshot volumes on to

Re: Ec2MultiRegionSnitch difficulties (3.11.2)

2019-06-28 Thread Oleksandr Shulgin
On Fri, Jun 28, 2019 at 3:14 AM Voytek Jarnot wrote: > Curious if anyone could shed some light on this. Trying to set up a > 4-node, one-DC (for now: same region, same AZ, same VPC, etc.) cluster in > AWS. > > All nodes have the following config (everything else basically standard): >
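For reference, the settings that usually matter for Ec2MultiRegionSnitch (all values are placeholders):

    # cassandra.yaml (fragment)
    endpoint_snitch: Ec2MultiRegionSnitch
    listen_address: <node's private IP>
    broadcast_address: <node's public IP>
    # seed entries (under seed_provider parameters) should use public IPs:
    #   - seeds: "<public IP of seed 1>,<public IP of seed 2>"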