Re: Bootstrapping to Replace a Dead Node vs. Adding a New Node: Consistency Guarantees

2019-04-30 Thread Alok Dwivedi
When a new node joins the ring, it needs to own new token ranges. These should be unique to the new node (and ideally evenly distributed); we don't want to end up in a situation where two nodes joining simultaneously own the same range. Cassandra has this 2-minute wait rule for gossip state

Bootstrapping to Replace a Dead Node vs. Adding a New Node: Consistency Guarantees

2019-04-30 Thread Fd Habash
Reviewing the documentation, and based on my testing using C* 2.2.8, I was not able to extend the cluster by adding multiple nodes simultaneously. I got an error message: "Other bootstrapping/leaving/moving nodes detected, cannot bootstrap while cassandra.consistent.rangemovement is true". I
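
By default only one node may bootstrap at a time. A minimal sketch of the workaround the error message hints at (the startup property is real; whether to disable it depends on whether you can tolerate losing the consistency guarantee the check enforces):

```shell
# Append to the JVM options Cassandra starts with (e.g. in
# cassandra-env.sh). Disabling consistent range movement lets several
# nodes bootstrap at once, at the cost of the guarantee it provides.
JVM_OPTS="$JVM_OPTS -Dcassandra.consistent.rangemovement=false"
echo "$JVM_OPTS"
```

With the property left at its default of true, bootstrap one node, wait for it to reach UN in nodetool status, then start the next.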

Re: Increasing the size limits implications

2019-04-30 Thread Bobbie Haynes
We have a requirement to store blob data.

TWCS sstables not dropping even though all data is expired

2019-04-30 Thread Mike Torra
Hello - I have a 48 node C* cluster spread across 4 AWS regions with RF=3. A few months ago I started noticing disk usage on some nodes increasing consistently. At first I solved the problem by destroying the nodes and rebuilding them, but the problem returns. I did some more investigation
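
For context, TWCS groups sstables into fixed time windows and can only drop a window's sstable once every cell in it has expired and no newer sstable overlaps it. A rough sketch of the bucketing, with an illustrative 1-day window:

```shell
# Which 1-day TWCS window does a write at this epoch-seconds timestamp
# fall into? (Window size here is illustrative; use your table's
# compaction_window_unit/compaction_window_size.)
ts=1556614800               # a write made on 2019-04-30 (UTC)
window=$((24 * 60 * 60))    # 1 day in seconds
echo "window bucket: $(( ts / window ))"
```

When fully expired windows linger on disk, the usual blocker is an overlapping newer sstable; the bundled sstableexpiredblockers tool can identify it, and sstablemetadata shows each sstable's min/max timestamps.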

Re: Increasing the size limits implications

2019-04-30 Thread Jon Haddad
Just curious - why are you using such large batches? Most of the time when someone asks this question, it's because they're using batches as they would in an RDBMS, because larger transactions improve performance. That doesn't apply with Cassandra. Batches are OK at keeping multiple tables in
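
A sketch of the legitimate use being alluded to, with hypothetical keyspace and table names: a small logged batch keeps two denormalized views of the same data atomic, which is different from batching for throughput. The block only builds and prints the statement:

```shell
# Build the statement, then run it against the cluster with:
#   cqlsh -e "$cql"
cql=$(cat <<'CQL'
BEGIN BATCH
  INSERT INTO app.users_by_id (id, email) VALUES (1, 'a@example.com');
  INSERT INTO app.users_by_email (email, id) VALUES ('a@example.com', 1);
APPLY BATCH;
CQL
)
printf '%s\n' "$cql"
```

Large multi-partition batches, by contrast, concentrate load on the coordinator and run into the batch size thresholds configured in cassandra.yaml.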

Re: Decommissioning a new node when the state is JOINING

2019-04-30 Thread Akshay Bhardwaj
Thank you for the prompt replies. The solutions worked!

RE: Decommissioning a new node when the state is JOINING

2019-04-30 Thread ZAIDI, ASAD A
Just stop the server/kill the C* process; as the node never fully joined the cluster yet, that should be enough. You can safely remove the data (i.e. what was streamed in on the new node) so you can use the node for the other new cluster.

Re: Decommissioning a new node when the state is JOINING

2019-04-30 Thread shalom sagges
I would just stop the service of the joining node and then delete the data, commit logs and saved caches. After stopping the node while joining, the cluster will remove it from the list (i.e. nodetool status) without the need to decommission.
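
The cleanup described above can be sketched as follows, assuming the packaged-install default paths; check data_file_directories, commitlog_directory and saved_caches_directory in your cassandra.yaml before running anything like this for real. The block only prints the plan:

```shell
data_root=/var/lib/cassandra   # adjust to match your cassandra.yaml
plan="systemctl stop cassandra
rm -rf $data_root/data/* $data_root/commitlog/* $data_root/saved_caches/*"
printf '%s\n' "$plan"          # review, then run each step manually
```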

Decommissioning a new node when the state is JOINING

2019-04-30 Thread Akshay Bhardwaj
Hi Experts, I have a Cassandra cluster running with 5 nodes. For some reason, I was creating a new Cassandra cluster, but one of the nodes intended for the new cluster had the same cassandra.yaml file as the existing cluster. This resulted in the new node joining the existing cluster, making the total no.

Re: Backup Restore

2019-04-30 Thread Ivan Junckes Filho
Thanks guys! On Fri, Apr 26, 2019 at 1:17 PM Alain RODRIGUEZ wrote: > Hello Ivan, > "Is there a way I can do one command to backup and one to restore a backup?" > Handling backups and restores automatically is not an easy task to work on. It's not straightforward, but it's doable and

Re: different query result after a rerun of the same query

2019-04-30 Thread Ben Slater
If you have successfully run a repair between the initial insert and running the first select, then that should have ensured that all replicas are there. Are you sure your repairs are completing successfully? To check whether all replicas are being written during the periods of high load, you can
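
One way to check, sketched here with a hypothetical keyspace, table and key: re-read the suspect partition at CONSISTENCY ALL, which forces the coordinator to compare every replica and repair any mismatch it finds. The block just prints the command to run:

```shell
read_cmd='CONSISTENCY ALL; SELECT * FROM myks.mytable WHERE id = 42;'
printf 'cqlsh -e "%s"\n' "$read_cmd"
```

A full repair (nodetool repair --full &lt;keyspace&gt;) followed by a re-check is the heavier but more thorough version of the same test.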

Re: different query result after a rerun of the same query

2019-04-30 Thread Marco Gasparini
> My guess is the initial query was causing a read repair, so on subsequent queries there were replicas of the data on every node and it still got returned at consistency ONE. Got it. > There are a number of ways the data could have become inconsistent in the first place - e.g. badly overloaded or

Re: cassandra node was put down with oom error

2019-04-30 Thread yeomii999
Hello, I'm suffering from a similar problem with OSS Cassandra version 3.11.3. My Cassandra cluster has been running for more than a year and there was no problem until this year. The cluster is write-intensive, consists of 70 nodes, and all rows have a 2 hr TTL. The only change is the read