Re: Phantom growth resulting in automatic node shutdown

2018-04-23 Thread Fernando Neves
Thank you, all! We plan to upgrade our cluster to the latest 3.11.x version. 2018-04-20 7:09 GMT+08:00 kurt greaves: > This was fixed (again) in 3.0.15. > https://issues.apache.org/jira/browse/CASSANDRA-13738 > > On Fri., 20 Apr. 2018, 00:53 Jeff Jirsa,
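
A quick way to confirm which release each node is actually running before and after the rolling upgrade (a minimal sketch; the hostnames and the ssh loop are assumptions, not from the thread):

    # Report the release version on the local node
    $ nodetool version
    ReleaseVersion: 3.0.14

    # Hypothetical loop over the cluster (hostnames are placeholders)
    $ for h in node01 node02 node03; do echo -n "$h: "; ssh "$h" nodetool version; done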

Re: Phantom growth resulting in automatic node shutdown

2018-04-19 Thread kurt greaves
This was fixed (again) in 3.0.15. https://issues.apache.org/jira/browse/CASSANDRA-13738 On Fri., 20 Apr. 2018, 00:53 Jeff Jirsa, wrote: > There have also been a few SSTable ref-counting bugs that would > over-report load in nodetool ring/status due to overlapping normal and >
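
One way to check whether a node shows the over-reporting symptom fixed by CASSANDRA-13738 (a sketch assuming the default data directory /var/lib/cassandra/data; adjust for your actual JBOD mount points):

    # Load as reported by Cassandra itself
    $ nodetool status

    # Actual bytes on disk for the same node
    $ du -sh /var/lib/cassandra/data

    # A large, growing gap between the two values points at phantom growth
    # in the reported load rather than real data growth.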

Re: Phantom growth resulting in automatic node shutdown

2018-04-19 Thread Jeff Jirsa
There have also been a few SSTable ref-counting bugs that would over-report load in nodetool ring/status due to overlapping normal and incremental repairs (which you should probably avoid doing anyway). -- Jeff Jirsa > On Apr 19, 2018, at 9:27 AM, Rahul Singh
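
If you want to stick to full repairs until after the upgrade, a hedged sketch (the keyspace name is a placeholder; incremental repair is the default in 3.x, so the -full flag is what avoids mixing the two):

    # Explicitly request a full, non-incremental repair of one keyspace
    $ nodetool repair -full my_keyspace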

Re: Phantom growth resulting in automatic node shutdown

2018-04-19 Thread Rahul Singh
I’ve seen something similar in 2.1. Our issue was related to file permissions being flipped by an automation, so C* stopped seeing the SSTables and started creating new data via read repair or repair processes. In your case, if nodetool is reporting the data, that means it’s growing due to
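
A sketch for ruling out the permissions scenario described above (it assumes Cassandra runs as the user and group "cassandra" and uses the default data path; both are assumptions):

    # List any data files not owned by the expected cassandra user/group
    $ find /var/lib/cassandra/data ! -user cassandra -o ! -group cassandra

    # Any files listed here have unexpected ownership and may be unreadable
    # to the running process, which could then rebuild the data via repair.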

Re: Phantom growth resulting in automatic node shutdown

2018-04-19 Thread horschi
Did you check the number of files in your data folder before & after the restart? I have seen cases where Cassandra would keep creating SSTables, which disappeared on restart. Regards, Christian On Thu, Apr 19, 2018 at 12:18 PM, Fernando Neves wrote: > I am facing
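
A simple way to capture the before/after comparison suggested above (default data path assumed; "-Data.db" is the suffix of the SSTable data component):

    # Count SSTable data files before the restart
    $ find /var/lib/cassandra/data -name '*-Data.db' | wc -l

    # Re-run the same command after the restart; a large drop would match
    # the "SSTables that disappear on restart" behaviour described above.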

Phantom growth resulting in automatic node shutdown

2018-04-19 Thread Fernando Neves
I am facing an issue with our Cassandra cluster. Details: Cassandra 3.0.14, 12 nodes, 7.4TB (JBOD) disk size on each node, ~3.5TB of used physical data per node, ~42TB across the whole cluster, and the default compaction setup. This size stays roughly the same because after the retention period some tables are
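
For reference, ~3.5TB per node across 12 nodes is roughly the quoted ~42TB cluster total. A sketch for watching physical usage per JBOD data directory against what Cassandra reports (the /data1, /data2, ... mount points are placeholders, not from the thread):

    # Physical usage per JBOD data directory (paths are placeholders)
    $ du -sh /data1/cassandra/data /data2/cassandra/data /data3/cassandra/data

    # Load reported by Cassandra for this node, for comparison
    $ nodetool status | grep "$(hostname -i)"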