Re: Read Latency Doubles After Shrinking Cluster and Never Recovers

2018-06-11 Thread Nicolas Guyomar
... a C* process reboot at least around 2.2.8. Is this true? Thank you ...

Re: Read Latency Doubles After Shrinking Cluster and Never Recovers

2018-06-11 Thread Jeff Jirsa
... 2.2.8. Is this true? Thank you ... I think it would, because Cassandra will process ...

RE: Read Latency Doubles After Shrinking Cluster and Never Recovers

2018-06-11 Thread Fd Habash
I think it would, because Cassandra will process more sstables to create a response to read queries. Now, after cleanup, if the data volume is the same and compaction has been running, I can't think of any more diagnostic steps ...
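Not from the thread itself, but a minimal diagnostic sketch for the "more sstables per read" theory, assuming a 2.2.x cluster; my_keyspace and my_table below are placeholder names. nodetool cfhistograms reports how many SSTables each read touched, so comparing it against a pre-shrink baseline shows whether reads really are hitting more SSTables:

  # Per-read SSTable count and latency percentiles (2.2.x; "tablehistograms" on 3.0+)
  nodetool cfhistograms my_keyspace my_table

  # Table-level read latency, SSTable count, and live cells scanned per slice
  nodetool cfstats my_keyspace.my_table

If the SSTables-per-read percentiles are higher than before the shrink, leftover data or a compaction backlog on the remaining nodes is a plausible explanation; if they are unchanged, the regression likely lies elsewhere.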

Re: Read Latency Doubles After Shrinking Cluster and Never Recovers

2018-06-11 Thread Nitan Kainth
... would it have impacted read latency, the fact that some nodes still have sstables that they no longer need? Thanks ...

RE: Read Latency Doubles After Shrinking Cluster and Never Recovers

2018-06-11 Thread Fd Habash
Did you run cleanup too? On Mon, Jun 11, 2018 at 10:16 AM, Fred Habash wrote: I have hit dead-ends everywhere I turned on this issue. We had a 15-node cluster ...

Re: Read Latency Doubles After Shrinking Cluster and Never Recovers

2018-06-11 Thread Nitan Kainth
Did you run cleanup too? On Mon, Jun 11, 2018 at 10:16 AM, Fred Habash wrote: I have hit dead-ends everywhere I turned on this issue. We had a 15-node cluster that was doing 35 ms all along for years. At some point, we made a decision to shrink it to 13. Read latency rose to near 70 ...
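For reference, a minimal sketch of the cleanup step being asked about, assuming a rolling, one-node-at-a-time run on the remaining nodes (keyspace name is a placeholder; cleanup rewrites SSTables, so it is I/O-heavy and usually run off-peak):

  # On each remaining node in turn, after the cluster was shrunk
  nodetool cleanup my_keyspace

  # Cleanup appears as a compaction task; watch its progress here
  nodetool compactionstats

This only helps if the remaining nodes actually hold SSTable data for token ranges they no longer own, which is the question raised in the thread.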