Re: Phantom growth resulting in automatic node shutdown

2018-04-23 Thread Fernando Neves
Thank you all!
We plan to upgrade our cluster to the latest 3.11.x version.



Re: Phantom growth resulting in automatic node shutdown

2018-04-19 Thread kurt greaves
This was fixed (again) in 3.0.15.
https://issues.apache.org/jira/browse/CASSANDRA-13738



Re: Phantom growth resulting in automatic node shutdown

2018-04-19 Thread Jeff Jirsa
There have also been a few SSTable ref-counting bugs that would over-report 
load in nodetool ring/status due to overlapping normal and incremental repairs 
(which you should probably avoid doing anyway)
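
If the immediate goal is simply to keep repairs from overlapping, one sketch is 
to serialize them behind a lock with flock(1) from util-linux; `-full` forces a 
full (non-incremental) repair in 3.x, and the lock path and keyspace name below 
are placeholders, not anything from this thread:

```shell
# run_exclusive: run a command only while holding the given lock file,
# so two repair invocations (e.g. overlapping cron jobs) can never run
# at the same time. Skips the run (-n = non-blocking) if the lock is held.
run_exclusive() { lock="$1"; shift; flock -n "$lock" "$@"; }

# Intended use (placeholder lock path and keyspace):
#   run_exclusive /var/lock/cassandra-repair.lock nodetool repair -full my_keyspace
# Demonstrated here with a harmless command:
run_exclusive /tmp/demo-repair.lock echo "repair would run here"
```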

-- 
Jeff Jirsa




Re: Phantom growth resulting in automatic node shutdown

2018-04-19 Thread Rahul Singh
I’ve seen something similar in 2.1. Our issue was related to file permissions 
being flipped by an automation, so C* stopped seeing SSTables and started 
making new data — via read repair or repair processes.

In your case, if nodetool is reporting data growth, that means the data itself 
is growing. What do your cfstats / tablestats say? Are you monitoring your key 
tables via cfstats metrics like SpaceUsedLive or SpaceUsedTotal? What is your 
snapshotting / backup process doing?

--
Rahul Singh
rahul.si...@anant.us

Anant Corporation
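
One way to watch those metrics from the command line is sketched below; the 
table name is a placeholder, and the exact "Space used" label format is an 
assumption based on Cassandra 3.x-era `nodetool tablestats` output:

```shell
# filter_space_used: keep only the space-accounting lines from
# `nodetool tablestats` output (label format assumed from Cassandra 3.x).
filter_space_used() { grep -E 'Space used \((live|total)\)'; }

# In production you would pipe real output through the filter, e.g.:
#   nodetool tablestats my_keyspace.my_table | filter_space_used
# Demonstrated here on a sample of 3.x-style output:
printf 'Table: my_table\nSpace used (live): 1234567\nSpace used (total): 2345678\nMemtable cell count: 42\n' | filter_space_used
```

Logging those two numbers on a schedule would show whether the growth nodetool 
reports is matched by SSTable accounting per table.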



Re: Phantom growth resulting in automatic node shutdown

2018-04-19 Thread horschi
Did you check the number of files in your data folder before & after the
restart?

I have seen cases where Cassandra would keep creating SSTables, which
disappeared on restart.

regards,
Christian
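
A quick way to take that before/after count; the default data path used in the 
comment is an assumption — adjust it to your data_file_directories setting:

```shell
# count_sstables: count SSTable Data components under a directory.
# Each live SSTable has exactly one *-Data.db file.
count_sstables() { find "$1" -name '*-Data.db' | wc -l; }

# Run before and after the restart and compare the two numbers, e.g.:
#   count_sstables /var/lib/cassandra/data
```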




Phantom growth resulting in automatic node shutdown

2018-04-19 Thread Fernando Neves
I am facing an issue with our Cassandra cluster.

Details: Cassandra 3.0.14, 12 nodes, 7.4 TB (JBOD) disk size on each node,
~3.5 TB of used physical data on each node, ~42 TB across the whole cluster,
and the default compaction setup. The total size stays roughly the same
because some tables are dropped after their retention period.

Issue: Nodetool status is not showing the correct used size in the output.
The reported size keeps increasing without limit until the node shuts down
automatically, or until our sequential scheduled restart (a workaround we run
3 times a week). After a restart, nodetool shows the correct used space, but
only for a few days.
Did anybody have a similar problem? Is it a bug?

Stackoverflow: https://stackoverflow.com/questions/49668692/cassandra-nodetool-status-is-not-showing-correct-used-space
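
For anyone tracking this down, a sketch for pulling the Load column out of
`nodetool status` so the drift can be logged over time and compared against
`du` on the data directory; the row format mirrors 3.0-era output (an
assumption), and the sample address is a placeholder:

```shell
# load_column: extract address and reported load from `nodetool status`
# per-node rows (which start with status codes UN/DN/UJ/UL/UM).
load_column() { awk '/^[UD][NLJM]/ {print $2, $3, $4}'; }

# In production:  nodetool status | load_column
# (then compare against: du -sh <your data directory>)
# Demonstrated here on a sample 3.0-style row:
printf 'UN  10.0.0.1  3.48 TB  256  8.3%%  0ab1  rack1\n' | load_column
```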