Re: Thrift to CQL migration under new Keyspace or Cluster

2018-06-26 Thread Fernando Neves
For now only on a local machine, but we will do it in a test environment.
Thanks.

2018-06-25 16:15 GMT+08:00 dinesh.jo...@yahoo.com.INVALID <
dinesh.jo...@yahoo.com.invalid>:

> If you're working in a different keyspace, I don't anticipate any issues.
> Have you attempted one in a test cluster? :)
>
> Dinesh
>
>
> On Friday, June 22, 2018, 1:26:56 AM PDT, Fernando Neves <
> fernando1ne...@gmail.com> wrote:
>
>
> Hi guys,
> We are running one of our Cassandra clusters on 2.0.17 with Thrift, and
> we have started the migration plan to CQL on 2.0.17 using the
> CQLSSTableWriter/sstableloader method.
>
> Simple question, in case someone has worked on a similar scenario: is
> there any problem with doing the migration on the same Cassandra instances
> (nodes) but in a different keyspace (ks_thrift to ks_cql), or should we
> create another 2.0.17 cluster for this work?
> I know the new keyspace will require more host resources, but it would be
> simpler for us, because once a table is migrated we will drop it from the
> old ks_thrift keyspace.
>
> Thanks,
> Fernando.
>


Thrift to CQL migration under new Keyspace or Cluster

2018-06-22 Thread Fernando Neves
Hi guys,
We are running one of our Cassandra clusters on 2.0.17 with Thrift, and we
have started the migration plan to CQL on 2.0.17 using the
CQLSSTableWriter/sstableloader method.
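For reference, a minimal sketch of the export step in that method, using the
2.0-era CQLSSTableWriter builder API. The ks_cql.users schema, the column
names, and the output path are made up for illustration, and the
withPartitioner call should match the cluster's actual partitioner:

import java.io.File;

import org.apache.cassandra.dht.Murmur3Partitioner;
import org.apache.cassandra.io.sstable.CQLSSTableWriter;

public class ThriftToCqlExport {
    public static void main(String[] args) throws Exception {
        // Hypothetical target table in the new CQL keyspace.
        String schema = "CREATE TABLE ks_cql.users "
                + "(user_id text PRIMARY KEY, name text, email text)";
        String insert = "INSERT INTO ks_cql.users (user_id, name, email) VALUES (?, ?, ?)";

        // sstableloader infers keyspace and table from the last two path components.
        File outputDir = new File("/tmp/export/ks_cql/users");
        outputDir.mkdirs();

        CQLSSTableWriter writer = CQLSSTableWriter.builder()
                .inDirectory(outputDir)
                .forTable(schema)
                .using(insert)
                .withPartitioner(new Murmur3Partitioner()) // needed on 2.0-era builds
                .build();

        // In the real migration these rows would come from reading ks_thrift;
        // one hard-coded row keeps the sketch self-contained.
        writer.addRow("42", "Fernando", "fernando@example.com");

        writer.close();
    }
}

The resulting directory can then be streamed into the cluster with something
like: sstableloader -d <node_address> /tmp/export/ks_cql/users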

Simple question, in case someone has worked on a similar scenario: is there
any problem with doing the migration on the same Cassandra instances
(nodes) but in a different keyspace (ks_thrift to ks_cql), or should we
create another 2.0.17 cluster for this work?
I know the new keyspace will require more host resources, but it would be
simpler for us, because once a table is migrated we will drop it from the
old ks_thrift keyspace.

Thanks,
Fernando.


Re: Phantom growth resulting in automatic node shutdown

2018-04-23 Thread Fernando Neves
Thank you all guys!
We plan to upgrade our cluster to the latest 3.11.x version.

2018-04-20 7:09 GMT+08:00 kurt greaves <k...@instaclustr.com>:

> This was fixed (again) in 3.0.15.
> https://issues.apache.org/jira/browse/CASSANDRA-13738
>
> On Fri., 20 Apr. 2018, 00:53 Jeff Jirsa, <jji...@gmail.com> wrote:
>
>> There have also been a few sstable ref counting bugs that would over
>> report load in nodetool ring/status due to overlapping normal and
>> incremental repairs (which you should probably avoid doing anyway)
>>
>> --
>> Jeff Jirsa
>>
>>
>> On Apr 19, 2018, at 9:27 AM, Rahul Singh <rahul.xavier.si...@gmail.com>
>> wrote:
>>
>> I’ve seen something similar in 2.1. Our issue was related to file
>> permissions being flipped by an automation; C* stopped seeing the
>> SSTables, so it started creating new data via read repair or repair
>> processes.
>>
>> In your case, if nodetool is reporting that data, it means the load is
>> growing due to actual data growth. What do your cfstats / tablestats say?
>> Are you monitoring your key tables' data via cfstats metrics like
>> SpaceUsedLive or SpaceUsedTotal? What is your snapshotting / backup
>> process doing?
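As a rough illustration of that kind of per-table disk-space check, here is a
sketch that reads the table metrics over JMX. The keyspace and table names are
placeholders, the default JMX port 7199 with no authentication is assumed, and
on 2.x the MBean type is ColumnFamily rather than Table:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class TableDiskSpaceCheck {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "localhost"; // node to poll
        String keyspace = "my_keyspace"; // placeholder names
        String table = "my_table";

        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":7199/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            for (String metric : new String[]{"LiveDiskSpaceUsed", "TotalDiskSpaceUsed"}) {
                ObjectName name = new ObjectName(String.format(
                        "org.apache.cassandra.metrics:type=Table,keyspace=%s,scope=%s,name=%s",
                        keyspace, table, metric));
                // Counter metrics expose their value through the "Count" attribute.
                Object bytes = mbs.getAttribute(name, "Count");
                System.out.printf("%s.%s %s = %s bytes%n", keyspace, table, metric, bytes);
            }
        }
    }
}

Polling these per table and graphing them next to the load reported by
nodetool status makes it easier to see whether the growth is real data or
misreported load.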
>>
>> --
>> Rahul Singh
>> rahul.si...@anant.us
>>
>> Anant Corporation
>>
>> On Apr 19, 2018, 7:01 AM -0500, horschi <hors...@gmail.com>, wrote:
>>
>> Did you check the number of files in your data folder before & after the
>> restart?
>>
>> I have seen cases where Cassandra would keep creating SSTables, which
>> then disappeared on restart.
>>
>> regards,
>> Christian
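A minimal sketch of the before/after file count Christian suggests, assuming
the default data directory /var/lib/cassandra/data (pass your own
data_file_directories path as the first argument):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class SSTableFileCount {
    public static void main(String[] args) throws IOException {
        // Assumed default location; override with the path from cassandra.yaml.
        Path dataDir = Paths.get(args.length > 0 ? args[0] : "/var/lib/cassandra/data");
        try (Stream<Path> files = Files.walk(dataDir)) {
            // Each live SSTable has exactly one -Data.db component, so this count
            // approximates the number of SSTables on disk. Snapshot and backup
            // copies are excluded.
            long sstables = files
                    .filter(p -> p.getFileName().toString().endsWith("-Data.db"))
                    .filter(p -> !p.toString().contains("/snapshots/")
                              && !p.toString().contains("/backups/"))
                    .count();
            System.out.println("SSTable Data.db files under " + dataDir + ": " + sstables);
        }
    }
}

Running it before and after a restart, and comparing against the load
nodetool reports, shows whether files are actually accumulating or only the
reported load is drifting.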
>>
>>
>> On Thu, Apr 19, 2018 at 12:18 PM, Fernando Neves <
>> fernando1ne...@gmail.com> wrote:
>>
>>> I am facing an issue with our Cassandra cluster.
>>>
>>> Details: Cassandra 3.0.14, 12 nodes, 7.4TB (JBOD) of disk per node,
>>> ~3.5TB of physical data used per node, ~42TB across the whole cluster,
>>> and the default compaction setup. This size stays roughly the same
>>> because some tables are dropped after the retention period.
>>>
>>> Issue: nodetool status is not showing the correct used size in its
>>> output. The reported load keeps increasing without limit until the node
>>> shuts down automatically, or until our sequential scheduled restart (a
>>> workaround we run 3 times a week). After a restart, nodetool shows the
>>> correct used space, but only for a few days.
>>> Did anybody have a similar problem? Is it a bug?
>>>
>>> Stackoverflow: https://stackoverflow.com/questions/49668692/cassandra-nodetool-status-is-not-showing-correct-used-space
>>>
>>>
>>


Phantom growth resulting in automatic node shutdown

2018-04-19 Thread Fernando Neves
I am facing an issue with our Cassandra cluster.

Details: Cassandra 3.0.14, 12 nodes, 7.4TB (JBOD) of disk per node, ~3.5TB
of physical data used per node, ~42TB across the whole cluster, and the
default compaction setup. This size stays roughly the same because some
tables are dropped after the retention period.

Issue: nodetool status is not showing the correct used size in its output.
The reported load keeps increasing without limit until the node shuts down
automatically, or until our sequential scheduled restart (a workaround we
run 3 times a week). After a restart, nodetool shows the correct used
space, but only for a few days.
Did anybody have a similar problem? Is it a bug?

Stackoverflow: https://stackoverflow.com/questions/49668692/cassandra-nodetool-status-is-not-showing-correct-used-space