Re: Upgrade From 2.0 to 2.1

2019-02-11 Thread shalom sagges
> Very soon. If not today, it will be up tomorrow. :)
Yayyy, just saw the release of 3.11.4.  :-)

> You'll need to go to v3 for 3.11. Congratulations on being aware enough to
> do this - advanced upgrade coordination, it's absolutely the right thing to
> do, but most people don't know it's possible or useful.
Thanks a lot Jeff for clarifying this.
I really hoped the answer would be different. Now I need to nag our R&D
teams again :-)

Thanks!

On Mon, Feb 11, 2019 at 8:21 PM Michael Shuler 
wrote:

> On 2/11/19 9:24 AM, shalom sagges wrote:
> > I've successfully upgraded a 2.0 cluster to 2.1 on the way to upgrade to
> > 3.11 (hopefully 3.11.4 if it'd be released very soon).
>
> Very soon. If not today, it will be up tomorrow. :)
>
> --
> Michael
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>


[RELEASE] Apache Cassandra 2.1.21 released

2019-02-11 Thread Michael Shuler
The Cassandra team is pleased to announce the release of Apache
Cassandra version 2.1.21.

Apache Cassandra is a fully distributed database. It is the right choice
when you need scalability and high availability without compromising
performance.

 http://cassandra.apache.org/

Downloads of source and binary distributions are listed in our download
section:

 http://cassandra.apache.org/download/

This version is a bug fix release[1] on the 2.1 series. As always,
please pay attention to the release notes[2] and let us know[3] if you
encounter any problems.

Enjoy!

[1]: (CHANGES.txt)
https://gitbox.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/cassandra-2.1.21
[2]: (NEWS.txt)
https://gitbox.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=NEWS.txt;hb=refs/tags/cassandra-2.1.21
[3]: https://issues.apache.org/jira/browse/CASSANDRA

-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



[RELEASE] Apache Cassandra 2.2.14 released

2019-02-11 Thread Michael Shuler
The Cassandra team is pleased to announce the release of Apache
Cassandra version 2.2.14.

Apache Cassandra is a fully distributed database. It is the right choice
when you need scalability and high availability without compromising
performance.

 http://cassandra.apache.org/

Downloads of source and binary distributions are listed in our download
section:

 http://cassandra.apache.org/download/

This version is a bug fix release[1] on the 2.2 series. As always,
please pay attention to the release notes[2] and let us know[3] if you
encounter any problems.

Enjoy!

[1]: (CHANGES.txt)
https://gitbox.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/cassandra-2.2.14
[2]: (NEWS.txt)
https://gitbox.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=NEWS.txt;hb=refs/tags/cassandra-2.2.14
[3]: https://issues.apache.org/jira/browse/CASSANDRA

-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



[RELEASE] Apache Cassandra 3.0.18 released

2019-02-11 Thread Michael Shuler
The Cassandra team is pleased to announce the release of Apache
Cassandra version 3.0.18.

Apache Cassandra is a fully distributed database. It is the right choice
when you need scalability and high availability without compromising
performance.

 http://cassandra.apache.org/

Downloads of source and binary distributions are listed in our download
section:

 http://cassandra.apache.org/download/

This version is a bug fix release[1] on the 3.0 series. As always,
please pay attention to the release notes[2] and let us know[3] if you
encounter any problems.

Enjoy!

[1]: (CHANGES.txt)
https://gitbox.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/cassandra-3.0.18
[2]: (NEWS.txt)
https://gitbox.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=NEWS.txt;hb=refs/tags/cassandra-3.0.18
[3]: https://issues.apache.org/jira/browse/CASSANDRA

-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



[RELEASE] Apache Cassandra 3.11.4 released

2019-02-11 Thread Michael Shuler
The Cassandra team is pleased to announce the release of Apache
Cassandra version 3.11.4.

Apache Cassandra is a fully distributed database. It is the right choice
when you need scalability and high availability without compromising
performance.

 http://cassandra.apache.org/

Downloads of source and binary distributions are listed in our download
section:

 http://cassandra.apache.org/download/

This version is a bug fix release[1] on the 3.11 series. As always,
please pay attention to the release notes[2] and let us know[3] if you
encounter any problems.

Enjoy!

[1]: (CHANGES.txt)
https://gitbox.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/cassandra-3.11.4
[2]: (NEWS.txt)
https://gitbox.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=NEWS.txt;hb=refs/tags/cassandra-3.11.4
[3]: https://issues.apache.org/jira/browse/CASSANDRA

-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



Re: Max number of windows when using TWCS

2019-02-11 Thread Osman YOZGATLIOĞLU
Hello,

By the way, about https://issues.apache.org/jira/browse/CASSANDRA-13418, I'm 
not sure how to apply this solution.

Do you have a guide about it?
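
My best guess from reading the ticket (unverified - both the compaction option
name and the startup flag below are assumptions I'd want confirmed, and I am
not sure which release line actually ships it) is that it adds an opt-in TWCS
sub-option, applied roughly like this with the Python driver:

# Sketch only: the option name and the -D startup flag are my assumptions
# from reading CASSANDRA-13418; keyspace/table names are hypothetical.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("my_keyspace")

# Re-declare TWCS with the aggressive-expiration option turned on.
session.execute("""
    ALTER TABLE events
    WITH compaction = {
        'class': 'TimeWindowCompactionStrategy',
        'compaction_window_unit': 'DAYS',
        'compaction_window_size': '1',
        'unsafe_aggressive_sstable_expiration': 'true'
    }
""")
cluster.shutdown()

# The nodes presumably also need to be started with something like
# -Dcassandra.allow_unsafe_aggressive_sstable_expiration=true before the
# option is honoured (again, an assumption taken from the ticket).

If someone can confirm the exact names and the versions that include it, that
would help.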


Regards,

Osman


On 12.02.2019 01:42, Nitan Kainth wrote:
That’s right Jeff. That’s why I am wondering why compaction doesn’t get rid of old 
expired sstables?


Regards,
Nitan
Cell: 510 449 9629

On Feb 11, 2019, at 3:53 PM, Jeff Jirsa  wrote:

It's probably not safe. You shouldn't touch the underlying sstables unless 
you're very sure you know what you're doing.


On Mon, Feb 11, 2019 at 1:05 PM Akash Gangil  wrote:
I have in the past tried to delete SSTables manually, but have noticed bits and 
pieces of that data still remain, even though the sstables of that window is 
deleted. So always wondered if playing directly with the underlying filesystem 
is a safe bet?


On Mon, Feb 11, 2019 at 1:01 PM Jonathan Haddad  wrote:
Deleting SSTables manually can be useful if you don't know your TTL up front.  
For example, you have an ETL process that moves your raw Cassandra data into S3 
as parquet files, and you want to be sure that process is completed before you 
delete the data.  You could also start out without setting a TTL and later 
realize you need one.  This is a remarkably common problem.

On Mon, Feb 11, 2019 at 12:51 PM Nitan Kainth  wrote:
Jeff,

It means we have to delete sstables manually?


Regards,
Nitan
Cell: 510 449 9629

On Feb 11, 2019, at 2:40 PM, Jeff Jirsa  wrote:

There's a bit of headache around overlapping sstables being strictly safe to 
delete.  https://issues.apache.org/jira/browse/CASSANDRA-13418 was added to 
allow the "I know it's not technically safe, but just delete it anyway" use 
case. For a lot of people who started using TWCS before 13418, "stop cassandra, 
remove stuff we know is expired, start cassandra" is a not-uncommon pattern in 
very high-write, high-disk-space use cases.



On Mon, Feb 11, 2019 at 12:34 PM Nitan Kainth  wrote:
Hi,
In regards to comment “Purging data is also straightforward, just dropping 
SSTables (by a script) where create date is older than a threshold, we don't 
even need to rely on TTL”

Don’t the old sstables drop by themselves? Once ttl and gc grace seconds have passed, 
the whole sstable will have only tombstones.


Regards,
Nitan
Cell: 510 449 9629

On Feb 11, 2019, at 2:23 PM, DuyHai Doan  wrote:

Purging data is also straightforward, just dropping SSTables (by a script) 
where create date is older than a threshold, we don't even need to rely on TTL


--
Jon Haddad
http://www.rustyrazorblade.com
twitter: rustyrazorblade


--
Akash


Re: Max number of windows when using TWCS

2019-02-11 Thread Nitan Kainth
That’s right Jeff. That’s why I am wondering why compaction doesn’t get rid of old 
expired sstables?


Regards,
Nitan
Cell: 510 449 9629

> On Feb 11, 2019, at 3:53 PM, Jeff Jirsa  wrote:
> 
> It's probably not safe. You shouldn't touch the underlying sstables unless 
> you're very sure you know what you're doing.
> 
> 
>> On Mon, Feb 11, 2019 at 1:05 PM Akash Gangil  wrote:
>> I have in the past tried to delete SSTables manually, but have noticed bits 
>> and pieces of that data still remain, even though the sstables of that 
>> window is deleted. So always wondered if playing directly with the 
>> underlying filesystem is a safe bet?
>> 
>> 
>>> On Mon, Feb 11, 2019 at 1:01 PM Jonathan Haddad  wrote:
>>> Deleting SSTables manually can be useful if you don't know your TTL up 
>>> front.  For example, you have an ETL process that moves your raw Cassandra 
>>> data into S3 as parquet files, and you want to be sure that process is 
>>> completed before you delete the data.  You could also start out without 
>>> setting a TTL and later realize you need one.  This is a remarkably common 
>>> problem.
>>> 
 On Mon, Feb 11, 2019 at 12:51 PM Nitan Kainth  
 wrote:
 Jeff,
 
 It means we have to delete sstables manually?
 
 
 Regards,
 Nitan
 Cell: 510 449 9629
 
> On Feb 11, 2019, at 2:40 PM, Jeff Jirsa  wrote:
> 
> There's a bit of headache around overlapping sstables being strictly safe 
> to delete.  https://issues.apache.org/jira/browse/CASSANDRA-13418 was 
> added to allow the "I know it's not technically safe, but just delete it 
> anyway" use case. For a lot of people who started using TWCS before 
> 13418, "stop cassandra, remove stuff we know is expired, start cassandra" 
> is a not-uncommon pattern in very high-write, high-disk-space use cases. 
> 
> 
> 
>> On Mon, Feb 11, 2019 at 12:34 PM Nitan Kainth  
>> wrote:
>> Hi,
>> In regards to comment “Purging data is also straightforward, just 
>> dropping SSTables (by a script) where create date is older than a 
>> threshold, we don't even need to rely on TTL”
>> 
>> Doesn’t the old sstables drop by itself? One ttl and gc grace seconds 
>> past whole sstable will have only tombstones.
>> 
>> 
>> Regards,
>> Nitan
>> Cell: 510 449 9629
>> 
>>> On Feb 11, 2019, at 2:23 PM, DuyHai Doan  wrote:
>>> 
>>> Purging data is also straightforward, just dropping SSTables (by a 
>>> script) where create date is older than a threshold, we don't even need 
>>> to rely on TTL
>>> 
>>> 
>>> -- 
>>> Jon Haddad
>>> http://www.rustyrazorblade.com
>>> twitter: rustyrazorblade
>> 
>> 
>> -- 
>> Akash


unsubscribe

2019-02-11 Thread Ben Standefer
unsubscribe

–Ben Standefer

Sent via Superhuman ( https://sprh.mn/?vip=benstande...@gmail.com )

Re: Max number of windows when using TWCS

2019-02-11 Thread Jeff Jirsa
It's probably not safe. You shouldn't touch the underlying sstables unless
you're very sure you know what you're doing.


On Mon, Feb 11, 2019 at 1:05 PM Akash Gangil  wrote:

> I have in the past tried to delete SSTables manually, but have noticed
> bits and pieces of that data still remain, even though the sstables of that
> window is deleted. So always wondered if playing directly with the
> underlying filesystem is a safe bet?
>
>
> On Mon, Feb 11, 2019 at 1:01 PM Jonathan Haddad  wrote:
>
>> Deleting SSTables manually can be useful if you don't know your TTL up
>> front.  For example, you have an ETL process that moves your raw Cassandra
>> data into S3 as parquet files, and you want to be sure that process is
>> completed before you delete the data.  You could also start out without
>> setting a TTL and later realize you need one.  This is a remarkably common
>> problem.
>>
>> On Mon, Feb 11, 2019 at 12:51 PM Nitan Kainth 
>> wrote:
>>
>>> Jeff,
>>>
>>> It means we have to delete sstables manually?
>>>
>>>
>>> Regards,
>>>
>>> Nitan
>>>
>>> Cell: 510 449 9629
>>>
>>> On Feb 11, 2019, at 2:40 PM, Jeff Jirsa  wrote:
>>>
>>> There's a bit of headache around overlapping sstables being strictly
>>> safe to delete.  https://issues.apache.org/jira/browse/CASSANDRA-13418
>>> was added to allow the "I know it's not technically safe, but just delete
>>> it anyway" use case. For a lot of people who started using TWCS before
>>> 13418, "stop cassandra, remove stuff we know is expired, start cassandra"
>>> is a not-uncommon pattern in very high-write, high-disk-space use cases.
>>>
>>>
>>>
>>> On Mon, Feb 11, 2019 at 12:34 PM Nitan Kainth 
>>> wrote:
>>>
 Hi,
 In regards to comment “Purging data is also straightforward, just
 dropping SSTables (by a script) where create date is older than a
 threshold, we don't even need to rely on TTL”

 Doesn’t the old sstables drop by itself? One ttl and gc grace seconds
 past whole sstable will have only tombstones.


 Regards,

 Nitan

 Cell: 510 449 9629

 On Feb 11, 2019, at 2:23 PM, DuyHai Doan  wrote:

 Purging data is also straightforward, just dropping SSTables (by a
 script) where create date is older than a threshold, we don't even need to
 rely on TTL


>>
>> --
>> Jon Haddad
>> http://www.rustyrazorblade.com
>> twitter: rustyrazorblade
>>
>
>
> --
> Akash
>


Re: Max number of windows when using TWCS

2019-02-11 Thread Akash Gangil
I have in the past tried to delete SSTables manually, but have noticed bits
and pieces of that data still remain, even though the sstables of that
window is deleted. So always wondered if playing directly with the
underlying filesystem is a safe bet?


On Mon, Feb 11, 2019 at 1:01 PM Jonathan Haddad  wrote:

> Deleting SSTables manually can be useful if you don't know your TTL up
> front.  For example, you have an ETL process that moves your raw Cassandra
> data into S3 as parquet files, and you want to be sure that process is
> completed before you delete the data.  You could also start out without
> setting a TTL and later realize you need one.  This is a remarkably common
> problem.
>
> On Mon, Feb 11, 2019 at 12:51 PM Nitan Kainth 
> wrote:
>
>> Jeff,
>>
>> It means we have to delete sstables manually?
>>
>>
>> Regards,
>>
>> Nitan
>>
>> Cell: 510 449 9629
>>
>> On Feb 11, 2019, at 2:40 PM, Jeff Jirsa  wrote:
>>
>> There's a bit of headache around overlapping sstables being strictly safe
>> to delete.  https://issues.apache.org/jira/browse/CASSANDRA-13418 was
>> added to allow the "I know it's not technically safe, but just delete it
>> anyway" use case. For a lot of people who started using TWCS before 13418,
>> "stop cassandra, remove stuff we know is expired, start cassandra" is a
>> not-uncommon pattern in very high-write, high-disk-space use cases.
>>
>>
>>
>> On Mon, Feb 11, 2019 at 12:34 PM Nitan Kainth 
>> wrote:
>>
>>> Hi,
>>> In regards to comment “Purging data is also straightforward, just
>>> dropping SSTables (by a script) where create date is older than a
>>> threshold, we don't even need to rely on TTL”
>>>
>>> Doesn’t the old sstables drop by itself? One ttl and gc grace seconds
>>> past whole sstable will have only tombstones.
>>>
>>>
>>> Regards,
>>>
>>> Nitan
>>>
>>> Cell: 510 449 9629
>>>
>>> On Feb 11, 2019, at 2:23 PM, DuyHai Doan  wrote:
>>>
>>> Purging data is also straightforward, just dropping SSTables (by a
>>> script) where create date is older than a threshold, we don't even need to
>>> rely on TTL
>>>
>>>
>
> --
> Jon Haddad
> http://www.rustyrazorblade.com
> twitter: rustyrazorblade
>


-- 
Akash


Re: Max number of windows when using TWCS

2019-02-11 Thread Jonathan Haddad
Deleting SSTables manually can be useful if you don't know your TTL up
front.  For example, you have an ETL process that moves your raw Cassandra
data into S3 as parquet files, and you want to be sure that process is
completed before you delete the data.  You could also start out without
setting a TTL and later realize you need one.  This is a remarkably common
problem.
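
Something along these lines, for example - all the names are made up, and the
check would be whatever your ETL job actually uses to signal completion:

# Sketch: only treat a day's SSTables as deletable once the ETL job has
# written a _SUCCESS marker for that day to S3 (bucket/key names are made up).
import boto3
from botocore.exceptions import ClientError

def etl_completed(day: str) -> bool:
    s3 = boto3.client("s3")
    try:
        s3.head_object(Bucket="my-archive-bucket",
                       Key=f"cassandra/parquet/{day}/_SUCCESS")
        return True
    except ClientError:
        return False

if etl_completed("2019-02-10"):
    print("2019-02-10 window archived; safe to delete those sstables")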

On Mon, Feb 11, 2019 at 12:51 PM Nitan Kainth  wrote:

> Jeff,
>
> It means we have to delete sstables manually?
>
>
> Regards,
>
> Nitan
>
> Cell: 510 449 9629
>
> On Feb 11, 2019, at 2:40 PM, Jeff Jirsa  wrote:
>
> There's a bit of headache around overlapping sstables being strictly safe
> to delete.  https://issues.apache.org/jira/browse/CASSANDRA-13418 was
> added to allow the "I know it's not technically safe, but just delete it
> anyway" use case. For a lot of people who started using TWCS before 13418,
> "stop cassandra, remove stuff we know is expired, start cassandra" is a
> not-uncommon pattern in very high-write, high-disk-space use cases.
>
>
>
> On Mon, Feb 11, 2019 at 12:34 PM Nitan Kainth 
> wrote:
>
>> Hi,
>> In regards to comment “Purging data is also straightforward, just
>> dropping SSTables (by a script) where create date is older than a
>> threshold, we don't even need to rely on TTL”
>>
>> Doesn’t the old sstables drop by itself? One ttl and gc grace seconds
>> past whole sstable will have only tombstones.
>>
>>
>> Regards,
>>
>> Nitan
>>
>> Cell: 510 449 9629
>>
>> On Feb 11, 2019, at 2:23 PM, DuyHai Doan  wrote:
>>
>> Purging data is also straightforward, just dropping SSTables (by a
>> script) where create date is older than a threshold, we don't even need to
>> rely on TTL
>>
>>

-- 
Jon Haddad
http://www.rustyrazorblade.com
twitter: rustyrazorblade


Re: Max number of windows when using TWCS

2019-02-11 Thread Nitan Kainth
Jeff,

It means we have to delete sstables manually?


Regards,
Nitan
Cell: 510 449 9629

> On Feb 11, 2019, at 2:40 PM, Jeff Jirsa  wrote:
> 
> There's a bit of headache around overlapping sstables being strictly safe to 
> delete.  https://issues.apache.org/jira/browse/CASSANDRA-13418 was added to 
> allow the "I know it's not technically safe, but just delete it anyway" use 
> case. For a lot of people who started using TWCS before 13418, "stop 
> cassandra, remove stuff we know is expired, start cassandra" is a 
> not-uncommon pattern in very high-write, high-disk-space use cases. 
> 
> 
> 
>> On Mon, Feb 11, 2019 at 12:34 PM Nitan Kainth  wrote:
>> Hi,
>> In regards to comment “Purging data is also straightforward, just dropping 
>> SSTables (by a script) where create date is older than a threshold, we don't 
>> even need to rely on TTL”
>> 
>> Doesn’t the old sstables drop by itself? One ttl and gc grace seconds past 
>> whole sstable will have only tombstones.
>> 
>> 
>> Regards,
>> Nitan
>> Cell: 510 449 9629
>> 
>>> On Feb 11, 2019, at 2:23 PM, DuyHai Doan  wrote:
>>> 
>>> Purging data is also straightforward, just dropping SSTables (by a script) 
>>> where create date is older than a threshold, we don't even need to rely on 
>>> TTL


Re: Max number of windows when using TWCS

2019-02-11 Thread DuyHai Doan
thanks for the pointer Jeff

On Mon, Feb 11, 2019 at 9:40 PM Jeff Jirsa  wrote:

> There's a bit of headache around overlapping sstables being strictly safe
> to delete.  https://issues.apache.org/jira/browse/CASSANDRA-13418 was
> added to allow the "I know it's not technically safe, but just delete it
> anyway" use case. For a lot of people who started using TWCS before 13418,
> "stop cassandra, remove stuff we know is expired, start cassandra" is a
> not-uncommon pattern in very high-write, high-disk-space use cases.
>
>
>
> On Mon, Feb 11, 2019 at 12:34 PM Nitan Kainth 
> wrote:
>
>> Hi,
>> In regards to comment “Purging data is also straightforward, just
>> dropping SSTables (by a script) where create date is older than a
>> threshold, we don't even need to rely on TTL”
>>
>> Doesn’t the old sstables drop by itself? One ttl and gc grace seconds
>> past whole sstable will have only tombstones.
>>
>>
>> Regards,
>>
>> Nitan
>>
>> Cell: 510 449 9629
>>
>> On Feb 11, 2019, at 2:23 PM, DuyHai Doan  wrote:
>>
>> Purging data is also straightforward, just dropping SSTables (by a
>> script) where create date is older than a threshold, we don't even need to
>> rely on TTL
>>
>>


Re: Max number of windows when using TWCS

2019-02-11 Thread Jeff Jirsa
There's a bit of headache around overlapping sstables being strictly safe
to delete.  https://issues.apache.org/jira/browse/CASSANDRA-13418 was added
to allow the "I know it's not technically safe, but just delete it anyway"
use case. For a lot of people who started using TWCS before 13418, "stop
cassandra, remove stuff we know is expired, start cassandra" is a
not-uncommon pattern in very high-write, high-disk-space use cases.



On Mon, Feb 11, 2019 at 12:34 PM Nitan Kainth  wrote:

> Hi,
> In regards to comment “Purging data is also straightforward, just
> dropping SSTables (by a script) where create date is older than a
> threshold, we don't even need to rely on TTL”
>
> Doesn’t the old sstables drop by itself? One ttl and gc grace seconds past
> whole sstable will have only tombstones.
>
>
> Regards,
>
> Nitan
>
> Cell: 510 449 9629
>
> On Feb 11, 2019, at 2:23 PM, DuyHai Doan  wrote:
>
> Purging data is also straightforward, just dropping SSTables (by a script)
> where create date is older than a threshold, we don't even need to rely on
> TTL
>
>


Re: Max number of windows when using TWCS

2019-02-11 Thread Nitan Kainth
Hi,
In regards to comment “Purging data is also straightforward, just dropping 
SSTables (by a script) where create date is older than a threshold, we don't 
even need to rely on TTL”

Don’t the old sstables drop by themselves? Once ttl and gc grace seconds have passed, 
the whole sstable will have only tombstones.


Regards,
Nitan
Cell: 510 449 9629

> On Feb 11, 2019, at 2:23 PM, DuyHai Doan  wrote:
> 
> Purging data is also straightforward, just dropping SSTables (by a script) 
> where create date is older than a threshold, we don't even need to rely on TTL


Re: Max number of windows when using TWCS

2019-02-11 Thread DuyHai Doan
No worry about overlapping; the use-case is about events/timeseries and there
is almost no delay, so it should be fine.

On a side note, since we are guaranteed to have 1 SSTable/day of
ingestion, it is very easy to "emulate" incremental backup. You just need
to find the generated SSTable with the latest create date and back it up
every day at midnight with a script.

Purging data is also straightforward, just dropping SSTables (by a script)
where create date is older than a threshold, we don't even need to rely on
TTL
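
Roughly like this (paths and retention are only an example, and per the rest
of this thread you'd only run the purge part when you're certain the whole
window is expired, ideally with the node stopped):

# Rough sketch of the backup/purge script. The data directory path is just an
# example, and the purge is only reasonable when the whole window is expired
# and you are comfortable with the caveats discussed elsewhere in this thread.
import glob
import os
import shutil
import time

DATA_GLOB = "/var/lib/cassandra/data/my_ks/events-*/*-Data.db"  # hypothetical
BACKUP_DIR = "/backups/events"
RETENTION_DAYS = 365

os.makedirs(BACKUP_DIR, exist_ok=True)
data_files = glob.glob(DATA_GLOB)

# "Incremental backup": copy every component file of the newest sstable.
if data_files:
    newest = max(data_files, key=os.path.getmtime)
    for component in glob.glob(newest[:-len("Data.db")] + "*"):
        shutil.copy2(component, BACKUP_DIR)

# Purge: delete every sstable whose files are older than the retention window.
cutoff = time.time() - RETENTION_DAYS * 86400
for data_file in data_files:
    if os.path.getmtime(data_file) < cutoff:
        for component in glob.glob(data_file[:-len("Data.db")] + "*"):
            os.remove(component)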



On Mon, Feb 11, 2019 at 9:19 PM Jeff Jirsa  wrote:

> Wild ass guess based on a large use case I knew about at the time
>
> If you go above that, I expect it’d largely be fine as long as you were
> sure they weren’t overlapping so reads only ever touched a small subset of
> the windows (ideally 1).
>
> If you have one day windows and every read touches all of the windows,
> you’re going to have a bad time.
>
> --
> Jeff Jirsa
>
>
> On Feb 11, 2019, at 12:12 PM, DuyHai Doan  wrote:
>
> Hello users
>
> On the official documentation for TWCS (
> http://cassandra.apache.org/doc/latest/operating/compaction.html#time-window-compactionstrategy)
> it is advised to select the windows unit and size so that the total number
> of windows intervals is around 20-30.
>
> Is there any explanation for this range of 20-30 ? What if we exceed this
> range, let's say having 1 day windows and keeping data for 1year, thus
> having indeed 356 intervals ? What can go wrong with this ?
>
> Regards
>
> Duy Hai DOAN
>
>


Re: Max number of windows when using TWCS

2019-02-11 Thread Jeff Jirsa
Wild ass guess based on a large use case I knew about at the time

If you go above that, I expect it’d largely be fine as long as you were sure 
they weren’t overlapping so reads only ever touched a small subset of the 
windows (ideally 1).

If you have one day windows and every read touches all of the windows, you’re 
going to have a bad time. 
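
As a rough illustration of that arithmetic (my own numbers, not from the docs -
just the retention/window ratio):

# Retention / window-count arithmetic behind the 20-30 windows guidance.
def suggested_window_days(retention_days, target_windows=25):
    return max(1, round(retention_days / target_windows))

for retention in (30, 90, 365):
    window = suggested_window_days(retention)
    print(f"{retention}d retention -> ~{window}-day windows "
          f"({retention // window} windows)")

# 365 days of data in 1-day windows would instead mean 365 windows, which is
# the "every read touches all of the windows" case described above.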

-- 
Jeff Jirsa


> On Feb 11, 2019, at 12:12 PM, DuyHai Doan  wrote:
> 
> Hello users
> 
> On the official documentation for TWCS 
> (http://cassandra.apache.org/doc/latest/operating/compaction.html#time-window-compactionstrategy)
>  it is advised to select the windows unit and size so that the total number 
> of windows intervals is around 20-30.
> 
> Is there any explanation for this range of 20-30 ? What if we exceed this 
> range, let's say having 1 day windows and keeping data for 1year, thus having 
> indeed 356 intervals ? What can go wrong with this ?
> 
> Regards
> 
> Duy Hai DOAN


Max number of windows when using TWCS

2019-02-11 Thread DuyHai Doan
Hello users

On the official documentation for TWCS (
http://cassandra.apache.org/doc/latest/operating/compaction.html#time-window-compactionstrategy)
it is advised to select the window unit and size so that the total number
of window intervals is around 20-30.

Is there any explanation for this range of 20-30? What if we exceed this
range, let's say having 1 day windows and keeping data for 1 year, thus
having 365 intervals? What can go wrong with this?

Regards

Duy Hai DOAN


Re: Upgrade From 2.0 to 2.1

2019-02-11 Thread Michael Shuler
On 2/11/19 9:24 AM, shalom sagges wrote:
> I've successfully upgraded a 2.0 cluster to 2.1 on the way to upgrade to
> 3.11 (hopefully 3.11.4 if it'd be released very soon).

Very soon. If not today, it will be up tomorrow. :)

-- 
Michael

-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



Re: Upgrade From 2.0 to 2.1

2019-02-11 Thread Jeff Jirsa
On Mon, Feb 11, 2019 at 7:24 AM shalom sagges 
wrote:

> Hi All,
>
> I've successfully upgraded a 2.0 cluster to 2.1 on the way to upgrade to
> 3.11 (hopefully 3.11.4 if it'd be released very soon).
>
> I have 2 small questions:
>
>1. Currently the Datastax clients are enforcing Protocol Version 2 to
>prevent mixed cluster issues. Do I need now to enforce Protocol Version 3
>while upgrading from 2.1 to 3.11 or can I still use Protocol Version 2?
>
You'll need to go to v3 for 3.11. Congratulations on being aware enough to
do this - advanced upgrade coordination, it's absolutely the right thing to
do, but most people don't know it's possible or useful.
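
With the DataStax Python driver, for example, the pin is just the
protocol_version argument on Cluster (other drivers have an equivalent
setting; the contact point here is made up):

# Example of pinning the native protocol version with the DataStax Python
# driver during the rolling upgrade.
from cassandra.cluster import Cluster

# Pin v3 explicitly so a reconnecting client never tries to negotiate v4
# with an already-upgraded node while the cluster is still mixed.
cluster = Cluster(["10.0.0.1"], protocol_version=3)
session = cluster.connect()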

>
>2. After the upgrade, I found that system table NodeIdInfo has not
>been upgraded, i.e. I still see it in *-jb-* convention. Does this
>mean that this table is obsolete and can be removed?
>
It is obsolete and can be removed.


>
> Thanks!
>
>
>


Local jmx changes get reverted after restart of a neighbouring node in Cassandra cluster

2019-02-11 Thread Rajsekhar Mallick
Hello Team,

I have been trying to use the sjk tool/jmxterm jar utilities to change the
compaction strategy of a table locally from STCS to LCS, without changing
the schema.
I have been trying this in a lower environment first before implementing
the same in the production environment.
The change did work on one of the nodes. Autocompaction was triggered after
flush for the table.
After making the change on one node, I made the same change on another node
in the cluster.
The change again went through. Then, to verify whether local changes revert
after restart, I restarted one of the 2 nodes where the changes were made.
The change on that node got reverted, but the change also rolled back on the
other node (which wasn't restarted).
I did check the Datastax blogs, but didn't find any such explanation.
Kindly help me understand why a restart of one node would revert local JMX
changes made on another node.
Does a node restart in the cluster trigger a schema update for the cluster?
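
For reference, this is roughly how I am applying the change; the jar path,
JMX port, MBean name and attribute name below are from memory, so treat them
as assumptions to verify against your build:

# Sketch of the change, driven through jmxterm from Python.
import subprocess

MBEAN = ("org.apache.cassandra.db:type=Tables,"
         "keyspace=my_ks,table=my_table")            # hypothetical table
NEW_PARAMS = '{"class":"LeveledCompactionStrategy"}'

commands = "\n".join([
    f"bean {MBEAN}",
    f"set CompactionParametersJson {NEW_PARAMS}",
    "exit",
])

subprocess.run(
    ["java", "-jar", "jmxterm.jar", "-l", "localhost:7199", "-n"],
    input=commands, text=True, check=True)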

Thanks,
Rajsekhar Mallick


Upgrade From 2.0 to 2.1

2019-02-11 Thread shalom sagges
Hi All,

I've successfully upgraded a 2.0 cluster to 2.1 on the way to upgrade to
3.11 (hopefully 3.11.4 if it'd be released very soon).

I have 2 small questions:

   1. Currently the Datastax clients are enforcing Protocol Version 2 to
   prevent mixed cluster issues. Do I now need to enforce Protocol Version 3
   while upgrading from 2.1 to 3.11, or can I still use Protocol Version 2?

   2. After the upgrade, I found that system table NodeIdInfo has not been
   upgraded, i.e. I still see it in *-jb-* convention. Does this mean that
   this table is obsolete and can be removed?


Thanks!


Re: Cassandra.log

2019-02-11 Thread Rahul Reddy
Thank you


On Sun, Feb 10, 2019, 5:46 PM Sri Rathan Rangisetti <
srirathan.rangise...@gmail.com> wrote:

> It will be part of the cassandra startup script; if you are using RHEL it's
> located at /etc/init.d/cassandra
>
>
> Regards
> Sri Rathan
>
> On Sun, Feb 10, 2019, 2:45 PM Rahul Reddy 
> wrote:
>
>> Hello,
>>
>> I'm using Cassandra 3.11.1 and trying to change the name of the
>> cassandra.log file (which is generated during startup of Cassandra). Can
>> someone point me to the configuration file where it is configured?
>> system.log/debug.log are configured in logback.xml, but not cassandra.log.
>>
>>
>> Thanks
>>
>


Re: High GC pauses leading to client seeing impact

2019-02-11 Thread Elliott Sims
I would strongly suggest you consider an upgrade to 3.11.x.  I found it
decreased space needed by about 30% in addition to significantly lowering
GC.

As a first step, though, why not just revert to CMS for now if that was
working ok for you?  Then you can convert one host for diagnosis/tuning so
the cluster as a whole stays functional.

That's also a pretty old version of the JDK to be using G1.  I would
definitely upgrade that to 1.8u202 and see if the problem goes away.
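
It may also help to pull the long safepoint entries out of the GC log to see
how often this is happening. A quick sketch, assuming application-stopped-time
logging is enabled and adjusting the path to wherever your GC log lives:

# Quick sketch: print GC log lines where stopping threads (time-to-safepoint)
# took longer than a threshold. Path and threshold are examples; the lines
# only appear with -XX:+PrintGCApplicationStoppedTime style logging enabled.
import re

THRESHOLD_SECS = 1.0
pattern = re.compile(
    r"Total time for which application threads were stopped: ([\d.]+) seconds, "
    r"Stopping threads took: ([\d.]+) seconds")

with open("/var/log/cassandra/gc.log") as gc_log:
    for line in gc_log:
        match = pattern.search(line)
        if match and float(match.group(2)) >= THRESHOLD_SECS:
            print(line.rstrip())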

On Sun, Feb 10, 2019, 10:22 PM Rajsekhar Mallick  wrote:

> Hello Team,
>
> I have a cluster of 17 nodes in production.(8 and 9 nodes in 2 DC).
> Cassandra version: 2.0.11
> Client connecting using thrift over port 9160
> Jdk version : 1.8.066
> GC used : G1GC (16GB heap)
> Other GC settings:
> MaxGCPauseMillis=200
> ParallelGCThreads=32
> ConcGCThreads=10
> InitiatingHeapOccupancyPercent=50
> Number of cpu cores for each system : 40
> Memory size: 185 GB
> Read/sec : 300 /sec on each node
> Writes/sec : 300/sec on each node
> Compaction strategy used : Size tiered compaction strategy
>
> Identified issues in the cluster:
> 1. Disk space usage across all nodes in the cluster is 80%. We are
> currently working on adding more storage on each node
> 2. There are 2 tables for which we keep on seeing a large number of
> tombstones. One of the tables has read requests seeing 120 tombstone cells in
> the last 5 mins as compared to 4 live cells. Tombstone warnings and error
> messages of queries getting aborted are also seen.
>
> Current issues seen:
> 1. We keep on seeing GC pauses of a few minutes randomly across nodes in the
> cluster. GC pauses of 120 seconds, even 770 seconds, are also seen.
> 2. This leads to nodes getting stalled and clients seeing direct impact.
> 3. The GC pauses we see are not during any of the G1GC phases. The GC log
> message prints “Time to stop threads took 770 seconds”. So it is not the
> garbage collector doing any work; rather, stopping the threads at a safepoint
> is taking that much time.
> 4. This issue has surfaced recently after we changed from 8GB (CMS) to
> 16GB (G1GC) across all nodes in the cluster.
>
> Kindly do help on the above issue. I am not able to understand exactly whether
> the GC is wrongly tuned, or if this is something else.
>
> Thanks,
> Rajsekhar Mallick
>
>
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>