[jira] [Comment Edited] (CASSANDRA-9776) Tombstone Compaction Not Getting Triggered on a single SSTable even when the % of Droppable > 85%

2015-07-13 Thread Parth Setya (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14625782#comment-14625782
 ] 

Parth Setya edited comment on CASSANDRA-9776 at 7/14/15 5:38 AM:
-

[Edit: I also set unchecked_tombstone_compaction to true. No impact.]

I started a steady write load after the major compaction. However, I do have some 
considerations here:
1. I set tombstone_compaction_interval to 0 (assuming this does not disable 
tombstone compaction).
2. I started the writes *after* the single large SSTable (containing > 85% 
droppable tombstones) was created.
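
For reference, the two settings discussed above are compaction subproperties that can be set per table. A minimal CQL sketch (keyspace and table names are hypothetical; values match those described in the comment, assuming SizeTieredCompactionStrategy on Cassandra 2.0):

```cql
-- Hypothetical table; intervals are in seconds (0 = no minimum SSTable age).
-- unchecked_tombstone_compaction makes Cassandra skip the pre-compaction
-- overlap check before running a single-SSTable tombstone compaction.
ALTER TABLE my_keyspace.my_table
  WITH compaction = {
    'class': 'SizeTieredCompactionStrategy',
    'tombstone_threshold': '0.2',
    'tombstone_compaction_interval': '0',
    'unchecked_tombstone_compaction': 'true'
  };
```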




was (Author: tehkidnextdoor):
[Edit: I also set *unchecked_tombstone_compaction* to true. No impact.]

I started a steady write load after the major compaction. However, I do have some 
considerations here:
1. I set tombstone_compaction_interval to 0 (assuming this does not disable 
tombstone compaction).
2. I started the writes *after* the single large SSTable (containing > 85% 
droppable tombstones) was created.



> Tombstone Compaction Not Getting Triggered on a single SSTable even when the 
> % of Droppable > 85%
> -
>
> Key: CASSANDRA-9776
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9776
> Project: Cassandra
>  Issue Type: New Feature
> Environment: RHEL
> Apache Cassandra v 2.0.14
>Reporter: Parth Setya
> Fix For: 2.0.14
>
>
> The following steps can be used to replicate the issue:
> 1. Reduce gc_grace_seconds to 18000
> 2. Set tombstone_compaction_interval to 1 day
> 3. Insert 9 million rows
> 4. Delete 9 million rows
> 5. Insert 1 million rows
> 6. Run major compaction so that the tombstones for the 9 million rows and 
> the expiring columns for the 1 million rows end up in the same SSTable. (In 
> my case the size of the resultant SSTable was 963M.)
> Note: I also ran "nodetool cfstats" and found the number of keys in the CF 
> to be 1000 (as expected).
> Tombstone compaction on the resultant SSTable should have been triggered one 
> day after the creation of that SSTable, but nothing happened for > 2 days.
> After 3 days I ran major compaction and the size of the table was reduced to 213M.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)



[jira] [Commented] (CASSANDRA-9776) Tombstone Compaction Not Getting Triggered on a single SSTable even when the % of Droppable > 85%

2015-07-13 Thread Parth Setya (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14625782#comment-14625782
 ] 

Parth Setya commented on CASSANDRA-9776:


I started a steady write load after the major compaction. However, I do have some 
considerations here:
1. I set tombstone_compaction_interval to 0 (assuming this does not disable 
tombstone compaction).
2. I started the writes **after** the single large SSTable (containing > 85% 
droppable tombstones) was created.


> Tombstone Compaction Not Getting Triggered on a single SSTable even when the 
> % of Droppable > 85%
> -
>
> Key: CASSANDRA-9776
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9776
> Project: Cassandra
>  Issue Type: New Feature
> Environment: RHEL
> Apache Cassandra v 2.0.14
>Reporter: Parth Setya
> Fix For: 2.0.14
>
>
> The following steps can be used to replicate the issue:
> 1. Reduce gc_grace_seconds to 18000
> 2. Set tombstone_compaction_interval to 1 day
> 3. Insert 9 million rows
> 4. Delete 9 million rows
> 5. Insert 1 million rows
> 6. Run major compaction so that the tombstones for the 9 million rows and 
> the expiring columns for the 1 million rows end up in the same SSTable. (In 
> my case the size of the resultant SSTable was 963M.)
> Note: I also ran "nodetool cfstats" and found the number of keys in the CF 
> to be 1000 (as expected).
> Tombstone compaction on the resultant SSTable should have been triggered one 
> day after the creation of that SSTable, but nothing happened for > 2 days.
> After 3 days I ran major compaction and the size of the table was reduced to 213M.





[jira] [Created] (CASSANDRA-9776) Tombstone Compaction Not Getting Triggered on a single SSTable even when the % of Droppable > 85%

2015-07-09 Thread Parth Setya (JIRA)
Parth Setya created CASSANDRA-9776:
--

 Summary: Tombstone Compaction Not Getting Triggered on a single 
SSTable even when the % of Droppable > 85%
 Key: CASSANDRA-9776
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9776
 Project: Cassandra
  Issue Type: New Feature
 Environment: RHEL
Apache Cassandra v 2.0.14
Reporter: Parth Setya
 Fix For: 2.0.14


The following steps can be used to replicate the issue:
1. Reduce gc_grace_seconds to 18000
2. Set tombstone_compaction_interval to 1 day
3. Insert 9 million rows
4. Delete 9 million rows
5. Insert 1 million rows
6. Run major compaction so that the tombstones for the 9 million rows and the 
expiring columns for the 1 million rows end up in the same SSTable. (In my case 
the size of the resultant SSTable was 963M.)

Note: I also ran "nodetool cfstats" and found the number of keys in the CF to 
be 1000 (as expected).

Tombstone compaction on the resultant SSTable should have been triggered one 
day after the creation of that SSTable, but nothing happened for > 2 days.
After 3 days I ran major compaction and the size of the table was reduced to 213M.
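
A minimal CQL sketch of steps 1 and 2 above (keyspace and table names are hypothetical; both settings take seconds, so 1 day = 86400):

```cql
-- Hypothetical table; applies the gc_grace_seconds and
-- tombstone_compaction_interval values from the reproduction steps.
ALTER TABLE my_keyspace.my_table
  WITH gc_grace_seconds = 18000
  AND compaction = {
    'class': 'SizeTieredCompactionStrategy',
    'tombstone_compaction_interval': '86400'
  };

-- Step 6 (major compaction) would then be run from the shell with:
--   nodetool compact my_keyspace my_table
```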






[jira] [Issue Comment Deleted] (CASSANDRA-8479) Timeout Exception on Node Failure in Remote Data Center

2014-12-29 Thread Parth Setya (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Setya updated CASSANDRA-8479:
---
Comment: was deleted

(was: We are working on getting the trace-level logs. Meanwhile, can you comment 
on the following?
We are currently using the Hector (1.1.0.E001) API to query data from C*. Do 
you think this could be a Hector-related issue?
Which client did you use when you tried to reproduce the issue?)

> Timeout Exception on Node Failure in Remote Data Center
> ---
>
> Key: CASSANDRA-8479
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8479
> Project: Cassandra
>  Issue Type: Bug
>  Components: API, Core, Tools
> Environment: Unix, Cassandra 2.0.11
>Reporter: Amit Singh Chowdhery
>Assignee: Ryan McGuire
>Priority: Minor
>
> Issue faced:
> We have a geo-redundant setup with 2 data centers having 3 nodes each. When we 
> bring a single Cassandra node down in DC2 with kill -9, reads fail on DC1 
> with TimedOutException for a brief period (~15-20 sec).
> Reference:
> A ticket has already been opened and resolved; the link is provided below:
> https://issues.apache.org/jira/browse/CASSANDRA-8352
> Activity done as per the resolution provided:
> Upgraded to Cassandra 2.0.11.
> We have two 3-node clusters in two different DCs, and if one or more of the 
> nodes go down in one data center, ~5-10% traffic failure is observed on the 
> other.
> CL: LOCAL_QUORUM
> RF=3
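
The topology described above (two DCs, 3 nodes each, RF=3, LOCAL_QUORUM reads) would correspond to a schema along these lines (keyspace and data center names are hypothetical):

```cql
-- Hypothetical keyspace; 3 replicas in each of the two data centers.
CREATE KEYSPACE my_keyspace
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'DC1': 3,
    'DC2': 3
  };

-- Reads at CL LOCAL_QUORUM need 2 of the 3 replicas in the local DC to
-- respond, so a node failure in the remote DC should not, in principle,
-- cause local read timeouts.
```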





[jira] [Commented] (CASSANDRA-8479) Timeout Exception on Node Failure in Remote Data Center

2014-12-29 Thread Parth Setya (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14260043#comment-14260043
 ] 

Parth Setya commented on CASSANDRA-8479:


We are working on getting the trace-level logs. Meanwhile, can you comment on 
the following?
We are currently using the Hector (1.1.0.E001) API to query data from C*. Do 
you think this could be a Hector-related issue?
Which client did you use when you tried to reproduce the issue?

> Timeout Exception on Node Failure in Remote Data Center
> ---
>
> Key: CASSANDRA-8479
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8479
> Project: Cassandra
>  Issue Type: Bug
>  Components: API, Core, Tools
> Environment: Unix, Cassandra 2.0.11
>Reporter: Amit Singh Chowdhery
>Assignee: Ryan McGuire
>Priority: Minor
>
> Issue faced:
> We have a geo-redundant setup with 2 data centers having 3 nodes each. When we 
> bring a single Cassandra node down in DC2 with kill -9, reads fail on DC1 
> with TimedOutException for a brief period (~15-20 sec).
> Reference:
> A ticket has already been opened and resolved; the link is provided below:
> https://issues.apache.org/jira/browse/CASSANDRA-8352
> Activity done as per the resolution provided:
> Upgraded to Cassandra 2.0.11.
> We have two 3-node clusters in two different DCs, and if one or more of the 
> nodes go down in one data center, ~5-10% traffic failure is observed on the 
> other.
> CL: LOCAL_QUORUM
> RF=3



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)



[jira] [Commented] (CASSANDRA-8405) Is there a way to override the current MAX_TTL value from 20 yrs to a value > 20 yrs.

2014-12-04 Thread Parth Setya (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14234984#comment-14234984
 ] 

Parth Setya commented on CASSANDRA-8405:


Thanks for the response. Yes, I think we can do that, but then we would not be 
able to utilize the "auto purge and auto deletion" property (data is truncated 
automatically when the TTL is reached).
Our API was built on the assumption that expired data will be deleted 
automatically.


> Is there a way to override the current MAX_TTL value from 20 yrs to a value > 
> 20 yrs.
> -
>
> Key: CASSANDRA-8405
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8405
> Project: Cassandra
>  Issue Type: Wish
>  Components: Core
> Environment: Linux(RH)
>Reporter: Parth Setya
>Priority: Blocker
>  Labels: MAX_TTL, date, expiration, ttl
>
> We are migrating data from Oracle to C*.
> The expiration date for a certain column was set to 90 years in Oracle.
> Here we are not able to make that value go beyond 20 years.
> Could you recommend a way to override this value?





[jira] [Comment Edited] (CASSANDRA-8405) Is there a way to override the current MAX_TTL value from 20 yrs to a value > 20 yrs.

2014-12-03 Thread Parth Setya (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14233970#comment-14233970
 ] 

Parth Setya edited comment on CASSANDRA-8405 at 12/4/14 7:12 AM:
-

Thanks [~slebresne]. The response was prompt and logical. However, our use case 
is of a slightly different nature.
Our use case:
We keep track of that TTL (the expiry date, in our case) and generate event(s) 
based on its value once the TTL is reached.

Could you recommend a solution for this?



was (Author: tehkidnextdoor):
Thanks. The response was prompt and logical. However, our use case is of a 
slightly different nature.
Our use case:
We keep track of that TTL (the expiry date, in our case) and generate event(s) 
based on its value once the TTL is reached.

Could you recommend a solution for this?


> Is there a way to override the current MAX_TTL value from 20 yrs to a value > 
> 20 yrs.
> -
>
> Key: CASSANDRA-8405
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8405
> Project: Cassandra
>  Issue Type: Wish
>  Components: Core
> Environment: Linux(RH)
>Reporter: Parth Setya
>Priority: Blocker
>  Labels: MAX_TTL, date, expiration, ttl
>
> We are migrating data from Oracle to C*.
> The expiration date for a certain column was set to 90 years in Oracle.
> Here we are not able to make that value go beyond 20 years.
> Could you recommend a way to override this value?





[jira] [Commented] (CASSANDRA-8405) Is there a way to override the current MAX_TTL value from 20 yrs to a value > 20 yrs.

2014-12-03 Thread Parth Setya (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14233970#comment-14233970
 ] 

Parth Setya commented on CASSANDRA-8405:


Thanks. The response was prompt and logical. However, our use case is of a 
slightly different nature.
Our use case:
We keep track of that TTL (the expiry date, in our case) and generate event(s) 
based on its value once the TTL is reached.

Could you recommend a solution for this?


> Is there a way to override the current MAX_TTL value from 20 yrs to a value > 
> 20 yrs.
> -
>
> Key: CASSANDRA-8405
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8405
> Project: Cassandra
>  Issue Type: Wish
>  Components: Core
> Environment: Linux(RH)
>Reporter: Parth Setya
>Priority: Blocker
>  Labels: MAX_TTL, date, expiration, ttl
>
> We are migrating data from Oracle to C*.
> The expiration date for a certain column was set to 90 years in Oracle.
> Here we are not able to make that value go beyond 20 years.
> Could you recommend a way to override this value?





[jira] [Created] (CASSANDRA-8405) Is there a way to override the current MAX_TTL value from 20 yrs to a value > 20 yrs.

2014-12-02 Thread Parth Setya (JIRA)
Parth Setya created CASSANDRA-8405:
--

 Summary: Is there a way to override the current MAX_TTL value from 
20 yrs to a value > 20 yrs.
 Key: CASSANDRA-8405
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8405
 Project: Cassandra
  Issue Type: Wish
  Components: Core
 Environment: Linux(RH)
Reporter: Parth Setya
Priority: Blocker


We are migrating data from Oracle to C*.
The expiration date for a certain column was set to 90 years in Oracle.
Here we are not able to make that value go beyond 20 years.

Could you recommend a way to override this value?
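
For context, Cassandra caps per-column TTLs at 20 years (MAX_TTL = 630720000 seconds). A sketch of where the limit bites (keyspace, table, and columns are hypothetical):

```cql
-- Hypothetical table; TTL is specified in seconds.
-- 20 years = 20 * 365 * 24 * 3600 = 630720000 seconds, the MAX_TTL cap.
INSERT INTO my_keyspace.my_table (id, val)
  VALUES (1, 'x')
  USING TTL 630720000;   -- at the cap, so this should be accepted

-- A larger value, e.g. USING TTL 630720001, would be rejected with an
-- InvalidRequest error, which is the 20-year wall described above.
```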


