[ https://issues.apache.org/jira/browse/CASSANDRA-10253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14732213#comment-14732213 ]

vijay commented on CASSANDRA-10253:
-----------------------------------

Also, compactions never stop even with no data ingestion. Below are results 
from a period of three days with no ingestion and no CRUD operations:

{code}
nodetool compactionstats
pending tasks: 4617
   compaction type   keyspace       table    completed        total    unit   progress
        Compaction       upc1   alarmnote    419486529    421927219   bytes     99.42%
        Compaction       upc1   alarmnote    262150486    657730463   bytes     39.86%
        Compaction       upc1   alarmnote     52429308    329877089   bytes     15.89%
        Compaction       upc1   alarmnote   1149647113   3655964819   bytes     31.45%
Active compaction remaining time :   0h00m47s
{code}

{code}
nodetool compactionstats
pending tasks: 14068
   compaction type   keyspace       table   completed        total    unit   progress
        Compaction       upc1   alarmnote   104863849    516187168   bytes     20.32%
        Compaction       upc1   alarmnote   576771960   3541850604   bytes     16.28%
        Compaction       upc1   alarmnote   209717447    542218900   bytes     38.68%
Active compaction remaining time :   0h00m55s
{code}
I have seen pending tasks climb to 300,000 at times.
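
If anyone needs to pause the churn while investigating, a minimal sketch 
using standard nodetool subcommands (keyspace/table names taken from the 
output above; note that disableautocompaction does not survive a node 
restart):

{code}
# Stop compactions that are currently running
nodetool stop COMPACTION

# Prevent new automatic compactions on the affected table
nodetool disableautocompaction upc1 alarmnote

# Re-enable automatic compaction when done
nodetool enableautocompaction upc1 alarmnote
{code}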

> Incremental repairs not working as expected with DTCS
> -----------------------------------------------------
>
>                 Key: CASSANDRA-10253
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10253
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: Pre-prod
>            Reporter: vijay
>            Assignee: Marcus Eriksson
>              Labels: dtcs
>             Fix For: 2.1.x
>
>         Attachments: sstablemetadata-cluster-logs.zip, systemfiles 2.zip
>
>
> Hi,
> We are ingesting 6 million records every 15 minutes into one DTCS table 
> and relying on Cassandra's TTL expiry to purge the data. The table schema 
> is given below.
> Issue 1: we expected an SSTable created on day d1 to no longer be 
> compacted after d1 (max_sstable_age_days is 1), but we are not seeing 
> this; however, I do see some data being purged at random intervals.
> Issue 2: when we run an incremental repair with "nodetool repair keyspace 
> table -inc -pr", each SSTable is split into multiple smaller SSTables, 
> increasing total storage. The behavior is the same no matter which node 
> the repair runs on or how many times it is run (a sketch for checking the 
> repairedAt markers follows the schema).
> There are mutation drops in the cluster.
> Table:
> {code}
> CREATE TABLE TableA (
>     F1 text,
>     F2 int,
>     createts bigint,
>     stats blob,
>     PRIMARY KEY ((F1,F2), createts)
> ) WITH CLUSTERING ORDER BY (createts DESC)
>     AND bloom_filter_fp_chance = 0.01
>     AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
>     AND comment = ''
>     AND compaction = {'min_threshold': '12', 'max_sstable_age_days': '1', 'base_time_seconds': '50', 'class': 'org.apache.cassandra.db.compaction.DateTieredCompactionStrategy'}
>     AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
>     AND dclocal_read_repair_chance = 0.0
>     AND default_time_to_live = 93600
>     AND gc_grace_seconds = 3600
>     AND max_index_interval = 2048
>     AND memtable_flush_period_in_ms = 0
>     AND min_index_interval = 128
>     AND read_repair_chance = 0.0
>     AND speculative_retry = '99.0PERCENTILE';
> {code}
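> To verify whether the smaller SSTables produced by the incremental repair 
> carry a repairedAt marker, a minimal sketch using the sstablemetadata tool 
> bundled with Cassandra (the data path assumes a default install, and the 
> keyspace/table names are taken from the compactionstats output above; 
> adjust for your layout):
> {code}
> # Show the repairedAt marker for every sstable of the table;
> # unrepaired sstables report "Repaired at: 0"
> for f in /var/lib/cassandra/data/upc1/alarmnote/*-Data.db; do
>     echo "$f"
>     sstablemetadata "$f" | grep "Repaired at"
> done
> {code}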
> Thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
