RE: TWCS on Non TTL Data

2021-09-14 Thread Isaeed Mohanna
My clustering column is the time series timestamp, so basically sourceId, metric type for the partition key and timestamp for the clustering key; the rest of the fields are just values outside of the primary key. Our read requests are simply: give me the values for a time range of a specific sourceId, metric
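A minimal CQL sketch of the schema described above (table and column names are assumptions, not taken from the thread):

```sql
-- Hypothetical schema: (source_id, metric) as the partition key,
-- the time series timestamp as the clustering key.
CREATE TABLE metrics_by_source (
    source_id  uuid,
    metric     text,
    ts         timestamp,
    value      double,
    PRIMARY KEY ((source_id, metric), ts)
) WITH CLUSTERING ORDER BY (ts DESC);

-- The read pattern described: values for a time range of one sourceId, metric.
SELECT ts, value
  FROM metrics_by_source
 WHERE source_id = ?
   AND metric = ?
   AND ts >= ? AND ts < ?;
```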

Re: COUNTER timeout

2021-09-14 Thread Erick Ramirez
The obvious conclusion is to say that the nodes can't keep up so it would be interesting to know how often you're issuing the counter updates. Also, how are the commit log disks performing on the nodes? If you have monitoring in place, check the IO stats/metrics. And finally, review the logs on

COUNTER timeout

2021-09-14 Thread Joe Obernberger
I'm getting a lot of the following errors during ingest of data: com.datastax.oss.driver.api.core.servererrors.WriteTimeoutException: Cassandra timeout during COUNTER write query at consistency ONE (1 replica were required but only 0 acknowledged the write)     at
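For context, a sketch of what a counter write like the one timing out looks like (table and column names are hypothetical). Counter updates are read-modify-write operations on the replica, which is why slow commit log or data disks tend to surface as counter write timeouts first:

```sql
-- Hypothetical counter table; a table may mix only counters
-- and primary key columns.
CREATE TABLE event_counts (
    event_id uuid PRIMARY KEY,
    total    counter
);

-- Counters can only be incremented or decremented. At consistency ONE,
-- at least 1 replica must acknowledge the write before the driver's
-- timeout, or a WriteTimeoutException is raised.
UPDATE event_counts SET total = total + 1 WHERE event_id = ?;
```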

Re: TWCS on Non TTL Data

2021-09-14 Thread Jeff Jirsa
Inline On Tue, Sep 14, 2021 at 11:47 AM Isaeed Mohanna wrote: > Hi Jeff > > My data is partitioned by a sourceId and metric, a source is usually > active up to a year after which there is no additional writes for the > partition, and reads become scarce, so although this is not an explicit >

RE: TWCS on Non TTL Data

2021-09-14 Thread Isaeed Mohanna
Hi Jeff, My data is partitioned by a sourceId and metric. A source is usually active up to a year, after which there are no additional writes for the partition and reads become scarce. So although there is no explicit time component, it's time-based; will that suffice? If I use a week bucket
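The "week bucket" being discussed would be configured as a 7-day TWCS window, sketched below against a hypothetical table name:

```sql
-- Sketch: switch an existing table to TimeWindowCompactionStrategy
-- with 1-week compaction windows (7-day buckets).
ALTER TABLE metrics_by_source
WITH compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'DAYS',
    'compaction_window_size': '7'
};
```

With this setting, SSTables written within the same 7-day window are compacted together and then left alone, which matches the access pattern described: writes stop after a source goes inactive, so old windows never need recompaction.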

Re: Change of Cassandra TTL

2021-09-14 Thread raman gugnani
Thanks Erick for the update. On Tue, 14 Sept 2021 at 16:50, Erick Ramirez wrote: > You'll need to write an ETL app (most common case is with Spark) to scan > through the existing data and update it with a new TTL. You'll need to make > sure that the ETL job is throttled down so it doesn't

Re: TWCS on Non TTL Data

2021-09-14 Thread Jeff Jirsa
On Tue, Sep 14, 2021 at 5:42 AM Isaeed Mohanna wrote: > Hi > > I have a table that stores time series data, the data is not TTLed since > we want to retain the data for the foreseeable future, and there are no > updates or deletes. (deletes could happen rarely in case some scrambled > data

TWCS on Non TTL Data

2021-09-14 Thread Isaeed Mohanna
Hi, I have a table that stores time series data. The data is not TTLed since we want to retain it for the foreseeable future, and there are no updates or deletes. (Deletes could happen rarely in case some scrambled data reached the table, but it's extremely rare.) Usually we do constant

Re: Change of Cassandra TTL

2021-09-14 Thread Erick Ramirez
You'll need to write an ETL app (most common case is with Spark) to scan through the existing data and update it with a new TTL. You'll need to make sure that the ETL job is throttled down so it doesn't overload your production cluster. Cheers! >

Change of Cassandra TTL

2021-09-14 Thread raman gugnani
Hi all, 1. I have a table with default_time_to_live = 31536000 (1 year). We want to reduce the value to 7884000 (3 months). If we alter the table, is there a way to update the existing data? 2. I have a table without TTL; we want to add TTL = 7884000 (3 months) on the table. If we alter
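A sketch of the two cases, assuming a hypothetical table name. The key point (and the reason an ETL rewrite comes up in the replies) is that changing the table default only affects data written afterwards:

```sql
-- Case 1 and 2: change (or add) the table-level default TTL.
-- This applies only to rows written after the ALTER.
ALTER TABLE my_table WITH default_time_to_live = 7884000;  -- 3 months

-- Existing rows keep the TTL they were written with (or none).
-- To change them, each row must be rewritten with an explicit TTL,
-- e.g. per-row updates driven by an external job:
UPDATE my_table USING TTL 7884000
   SET value = ?          -- re-write the current value
 WHERE source_id = ? AND ts = ?;  -- hypothetical primary key
```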