Hi Jeremiah,
Validation is done on the TTL value alone, not on (system_time + TTL). You can
test it with the example below. The first insert succeeds, the overflow happens
silently, and the data is lost:
create table test(name text primary key,age int);
insert into test(name,age) values('test_20yrs',30) USING TTL 630720000;
select * from test where name='test_20yrs';

 name | age
------+-----

(0 rows)

A second insert with a TTL just one second larger is rejected up front:

insert into test(name,age) values('test_20yr_plus_1',30) USING TTL 630720001;
InvalidRequest: Error from server: code=2200 [Invalid query]
message="ttl is too large. requested (630720001) maximum (630720000)"
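
The silent loss in the first example comes from signed 32-bit arithmetic: per
the JIRA, the cell's local deletion time is stored as a signed 32-bit epoch
value, so system_time + TTL can wrap negative and the cell looks already
expired. A minimal Python sketch of that wraparound (the timestamp is an
approximation of when this thread was written; the helper name is illustrative,
not Cassandra's actual code):

```python
import struct

def local_expiration_time(now_in_sec: int, ttl: int) -> int:
    """Mimic the overflow: the sum wraps around as a signed 32-bit int."""
    raw = now_in_sec + ttl
    # Emulate Java int overflow: truncate to 32 bits, reinterpret as signed.
    return struct.unpack('<i', struct.pack('<I', raw & 0xFFFFFFFF))[0]

now = 1516924800                    # approx. epoch seconds on 2018-01-26
max_ttl = 20 * 365 * 24 * 60 * 60   # 630720000, the maximum TTL Cassandra accepts

exp = local_expiration_time(now, max_ttl)
print(exp)        # negative: the expiration appears to be in the past
print(exp < now)  # True: the cell is treated as already expired
```

Because only the TTL is checked against the 630720000 cap, the wrapped sum is
never validated, which is why the write is acknowledged and then disappears.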
Thanks
Anuj
On Friday 26 January 2018, 12:11:03 AM IST, J. D. Jordan
<jeremiah.jor...@gmail.com> wrote:
Where is the data loss? Does the INSERT operation return successfully to the
client in this case? From reading the linked issues it sounds like you get an
error client side.

-Jeremiah

> On Jan 25, 2018, at 1:24 PM, Anuj Wadehra <anujw_2...@yahoo.co.in.INVALID> 
> wrote:
> 
> Hi,
> 
> For everyone who uses the maximum TTL of 20 years when inserting/updating data
> in production, https://issues.apache.org/jira/browse/CASSANDRA-14092 can
> silently cause irrecoverable data loss. This looks like a blocker to me, and I
> think the priority of the JIRA should be raised from Major to Blocker.
> Unfortunately, the JIRA is still unassigned and no one seems to be actively
> working on it. Like any other critical vulnerability, this one demands
> immediate attention from experienced contributors so that a fast-tracked patch
> can be produced for all currently supported Cassandra versions: 2.1, 2.2, and
> 3.x. As I understand the JIRA comments, the changes may not be trivial for the
> older releases, so community support on the patch is very much appreciated.
> 
> Thanks
> Anuj

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@cassandra.apache.org
For additional commands, e-mail: dev-h...@cassandra.apache.org
  
