> The long-term solution would be to work with a long instead of an int. The
> serialized form seems to be a variable-length int already, so that should
> be fine.

Agreed, but apparently it also needs a new sstable format, as mentioned on
CASSANDRA-14092.
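
For anyone following along, here is a minimal sketch of the overflow itself
(the names are illustrative, not the actual Cassandra internals): the local
deletion time is kept as seconds since the epoch in a signed 32-bit int, so
any now + TTL past 2^31 - 1 (2038-01-19 03:14:07 UTC, hence the 03:14
mentioned below) wraps negative:

    public class TtlOverflowDemo
    {
        public static void main(String[] args)
        {
            int nowInSec = 1516838400; // 2018-01-25 00:00:00 UTC
            int ttl = 630720000;       // 20 years (20 * 365 * 86400 seconds)

            // int arithmetic silently wraps: 1516838400 + 630720000 =
            // 2147558400, which is past Integer.MAX_VALUE (2147483647),
            // so the result comes out as -2147408896.
            int localDeletionTime = nowInSec + ttl;
            System.out.println(localDeletionTime);

            // Widening to long is the long-term fix: no wrap, 2147558400.
            long safeDeletionTime = (long) nowInSec + ttl;
            System.out.println(safeDeletionTime);
        }
    }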

> If you change the assertion to 15 years, then applications might fail, as
> they might be setting a 15+ year TTL.

This is an emergency measure while we work on a longer-term fix. Any
application using a TTL of ~20 years will need to lower the TTL anyway to
prevent data loss.
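
As a strawman, that emergency measure could look something like the sketch
below (hypothetical names, not the actual patch): cap the max TTL at ~15
years and fail any write whose expiration would overflow the int, instead of
letting it wrap silently:

    import java.time.Instant;

    public final class TtlGuard
    {
        // Interim cap discussed in this thread: ~15 years, in seconds.
        private static final int MAX_TTL_SECONDS = 15 * 365 * 86400;

        // Fail the write instead of letting the expiration wrap, mirroring
        // the "ttl is too large" InvalidRequest error quoted below.
        public static void validateTtl(int ttlSeconds)
        {
            if (ttlSeconds > MAX_TTL_SECONDS)
                throw new IllegalArgumentException(String.format(
                    "ttl is too large. requested (%d) maximum (%d)",
                    ttlSeconds, MAX_TTL_SECONDS));

            long expiresAt = Instant.now().getEpochSecond() + (long) ttlSeconds;
            if (expiresAt > Integer.MAX_VALUE)
                throw new IllegalArgumentException(
                    "local deletion time would overflow 2038-01-19 03:14:07 UTC");
        }
    }

With a guard like this, the 20-year insert quoted below would be rejected up
front instead of expiring silently.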

2018-01-25 18:40 GMT-02:00 Brandon Williams <dri...@gmail.com>:
> My guess is they don't know how to NOT set a TTL (perhaps with a default in
> the schema), so they chose the max value.  Someone else's problem by then.
>
> On Thu, Jan 25, 2018 at 2:38 PM, Michael Kjellman <kjell...@apple.com>
> wrote:
>
>> Why are people inserting data with a 15+ year TTL? Sorta curious about the
>> actual use case for that.
>>
>> > On Jan 25, 2018, at 12:36 PM, horschi <hors...@gmail.com> wrote:
>> >
>> > The assertion was working fine until yesterday 03:14 UTC.
>> >
>> > The long-term solution would be to work with a long instead of an int.
>> > The serialized form seems to be a variable-length int already, so that
>> > should be fine.
>> >
>> > If you change the assertion to 15 years, then applications might fail, as
>> > they might be setting a 15+ year TTL.
>> >
>> > regards,
>> > Christian
>> >
>> > On Thu, Jan 25, 2018 at 9:19 PM, Paulo Motta <pauloricard...@gmail.com>
>> > wrote:
>> >
>> >> Thanks for raising this. Agreed this is bad: when I filed
>> >> CASSANDRA-14092 I thought a write would fail when localDeletionTime
>> >> overflows (as it does with 2.1), but that doesn't seem to be the case
>> >> on 3.0+.
>> >>
>> >> I propose adding the assertion back so writes will fail, and reducing
>> >> the max TTL to something like 15 years for the time being while we
>> >> figure out a long-term solution.
>> >>
>> >> 2018-01-25 18:05 GMT-02:00 Jeremiah D Jordan <jeremiah.jor...@gmail.com>:
>> >>> If you aren’t getting an error, then I agree, that is very bad. Looking
>> >>> at the 3.0 code it looks like the assertion checking for overflow was
>> >>> dropped somewhere along the way; I had only been looking into 2.1, where
>> >>> you get an assertion error that fails the query.
>> >>>
>> >>> -Jeremiah
>> >>>
>> >>>> On Jan 25, 2018, at 2:21 PM, Anuj Wadehra <anujw_2...@yahoo.co.in.INVALID> wrote:
>> >>>>
>> >>>>
>> >>>> Hi Jeremiah,
>> >>>> Validation is on the TTL value, not on (system_time + TTL). You can
>> >>>> test it with the example below. The insert is successful, the overflow
>> >>>> happens silently, and data is lost:
>> >>>> create table test(name text primary key, age int);
>> >>>> insert into test(name, age) values('test_20yrs', 30) USING TTL 630720000;
>> >>>> select * from test where name='test_20yrs';
>> >>>>
>> >>>>  name | age
>> >>>> ------+-----
>> >>>>
>> >>>> (0 rows)
>> >>>>
>> >>>> insert into test(name, age) values('test_20yr_plus_1', 30) USING TTL 630720001;
>> >>>> InvalidRequest: Error from server: code=2200 [Invalid query]
>> >>>> message="ttl is too large. requested (630720001) maximum (630720000)"
>> >>>> Thanks
>> >>>> Anuj
>> >>>> On Friday 26 January 2018, 12:11:03 AM IST, J. D. Jordan <jeremiah.jor...@gmail.com> wrote:
>> >>>>
>> >>>> Where is the data loss? Does the INSERT operation return successfully
>> >>>> to the client in this case? From reading the linked issues it sounds
>> >>>> like you get an error client-side.
>> >>>>
>> >>>> -Jeremiah
>> >>>>
>> >>>>> On Jan 25, 2018, at 1:24 PM, Anuj Wadehra <anujw_2...@yahoo.co.in.INVALID> wrote:
>> >>>>>
>> >>>>> Hi,
>> >>>>>
>> >>>>> For all those people who use a max TTL of 20 years for
>> >>>>> inserting/updating data in production,
>> >>>>> https://issues.apache.org/jira/browse/CASSANDRA-14092 can silently
>> >>>>> cause irrecoverable data loss. This seems like a TOPMOST BLOCKER to
>> >>>>> me. I think the priority of the JIRA must be raised to Blocker from
>> >>>>> Major. Unfortunately, the JIRA is still "Unassigned" and no one seems
>> >>>>> to be actively working on it. Just like any other critical
>> >>>>> vulnerability, this one demands immediate attention from some very
>> >>>>> experienced folks to bring out an urgent fast-track patch for all
>> >>>>> currently supported Cassandra versions: 2.1, 2.2, and 3.x. As per my
>> >>>>> understanding of the JIRA comments, the changes may not be that
>> >>>>> trivial for older releases, so community support on the patch is very
>> >>>>> much appreciated.
>> >>>>>
>> >>>>> Thanks
>> >>>>> Anuj

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@cassandra.apache.org
For additional commands, e-mail: dev-h...@cassandra.apache.org
