Paulo Motta commented on CASSANDRA-14092:

Thanks for the review, this took a bit longer than expected as I found an edge 
case which required a change from the previous approach, as I describe below. 
First, the follow-ups from the previous round of review:
{quote}In the 3.0+ branches, 
ExpirationDateOverflowHandling::maybeApplyExpirationDateOverflowPolicy can use 
Cell.NO_TTL rather than 0 in the first check.{quote}
Agreed, fixed.
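To make the intent concrete, here is a minimal sketch of the kind of guard being discussed; the class, method, and constant names are simplified stand-ins, not Cassandra's actual API.

```java
// Illustrative sketch of comparing a TTL against a named sentinel (as in
// Cell.NO_TTL) instead of the literal 0. Names are hypothetical stand-ins.
public class OverflowPolicySketch
{
    static final int NO_TTL = 0; // sentinel meaning "no expiration"

    /** Returns true when an expiration-overflow policy would need to be applied. */
    static boolean needsOverflowCheck(int ttlSeconds, long nowSeconds)
    {
        if (ttlSeconds == NO_TTL)              // the named sentinel, not a bare 0
            return false;
        // Expiration overflows if now + ttl no longer fits a 32-bit int of seconds.
        return nowSeconds + ttlSeconds > Integer.MAX_VALUE;
    }

    public static void main(String[] args)
    {
        assert !needsOverflowCheck(NO_TTL, 1_517_000_000L);     // no TTL, nothing to do
        assert needsOverflowCheck(630_720_000, 1_517_000_000L); // 20-year TTL in early 2018
        assert !needsOverflowCheck(3600, 1_517_000_000L);       // 1-hour TTL is fine
        System.out.println("ok");
    }
}
```

Using the sentinel name rather than `0` makes the special case self-documenting at the call site.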
{quote}In 3.0+ you renamed the static policy field to have a shorter name, but 
missed that in the 2.1 & 2.2 branches.{quote}
Good catch, fixed.
{quote}In 2.1 I saw some (very) intermittent test failures in TTLTest. I 
instrumented checkTTLIsCapped to print out the (min | actual | max) TTLs to 
sysout and eventually managed to repro it. You can see from the output that in 
the first line, the min and actual are actually > the max, which caused the 
test to fail (this happened around 10% of the time).{quote}
Oh, that's funny! Perhaps it's platform-related, as I cannot reproduce this 
locally. In any case, I updated the check to be more deterministic.
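For illustration, a deterministic variant of such a check can bound the expected {{localDeletionTime}} using timestamps captured before and after the write, so clock movement between the write and the check cannot make min/actual exceed max. The names below are hypothetical, not the actual TTLTest code.

```java
// Hypothetical sketch: bounding localDeletionTime deterministically in a TTL test.
public class TtlBoundsCheck
{
    static final int TTL_SECONDS = 100;

    public static boolean isCapped(int localDeletionTime, long insertStartMillis, long insertEndMillis)
    {
        // Bounds come from timestamps captured before and after the write,
        // so the check does not race against the wall clock.
        int min = (int) (insertStartMillis / 1000) + TTL_SECONDS;
        int max = (int) (insertEndMillis / 1000) + TTL_SECONDS;
        return min <= localDeletionTime && localDeletionTime <= max;
    }

    public static void main(String[] args)
    {
        long start = 1_500_000_000_000L; // fixed millis for a reproducible example
        long end   = start + 2_000;      // write finished 2s later
        assert isCapped(1_500_000_101, start, end);  // inside [min, max]
        assert !isCapped(1_500_000_099, start, end); // below min
        System.out.println("ok");
    }
}
```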

I also noticed that we weren't applying the expiration overflow policy in the 
CQLv2 interface, so I updated it as well. I find it hard to believe anyone is 
still using this, but I wouldn't be totally surprised.
{quote}I think we should have the scrub fix in 2.1 & 2.2 as some users have not 
yet moved to 3.x and they should probably get the chance to repair (maybe) 
their data if they want/are able to.{quote}
Agreed. As noted in NEWS.txt, 2.1/2.2 are only affected if users have 
assertions disabled, but I wouldn't be surprised if some users running 2.1 with 
assertions disabled hit this.

While writing the 2.1 scrubber (which is slightly different from 3.0 due to 
CASSANDRA-8099), I found a nasty edge case with the recovery process. The 
problem is that, if the cell with a negative/overflowed {{localExpirationTime}} 
is converted to a tombstone, we can't convert the tombstone back into an 
expired cell, because the TTL value is lost and we can't differentiate it from 
an ordinary tombstone. 
Furthermore, even if the original SSTable is scrubbed and the negative 
{{localExpirationTime}} is converted to {{MAX_DELETION_TIME}}, if the tombstone 
was already generated, it will shadow/delete the fixed expired cell, because 
deleted cells have precedence over live cells when the timestamp is the same.

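As a rough illustration of that precedence rule, here is a sketch under the assumption that a deletion wins a timestamp tie; this is an illustration, not Cassandra's actual {{Cells.reconcile}}.

```java
// Minimal sketch of cell reconciliation assuming the convention that, on a
// timestamp tie, a deletion beats a live cell. Illustrative only.
public class ReconcileSketch
{
    static final class Cell
    {
        final long timestamp;
        final boolean isTombstone;
        Cell(long timestamp, boolean isTombstone) { this.timestamp = timestamp; this.isTombstone = isTombstone; }
    }

    static Cell reconcile(Cell a, Cell b)
    {
        if (a.timestamp != b.timestamp)
            return a.timestamp > b.timestamp ? a : b;
        // Equal timestamps: the deletion wins, which is why a previously
        // generated tombstone shadows a recovered cell with the same timestamp.
        if (a.isTombstone != b.isTombstone)
            return a.isTombstone ? a : b;
        return a;
    }

    public static void main(String[] args)
    {
        Cell recovered = new Cell(10L, false);
        Cell tombstone = new Cell(10L, true);
        assert reconcile(recovered, tombstone).isTombstone; // tie: tombstone shadows the cell
        Cell bumped = new Cell(11L, false);
        assert !reconcile(bumped, tombstone).isTombstone;   // higher timestamp wins
        System.out.println("ok");
    }
}
```

This is what motivates bumping the recovered cell's timestamp rather than only fixing its expiration.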
I updated {{recover_negative_expiration_date_sstables_with_scrub}} to add an 
SSTable with the expired cell converted to a tombstone, and verified that an 
existing tombstone shadows the recovered data with the current approach.

In order to fix this, the idea is to have scrub increment the timestamp of 
cells with a negative {{localExpirationTime}} by one, in addition to setting 
the {{localExpirationTime}} to {{MAX_DELETION_TIME}}, so the recovered cell 
will shadow/supersede any tombstone that was already written.

Since this does not merely recreate the row but technically reinserts it with a 
higher timestamp, we cannot do this automatically on scrub: it could overwrite 
an existing row inserted just after the original one (while very hard to 
happen in practice, this may depend on application logic). For this reason, the 
user needs to specify the {{--reinsert-overflowed-ttl}} option during scrub to 
perform the recovery.
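A minimal sketch of the recovery rule described above, with illustrative field and constant names rather than Cassandra's actual scrubber internals:

```java
// Hedged sketch of the --reinsert-overflowed-ttl recovery idea: a cell whose
// localExpirationTime overflowed negative is rewritten with a capped expiration
// and a timestamp bumped by one, so it supersedes any tombstone the overflow
// may already have produced. Names are illustrative, not Cassandra internals.
public class ScrubRecoverySketch
{
    static final int MAX_DELETION_TIME = Integer.MAX_VALUE - 1; // assumed cap value

    static final class CellData
    {
        final long timestamp;
        final int localExpirationTime;
        CellData(long timestamp, int localExpirationTime)
        {
            this.timestamp = timestamp;
            this.localExpirationTime = localExpirationTime;
        }
    }

    static CellData reinsertIfOverflowed(CellData cell)
    {
        if (cell.localExpirationTime < 0) // overflowed into a negative int
            return new CellData(cell.timestamp + 1, MAX_DELETION_TIME);
        return cell;
    }

    public static void main(String[] args)
    {
        CellData overflowed = new CellData(100L, Integer.MIN_VALUE);
        CellData fixed = reinsertIfOverflowed(overflowed);
        assert fixed.localExpirationTime == MAX_DELETION_TIME;
        assert fixed.timestamp == 101L; // bumped to shadow a pre-existing tombstone
        System.out.println("ok");
    }
}
```

The timestamp bump is exactly what makes the recovery unsafe to apply silently, hence the explicit opt-in flag.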

This approach is implemented on 3.0 and ported to all other branches in a 
separate commit.
{quote}The last thing is about providing a route to fix up overflowed dates via 
scrub, I think we should definitely leave the remedial Scrubber code in trunk 
until we have a proper fix committed.{quote}
Since 4.0 has not been released and will ship with this fix, it's impossible 
for someone to hit this on 4.0, but I don't see any problem in keeping the 
scrub option for the lazy ones upgrading from 3.x without fixing their SSTables 
beforehand. :) For this reason I kept the 3.11 SSTables in the 4.X (d)tests.

To prevent scrub and the tests from throwing an {{AssertionError}} on 2.1, I 
removed this assertion, since it's no longer necessary given that we apply the 
overflow policy.

Finally, I refactored the NEWS.txt section a bit to take [~KurtG]'s comments 
into account and also to give more emphasis to the expiration overflow policies, 
since this will be what most users need to care about, and added a recovery 
subsection with detailed recovery instructions.

I now wonder if this notice has gotten too big and is cluttering the NEWS.txt 
file too much (which may confuse users looking for upgrade instructions). 
Should we maybe create a new WARNING/IMPORTANT file and add a pointer to it in 
the NEWS.txt file instead?

To facilitate review, I squashed the previous commits and created a new branch 
whose commits represent only the changes since the last review (except for the 
2.2 patch, which is basically the same as 2.1 plus an additional commit 
simplifying TTLTest). The previous CI looked good; I will submit a new CI run 
and post the results here later.

> Max ttl of 20 years will overflow localDeletionTime
> ---------------------------------------------------
>                 Key: CASSANDRA-14092
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-14092
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: Paulo Motta
>            Assignee: Paulo Motta
>            Priority: Blocker
>             Fix For: 2.1.20, 2.2.12, 3.0.16, 3.11.2
> CASSANDRA-4771 added a max value of 20 years for ttl to protect against [year 
> 2038 overflow bug|https://en.wikipedia.org/wiki/Year_2038_problem] for 
> {{localDeletionTime}}.
> It turns out that next year the {{localDeletionTime}} will start overflowing 
> with the maximum ttl of 20 years ({{System.currentTimeMillis() + ttl(20 
> years) > Integer.MAX_VALUE}}), so we should remove this limitation.
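For reference, the overflow described in the issue can be demonstrated with plain Java integer arithmetic; the 20-year constant matches the cap described in CASSANDRA-4771, and the class itself is purely illustrative.

```java
// Demonstration of the overflow: localDeletionTime is a 32-bit int of seconds,
// so "now + 20-year TTL" exceeds Integer.MAX_VALUE (2038-01-19) once "now"
// passes early 2018, and the int cast wraps negative.
public class TtlOverflowDemo
{
    static final int MAX_TTL_SECONDS = 20 * 365 * 24 * 60 * 60; // 20-year cap (CASSANDRA-4771)

    public static void main(String[] args)
    {
        long nowSeconds = 1_517_000_000L;            // late January 2018
        long expiresAt = nowSeconds + MAX_TTL_SECONDS;
        assert expiresAt > Integer.MAX_VALUE;        // the long value is fine...
        int localDeletionTime = (int) expiresAt;     // ...but the int cast wraps
        assert localDeletionTime < 0;                // negative localExpirationTime
        System.out.println("overflow: " + localDeletionTime);
    }
}
```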
