[ https://issues.apache.org/jira/browse/CASSANDRA-7808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14123239#comment-14123239 ]

Mikhail Panchenko commented on CASSANDRA-7808:
----------------------------------------------

Until 1.2.19 is out, is there a workaround for this? Is it safe to just leave 
it? We're seeing a very steady trickle of these on our errors graph from one 
of the nodes in a 1.2.16 cluster:
{noformat}
ERROR [CompactionExecutor:68695] 2014-09-05 16:35:17,875 CassandraDaemon.java (line 191) Exception in thread Thread[CompactionExecutor:68695,1,main]
java.lang.AssertionError: originally calculated column size of 92870522 but now it is 92870540
        at org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:135)
        at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:160)
        at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:162)
        at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
        at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:58)
        at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
        at org.apache.cassandra.db.compaction.CompactionManager$7.runMayThrow(CompactionManager.java:442)
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
ERROR [BatchlogTasks:1] 2014-09-05 16:35:17,875 CassandraDaemon.java (line 191) Exception in thread Thread[BatchlogTasks:1,5,main]
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.AssertionError: originally calculated column size of 92870522 but now it is 92870540
        at com.google.common.base.Throwables.propagate(Throwables.java:160)
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
        at org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:75)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: java.lang.AssertionError: originally calculated column size of 92870522 but now it is 92870540
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:188)
        at org.apache.cassandra.db.BatchlogManager.cleanup(BatchlogManager.java:353)
        at org.apache.cassandra.db.BatchlogManager.replayAllFailedBatches(BatchlogManager.java:201)
        at org.apache.cassandra.db.BatchlogManager$1.runMayThrow(BatchlogManager.java:98)
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
        ... 8 more
Caused by: java.lang.AssertionError: originally calculated column size of 92870522 but now it is 92870540
        at org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:135)
        at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:160)
        at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:162)
        at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
        at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:58)
        at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
        at org.apache.cassandra.db.compaction.CompactionManager$7.runMayThrow(CompactionManager.java:442)
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        ... 3 more
{noformat}

> LazilyCompactedRow incorrectly handles row tombstones
> -----------------------------------------------------
>
>                 Key: CASSANDRA-7808
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-7808
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: Richard Low
>            Assignee: Richard Low
>             Fix For: 1.2.19, 2.0.11, 2.1.0
>
>         Attachments: 7808-v1.diff
>
>
> LazilyCompactedRow doesn’t handle row tombstones correctly, leading to an 
> AssertionError (CASSANDRA-4206) in some cases, and the row tombstone being 
> incorrectly dropped in others. It looks like this was introduced by 
> CASSANDRA-5677.
> To reproduce an AssertionError (a transcript sketch follows the steps):
> 1. Hack DatabaseDescriptor.getInMemoryCompactionLimit() to return a really 
> small value (e.g. 10 bytes) to force large row compaction
> 2. Create a column family with gc_grace = 10
> 3. Insert a few columns in one row
> 4. Call nodetool flush
> 5. Delete the row
> 6. Call nodetool flush
> 7. Wait 10 seconds
> 8. Call nodetool compact and it will fail
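> A minimal, untested sketch of steps 2-8 using cqlsh and nodetool (step 1 
> still has to be hacked into the source build; the keyspace/table names here 
> are illustrative):
> {noformat}
> cqlsh> CREATE KEYSPACE ks WITH replication =
>            {'class': 'SimpleStrategy', 'replication_factor': 1};
> cqlsh> CREATE TABLE ks.t (key text, col text, val text,
>            PRIMARY KEY (key, col)) WITH gc_grace_seconds = 10;
> cqlsh> INSERT INTO ks.t (key, col, val) VALUES ('row1', 'b', 'x');
> cqlsh> INSERT INTO ks.t (key, col, val) VALUES ('row1', 'c', 'x');
> cqlsh> INSERT INTO ks.t (key, col, val) VALUES ('row1', 'd', 'x');
> $ nodetool flush ks t
> cqlsh> DELETE FROM ks.t WHERE key = 'row1';
> $ nodetool flush ks t
> $ sleep 11                # let gc_grace (10s) expire
> $ nodetool compact ks t   # fails with the AssertionError above
> {noformat}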
> To reproduce the row tombstone being dropped, do the same except, after the 
> delete (in step 5), insert a column that sorts before the ones you inserted 
> in step 3. E.g. if you inserted b, c, d in step 3, insert a now. After the 
> compaction, which now succeeds, the full row will be visible, rather than 
> just a.
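> The tombstone-dropping variant is the same transcript with one extra insert 
> between the DELETE and the second flush:
> {noformat}
> cqlsh> INSERT INTO ks.t (key, col, val) VALUES ('row1', 'a', 'x');
> {noformat}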
> The problem is twofold. Firstly, LazilyCompactedRow.Reducer.reduce() and 
> getReduce() incorrectly call container.clear(). This clears the columns (as 
> intended) but also removes the deletion times from the container, so no 
> further columns annihilated by the row tombstone are deleted.
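> As a toy illustration of that first problem (the class and fields here are 
> simplified stand-ins, not the real 1.2 API):
> {noformat}
> import java.util.ArrayList;
> import java.util.List;
>
> public class ContainerClearBug {
>     static class Container {
>         final List<String> columns = new ArrayList<String>();
>         long rowTombstoneAt = Long.MIN_VALUE; // row-level deletion time
>
>         // Buggy pattern: clearing between columns also wipes the
>         // deletion time, so the row tombstone is forgotten.
>         void clear() {
>             columns.clear();
>             rowTombstoneAt = Long.MIN_VALUE;
>         }
>
>         // What reduce()/getReduce() actually need: drop the columns
>         // but keep the deletion info.
>         void clearColumnsOnly() {
>             columns.clear();
>         }
>
>         boolean shadows(long columnTimestamp) {
>             return columnTimestamp <= rowTombstoneAt;
>         }
>     }
>
>     public static void main(String[] args) {
>         Container c = new Container();
>         c.rowTombstoneAt = 100;            // row deleted at ts=100
>         System.out.println(c.shadows(50)); // true: column annihilated
>         c.clear();                         // buggy reset between columns
>         System.out.println(c.shadows(50)); // false: later columns survive
>     }
> }
> {noformat}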
> Secondly, after the first pass, LazilyCompactedRow.isEmpty() is called, which 
> calls
> {{ColumnFamilyStore.removeDeletedCF(emptyColumnFamily, controller.gcBefore(key.getToken()))}},
> which unfortunately removes the last delete time from emptyColumnFamily if 
> it is earlier than gcBefore. Since this is called after the first pass but 
> before the second, the second pass doesn't remove any columns annihilated 
> by the row tombstone, whereas the first pass removes just the first one.
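> A toy model of how the two passes then disagree (illustrative names and 
> made-up sizes, not the real code):
> {noformat}
> public class TwoPassMismatch {
>     public static void main(String[] args) {
>         int[] columnSizes = { 10, 20, 30 }; // all shadowed by the row tombstone
>
>         // First pass: because of the container.clear() bug, only the
>         // first shadowed column is annihilated when sizing the row.
>         long pass1 = 0;
>         for (int i = 1; i < columnSizes.length; i++)
>             pass1 += columnSizes[i];
>
>         // Between the passes, isEmpty() -> removeDeletedCF() purges the
>         // expired row tombstone, so the second pass annihilates nothing:
>         long pass2 = 0;
>         for (int s : columnSizes)
>             pass2 += s;
>
>         // ...and the size check in LazilyCompactedRow.write() trips,
>         // just like the assertion in the log above:
>         if (pass1 != pass2)
>             throw new AssertionError("originally calculated column size of "
>                                      + pass1 + " but now it is " + pass2);
>     }
> }
> {noformat}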
> This is pretty serious - no large rows can ever be compacted and row 
> tombstones can go missing.


