[
https://issues.apache.org/jira/browse/CASSANDRA-7808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14259490#comment-14259490
]
Fahd Siddiqui edited comment on CASSANDRA-7808 at 12/27/14 10:27 PM:
---------------------------------------------------------------------
We are on Cassandra 1.2.19, but still seeing this error in Hints column family:
{code}
ERROR [HintedHandoff:3] 2014-12-27 20:53:16,742 CassandraDaemon.java (line 191)
Exception in thread Thread[HintedHandoff:3,1,main]
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.AssertionError: originally calculated column size of 5324 but now it is 372186
	at org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:429)
	at org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:280)
	at org.apache.cassandra.db.HintedHandOffManager.access$300(HintedHandOffManager.java:88)
	at org.apache.cassandra.db.HintedHandOffManager$4.run(HintedHandOffManager.java:495)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: java.lang.AssertionError: originally calculated column size of 5324 but now it is 372186
	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.util.concurrent.FutureTask.get(FutureTask.java:188)
	at org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:425)
	... 6 more
Caused by: java.lang.AssertionError: originally calculated column size of 5324 but now it is 372186
	at org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:135)
	at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:160)
	at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:162)
	at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
	at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:58)
	at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
	at org.apache.cassandra.db.compaction.CompactionManager$7.runMayThrow(CompactionManager.java:442)
	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
{code}
> LazilyCompactedRow incorrectly handles row tombstones
> -----------------------------------------------------
>
> Key: CASSANDRA-7808
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7808
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Reporter: Richard Low
> Assignee: Richard Low
> Fix For: 1.2.19, 2.0.11, 2.1.0
>
> Attachments: 7808-v1.diff
>
>
> LazilyCompactedRow doesn’t handle row tombstones correctly, leading to an
> AssertionError (CASSANDRA-4206) in some cases, and the row tombstone being
> incorrectly dropped in others. It looks like this was introduced by
> CASSANDRA-5677.
> To reproduce an AssertionError:
> 1. Hack a really small return value for
> DatabaseDescriptor.getInMemoryCompactionLimit() like 10 bytes to force large
> row compaction
> 2. Create a column family with gc_grace = 10
> 3. Insert a few columns in one row
> 4. Call nodetool flush
> 5. Delete the row
> 6. Call nodetool flush
> 7. Wait 10 seconds
> 8. Call nodetool compact and it will fail
> To reproduce the row tombstone being dropped, do the same except, after the
> delete (in step 5), insert a column that sorts before the ones you inserted
> in step 3. E.g. if you inserted b, c, d in step 3, insert a now. After the
> compaction, which now succeeds, the full row will be visible, rather than
> just a.
> The problem is twofold. Firstly, LazilyCompactedRow.Reducer.reduce() and
> getReduce() incorrectly call container.clear(). This clears the columns (as
> intended) but also removes the deletion times from container, so no
> further columns are deleted when they are annihilated by the row tombstone.
> Secondly, after the second pass, LazilyCompactedRow.isEmpty() is called, which
> calls
> {{ColumnFamilyStore.removeDeletedCF(emptyColumnFamily,
> controller.gcBefore(key.getToken()))}}
> which unfortunately removes the last deletion time from emptyColumnFamily if
> it is earlier than gcBefore. Since this is only called after the second pass,
> the second pass doesn’t remove any columns that are annihilated by the row
> tombstone, whereas the first pass removes just the first one.
> This is pretty serious - no large rows can ever be compacted and row
> tombstones can go missing.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)