[
https://issues.apache.org/jira/browse/CASSANDRA-1702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12931822#action_12931822
]
Jonathan Ellis commented on CASSANDRA-1702:
-------------------------------------------
bq. If you wrapped the compaction read path (mostly inside iterators) with a
known (runtime?) exception, you could differentiate that way
I started out doing that (and still might), but for now I'm going to revert this.
It feels like silently dropping data on the floor is the wrong thing to do. Let's
make a separate utility that can expunge corrupt rows from individual sstables,
if/when we need that.
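
A minimal sketch of the wrapping idea quoted above, assuming a hypothetical CorruptRowException marker type and stand-in deserialize/write methods (none of these are actual Cassandra classes): deserialization failures inside the read path are rethrown as the known exception type, so the compaction loop can catch exactly that case and skip the row while letting every other failure propagate.

{code:java}
import java.util.Arrays;
import java.util.Iterator;

// Hypothetical marker exception (not an actual Cassandra class): the read path
// throws it when a row cannot be deserialized, so callers can tell corruption
// apart from any other failure.
class CorruptRowException extends RuntimeException {
    CorruptRowException(String message, Throwable cause) {
        super(message, cause);
    }
}

public class SkipCorruptRowsSketch {
    // Compaction-style loop over row keys: rows whose deserialization throws
    // the known exception are logged and skipped; anything else propagates.
    static void compact(Iterator<String> keys) {
        while (keys.hasNext()) {
            String key = keys.next();
            try {
                byte[] row = deserialize(key);   // stand-in for the compaction read path
                write(row);                      // stand-in for the compaction writer
            } catch (CorruptRowException e) {
                System.err.println("Skipping corrupt row " + key + ": " + e.getMessage());
            }
        }
    }

    // Stand-in read path: wraps any deserialization failure in the marker type.
    static byte[] deserialize(String key) {
        try {
            return key.getBytes("UTF-8");
        } catch (Exception e) {
            throw new CorruptRowException("could not deserialize row " + key, e);
        }
    }

    static void write(byte[] row) {
        // no-op placeholder for appending the merged row to the new sstable
    }

    public static void main(String[] args) {
        compact(Arrays.asList("a", "b", "c").iterator());
    }
}
{code}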
> handle skipping bad rows in LazilyCompacted path
> ------------------------------------------------
>
> Key: CASSANDRA-1702
> URL: https://issues.apache.org/jira/browse/CASSANDRA-1702
> Project: Cassandra
> Issue Type: Improvement
> Components: Core
> Affects Versions: 0.7 beta 1
> Reporter: Jonathan Ellis
> Assignee: Jonathan Ellis
> Priority: Minor
> Fix For: 0.7.0
>
> Attachments: 1702.txt
>
>
> It's easy to handle skipping bad rows during compaction in the PreCompacted
> (merged-in-memory) path, and we have done this for a long time. It is harder
> in the LazilyCompacted path, since we have already started writing data when
> we discover that some of the source rows cannot be deserialized. This adds
> mark/reset to SSTableWriter so compaction can skip back to the beginning of
> the partially written row in these circumstances.
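
A self-contained sketch of the mark/reset idea described above, assuming a hypothetical MarkResetWriterSketch class rather than Cassandra's actual SSTableWriter: the writer records its file position before each row is appended and truncates back to that mark if the source row turns out to be undeserializable.

{code:java}
import java.io.IOException;
import java.io.RandomAccessFile;

// Illustrative sketch only, not Cassandra's SSTableWriter: a writer that
// records a mark before each row is appended and can truncate back to it if
// the source row turns out to be undeserializable partway through.
public class MarkResetWriterSketch {
    private final RandomAccessFile out;
    private long mark;

    public MarkResetWriterSketch(String path) throws IOException {
        this.out = new RandomAccessFile(path, "rw");
    }

    // Remember the current position so a partially written row can be undone.
    public void mark() throws IOException {
        mark = out.getFilePointer();
    }

    // Discard everything written since the last mark() and move back to it.
    public void reset() throws IOException {
        out.setLength(mark);
        out.seek(mark);
    }

    public void append(byte[] rowBytes) throws IOException {
        out.write(rowBytes);
    }

    public void close() throws IOException {
        out.close();
    }

    public static void main(String[] args) throws IOException {
        MarkResetWriterSketch writer = new MarkResetWriterSketch("sketch-Data.db");
        writer.mark();
        try {
            writer.append("row bytes\n".getBytes("UTF-8"));  // pretend-serialize one row
        } catch (RuntimeException corrupt) {
            writer.reset();  // the lazily compacted row turned out corrupt: roll it back
        }
        writer.close();
    }
}
{code}

In the real patch the failure would surface while iterating the source row's columns during lazy compaction; the catch block here only shows where a reset() back to the mark would be triggered.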