[ https://issues.apache.org/jira/browse/CASSANDRA-1040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jonathan Ellis updated CASSANDRA-1040:
--------------------------------------
Attachment: 1040.txt
Most of this was caused by the bug Stu found for CASSANDRA-1063, which has been
committed separately. Here is a patch to fix the trunk-only part explained
above. (We take the "allow the original memtable to be scanned twice
occasionally" approach, which is the one taken by getTopLevelColumns.)
> read failure during flush
> -------------------------
>
> Key: CASSANDRA-1040
> URL: https://issues.apache.org/jira/browse/CASSANDRA-1040
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Affects Versions: 0.7
> Reporter: Jonathan Ellis
> Assignee: Jonathan Ellis
> Priority: Critical
> Fix For: 0.7
>
> Attachments: 1040.txt
>
>
> Joost Ouwerkerk writes:
>
> On a single-node Cassandra cluster with basic config (-Xmx1G):
> loop {
> * insert 5,000 records into a single column family, with UUID keys and
> random string values (between 1 and 1000 chars) in 5 different columns
> spanning two different supercolumns
> * delete all the data by iterating over the rows with
> get_range_slices(ONE) and calling remove(QUORUM) on each row id
> returned (with a path containing only the column family)
> * count the number of non-tombstone rows by iterating over the rows
> with get_range_slices(ONE) and testing for data. Break if not zero.
> }
> While this is running, call "bin/nodetool -h localhost -p 8081 flush
> KeySpace" in the background every minute or so. When the data hits some
> critical size, the loop will break.
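For reference, the reporter's loop as a standalone sketch against the 0.7
Thrift API. Only the keyspace name "KeySpace" comes from the report; the
column family ("Super1"), supercolumn, and column names are invented here:

    import java.nio.ByteBuffer;
    import java.util.Arrays;
    import java.util.List;
    import java.util.Random;
    import java.util.UUID;

    import org.apache.cassandra.thrift.*;
    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.transport.TFramedTransport;
    import org.apache.thrift.transport.TSocket;
    import org.apache.thrift.transport.TTransport;

    public class Cassandra1040Repro {
        static ByteBuffer bytes(String s) { return ByteBuffer.wrap(s.getBytes()); }

        // One page of rows is enough for a sketch; a real run would page through
        // the whole range using the last key returned as the next start key.
        static List<KeySlice> rows(Cassandra.Client client) throws Exception {
            SlicePredicate pred = new SlicePredicate().setSlice_range(
                    new SliceRange(bytes(""), bytes(""), false, 1000));
            KeyRange range = new KeyRange(10000)
                    .setStart_key(bytes("")).setEnd_key(bytes(""));
            return client.get_range_slices(new ColumnParent("Super1"), pred,
                                           range, ConsistencyLevel.ONE);
        }

        public static void main(String[] args) throws Exception {
            TTransport tr = new TFramedTransport(new TSocket("localhost", 9160));
            Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(tr));
            tr.open();
            client.set_keyspace("KeySpace");
            Random random = new Random();

            while (true) {
                // Insert 5,000 rows: UUID keys, 5 columns split across two
                // supercolumns, random string values of 1 to 1000 chars.
                for (int i = 0; i < 5000; i++) {
                    ByteBuffer key = bytes(UUID.randomUUID().toString());
                    for (int c = 0; c < 5; c++) {
                        ColumnParent parent = new ColumnParent("Super1")
                                .setSuper_column(bytes(c < 3 ? "sc1" : "sc2"));
                        char[] val = new char[1 + random.nextInt(1000)];
                        Arrays.fill(val, 'x');
                        client.insert(key, parent,
                                new Column(bytes("col" + c), bytes(new String(val)),
                                           System.currentTimeMillis()),
                                ConsistencyLevel.ONE);
                    }
                }
                // Delete every row returned by get_range_slices(ONE) via
                // remove(QUORUM); the ColumnPath names only the column family,
                // so the whole row is tombstoned.
                for (KeySlice ks : rows(client))
                    client.remove(ks.key, new ColumnPath("Super1"),
                                  System.currentTimeMillis(), ConsistencyLevel.QUORUM);
                // Count rows that still return live (non-tombstone) data; a
                // nonzero count means a read missed data during a flush.
                int live = 0;
                for (KeySlice ks : rows(client))
                    if (!ks.columns.isEmpty())
                        live++;
                if (live != 0) {
                    System.err.println(live + " live rows after full delete");
                    break;
                }
            }
            tr.close();
        }
    }

Run "bin/nodetool -h localhost -p 8081 flush KeySpace" in the background every
minute or so while the loop runs, as described above.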