[ https://issues.apache.org/jira/browse/CASSANDRA-1040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jonathan Ellis updated CASSANDRA-1040:
--------------------------------------
Comment: was deleted
(was: I'd really rather structure this so we're not copy/pasting so much, but I
don't know enough about nose to say what the best way to do this is.)
> read failure during flush
> -------------------------
>
> Key: CASSANDRA-1040
> URL: https://issues.apache.org/jira/browse/CASSANDRA-1040
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Affects Versions: 0.7
> Reporter: Jonathan Ellis
> Assignee: Jonathan Ellis
> Priority: Critical
> Fix For: 0.7
>
> Attachments: 1040.txt
>
>
> Joost Ouwerkerk writes:
>
> On a single-node Cassandra cluster with a basic config (-Xmx1G):
> loop {
> * insert 5,000 records in a single columnfamily with UUID keys and
> random string values (between 1 and 1000 chars) in 5 different columns
> spanning two different supercolumns
> * delete all the data by iterating over the rows with
> get_range_slices(ONE) and calling remove(QUORUM) on each row id
> returned (path containing only columnfamily)
> * count number of non-tombstone rows by iterating over the rows
> with get_range_slices(ONE) and testing data. Break if not zero.
> }
> While this is running, call "bin/nodetool -h localhost -p 8081 flush
> KeySpace" in the background every minute or so. When the data hits some
> critical size, the loop will break.
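For anyone trying to reproduce this, here is a minimal sketch of the loop in
Python. The report does not name the client library, so pycassa, the keyspace
name "KeySpace", the super column family "Super1", and the column names are
illustrative assumptions; bin/nodetool flush is still run from a shell in the
background while the loop executes.

    # Hedged reproduction sketch; client (pycassa), CF name, and column
    # layout are assumptions, not taken from the issue.
    import random
    import string
    import uuid

    import pycassa

    pool = pycassa.ConnectionPool('KeySpace', ['localhost:9160'])
    cf = pycassa.ColumnFamily(pool, 'Super1')  # assumed super column family

    def random_value():
        # Random string between 1 and 1000 characters, per the report.
        return ''.join(random.choice(string.ascii_letters)
                       for _ in range(random.randint(1, 1000)))

    while True:
        # Insert 5,000 rows: UUID keys, 5 columns across two supercolumns.
        for _ in range(5000):
            cf.insert(uuid.uuid4().hex, {
                'sc1': {'c1': random_value(),
                        'c2': random_value(),
                        'c3': random_value()},
                'sc2': {'c4': random_value(),
                        'c5': random_value()},
            })

        # Delete every row: iterate at ONE, remove at QUORUM, with a path
        # containing only the column family.
        cf.read_consistency_level = pycassa.ConsistencyLevel.ONE
        cf.write_consistency_level = pycassa.ConsistencyLevel.QUORUM
        for key, _ in cf.get_range():
            cf.remove(key)

        # Count non-tombstone rows; a removed row should read back with no
        # columns until compaction drops it entirely.
        live = sum(1 for _, cols in cf.get_range() if cols)
        if live != 0:
            break  # the bug: deleted data resurfaces after a flush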