Is 6-8 seconds to read 23,000 small rows as it should be? I have a quick question about what I think is bad read performance for this simple setup:
    <ColumnFamily Name="Dashboard" ColumnType="Super"
                  CompareWith="UTF8Type" CompareSubcolumnsWith="UTF8Type" />

The data is laid out like this:

    SCF:Dashboard
      key:username1 -> {
        SC:uniqStr1 -> { col1:val1, col2:val2, ... col8:val8 },
        SC:uniqStr2 -> { col1:val1, col2:val2, ... col8:val8 },
        SC:uniqStr3 -> { col1:val1, col2:val2, ... col8:val8 },
        SC:uniqStr4 -> { col1:val1, col2:val2, ... col8:val8 },
        ... up to 23,000 "rows"
      }
      key:username2 -> {
        SC:uniqStr5 -> { col1:val1, col2:val2, ... col8:val8 },
        SC:uniqStr6 -> { col1:val1, col2:val2, ... col8:val8 },
        SC:uniqStr7 -> { col1:val1, col2:val2, ... col8:val8 },
        SC:uniqStr8 -> { col1:val1, col2:val2, ... col8:val8 },
        ...
      }

A given key such as "username1" has around 23,000 unique super columns ("rows"). When I simply read all of them in one go, it takes roughly 6-8 seconds, which I don't think is fast. I know there are a million things that can affect this, but I would just like a yes or no on whether this really is as it should be. My Cassandra is a mostly unchanged v0.6.1. I read using this code:

    // Slice over the full row: empty start/finish plus a huge count
    // means "give me every super column under this key".
    ColumnParent parent = new ColumnParent("Dashboard");

    SliceRange sliceRange = new SliceRange();
    sliceRange.setCount(Integer.MAX_VALUE);
    sliceRange.setStart(toRawValue(""));   // toRawValue is my helper that
    sliceRange.setFinish(toRawValue(""));  // converts a String to raw bytes

    SlicePredicate predicate = new SlicePredicate();
    predicate.setSlice_range(sliceRange);

    // Timing this call is what takes the 6-8 seconds.
    return client.get_slice(
        "keyspace",
        "theusername",
        parent,
        predicate,
        ConsistencyLevel.QUORUM);

My replication factor is 1, and I had two nodes set up in the cluster when doing the reads. Shouldn't this be something Cassandra can do dead fast?
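Would paging the slice make a difference? Below is a rough sketch of what I mean, against the same Thrift API. The PAGE_SIZE of 1,000 is an arbitrary guess, readAllPaged is just a name for the sketch, and toRawValue stands in for my helper (here assumed to simply UTF-8 encode the string):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    import org.apache.cassandra.thrift.Cassandra;
    import org.apache.cassandra.thrift.ColumnOrSuperColumn;
    import org.apache.cassandra.thrift.ColumnParent;
    import org.apache.cassandra.thrift.ConsistencyLevel;
    import org.apache.cassandra.thrift.SlicePredicate;
    import org.apache.cassandra.thrift.SliceRange;
    import org.apache.cassandra.thrift.SuperColumn;

    // Read all super columns under one key in pages of PAGE_SIZE instead of
    // asking the server to materialize all 23,000 in a single response.
    List<SuperColumn> readAllPaged(Cassandra.Client client) throws Exception {
        final int PAGE_SIZE = 1000;         // arbitrary guess
        List<SuperColumn> all = new ArrayList<SuperColumn>();
        byte[] start = toRawValue("");      // empty start = beginning of row

        while (true) {
            SliceRange range = new SliceRange();
            range.setStart(start);
            range.setFinish(toRawValue("")); // empty finish = end of row
            range.setCount(PAGE_SIZE);

            SlicePredicate pred = new SlicePredicate();
            pred.setSlice_range(range);

            List<ColumnOrSuperColumn> page = client.get_slice(
                "keyspace", "theusername",
                new ColumnParent("Dashboard"), pred, ConsistencyLevel.QUORUM);

            for (ColumnOrSuperColumn cosc : page) {
                SuperColumn sc = cosc.getSuper_column();
                // The slice start is inclusive, so every page after the first
                // begins with the last super column of the previous page; skip it.
                if (!all.isEmpty() && Arrays.equals(
                        sc.getName(), all.get(all.size() - 1).getName())) {
                    continue;
                }
                all.add(sc);
            }

            if (page.size() < PAGE_SIZE) {
                break;                      // short page = end of row
            }
            // Restart the next slice at the last super column name seen.
            start = page.get(page.size() - 1).getSuper_column().getName();
        }
        return all;
    }

    // Assumed behavior of my helper: encode a String as raw UTF-8 bytes.
    byte[] toRawValue(String s) throws Exception {
        return s.getBytes("UTF-8");
    }

Even if paging doesn't change the total time much, it would at least tell me whether the cost is in the reads themselves or in deserializing one huge Thrift response.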