[ https://issues.apache.org/jira/browse/CASSANDRA-837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12839723#action_12839723 ]

Jonathan Ellis commented on CASSANDRA-837:
------------------------------------------

In that case we should probably reduce the default to 8k, but we're testing 
10k-20k rows read per second here via get_range_slice.  How big are your rows, 
and are you running on a VM or real hardware?
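For context, a minimal sketch of what the fix implies, using a hypothetical 
RowBatchingReader rather than the actual 0.6 ColumnFamilyRecordReader code: 
the number of rows requested per get_range_slice call would come from a 
configurable default (e.g. 8k) and be capped by the split size, instead of 
being hardcoded.

    // Illustrative only; class and method names are not the real Cassandra API.
    public class RowBatchingReader {
        private static final int DEFAULT_BATCH_SIZE = 8 * 1024; // 8k default, per the comment above

        private final int splitRowCount;   // rows assigned to this input split
        private final int batchSize;       // rows requested per get_range_slice call
        private int rowsRead = 0;

        public RowBatchingReader(int splitRowCount, int configuredBatchSize) {
            this.splitRowCount = splitRowCount;
            // Never request more rows in one batch than the split contains.
            this.batchSize = Math.min(
                configuredBatchSize > 0 ? configuredBatchSize : DEFAULT_BATCH_SIZE,
                splitRowCount);
        }

        /** Rows to ask for on the next call, bounded by what remains in the split. */
        public int nextBatchCount() {
            return Math.min(batchSize, splitRowCount - rowsRead);
        }

        public void onBatchReturned(int count) {
            rowsRead += count;
        }

        public boolean finished() {
            return rowsRead >= splitRowCount;
        }
    }

The point of taking the minimum is that a small split never over-fetches, 
while large splits still stream in bounded batches.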

> hadoop recordreader hardcodes row count
> ---------------------------------------
>
>                 Key: CASSANDRA-837
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-837
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 0.6
>            Reporter: Jonathan Ellis
>            Priority: Minor
>             Fix For: 0.6
>
>
> We need to use the split size instead.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
