[
https://issues.apache.org/jira/browse/CASSANDRA-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13115451#comment-13115451
]
Jonathan Ellis commented on CASSANDRA-3150:
-------------------------------------------
You haven't done any messing with index_interval, by chance?
How much of the difference between 400K and 20M can be explained by new rows
being added?
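For reference, index_interval is a cassandra.yaml setting that controls how densely row keys are sampled into the in-memory index; a non-default value can change split/row-count arithmetic. A minimal fragment, assuming the 0.8-era default of 128:

```yaml
# cassandra.yaml (0.8.x): row key index sampling interval.
# 128 is the shipped default; lowering it samples keys more densely
# (finer index granularity, more memory).
index_interval: 128
```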
> ColumnFormatRecordReader loops forever (StorageService.getSplits(..) out of
> whack)
> ----------------------------------------------------------------------------------
>
> Key: CASSANDRA-3150
> URL: https://issues.apache.org/jira/browse/CASSANDRA-3150
> Project: Cassandra
> Issue Type: Bug
> Components: Hadoop
> Affects Versions: 0.8.4, 0.8.5
> Reporter: Mck SembWever
> Assignee: Mck SembWever
> Priority: Critical
> Fix For: 0.8.6
>
> Attachments: CASSANDRA-3150.patch, Screenshot-Counters for
> task_201109212019_1060_m_000029 - Mozilla Firefox.png, Screenshot-Hadoop map
> task list for job_201109212019_1060 on cassandra01 - Mozilla Firefox.png,
> attempt_201109071357_0044_m_003040_0.grep-get_range_slices.log,
> fullscan-example1.log
>
>
> From http://thread.gmane.org/gmane.comp.db.cassandra.user/20039
> {quote}
> bq. Cassandra-0.8.4 w/ ByteOrderedPartitioner
> bq. CFIF's inputSplitSize=196608
> bq. 3 map tasks (out of 4013) are still running after reading 25 million rows.
> bq. Can this be a bug in StorageService.getSplits(..) ?
> getSplits looks pretty foolproof to me but I guess we'd need to add
> more debug logging to rule out a bug there for sure.
> I guess the main alternative would be a bug in the record reader paging.
> {quote}
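The "recordreader paging" suspicion can be sketched as follows. This is a minimal Python sketch, not Cassandra's actual Java ColumnFamilyRecordReader; `scan_split` and `fetch_page` are illustrative names. The reader repeatedly issues a range query starting from the last key it saw; termination depends entirely on the start key advancing past the last returned row each round.

```python
def scan_split(fetch_page, start, end, page_size):
    """Page through rows in [start, end], mimicking a record reader
    that issues repeated range queries against a single split."""
    current = start
    seen = []
    while True:
        rows = fetch_page(current, end, page_size)
        if not rows:
            return seen  # range exhausted
        seen.extend(rows)
        last = rows[-1]
        if last >= end:
            return seen  # reached the end of the split
        # Advancing strictly past the last returned key is what
        # guarantees termination; restarting at `last` (or earlier)
        # would re-fetch the same page forever -- the suspected
        # "loops forever" failure mode.
        current = last + 1
```

If getSplits hands out a sane [start, end] range, this loop visits each row once; a bug in either the split boundaries or the advance step shows up as a task that never finishes.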