[
https://issues.apache.org/jira/browse/HBASE-7336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13529730#comment-13529730
]
Lars Hofhansl commented on HBASE-7336:
--------------------------------------
bq. Compactions should go get their own Reader?
That sounds like a safe and important improvement.
In other cases it actually seems best to try to get a stream and fall back to
pread if that fails.
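A rough sketch of that fallback idea (just an illustration, not the attached patch; the class name and the tryLock guard are assumptions): use the shared stream when it is free and switch to a positional read when it is not.

{code:java}
import java.io.IOException;
import java.util.concurrent.locks.ReentrantLock;
import org.apache.hadoop.fs.FSDataInputStream;

class StreamOrPreadReader {
  private final FSDataInputStream istream;          // shared stream for this file
  private final ReentrantLock streamLock = new ReentrantLock();

  StreamOrPreadReader(FSDataInputStream istream) {
    this.istream = istream;
  }

  void readAt(long offset, byte[] buf, int bufOffset, int len) throws IOException {
    if (streamLock.tryLock()) {
      try {
        // Fast path: we own the stream, so seek + sequential read is fine.
        istream.seek(offset);
        istream.readFully(buf, bufOffset, len);
      } finally {
        streamLock.unlock();
      }
    } else {
      // Stream is busy: fall back to a positional read, which does not
      // move the shared stream position and needs no lock of its own.
      istream.readFully(offset, buf, bufOffset, len);
    }
  }
}
{code}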
Could drive the # of readers by the size of the store file, something like a
reader per n GB (n = 1 or 2 maybe). Then we round-robin over the readers.
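As a back-of-the-envelope sketch of that reader-per-n-GB idea (ReaderPool and the 2 GB constant are made up for illustration, nothing here is from the attached patches):

{code:java}
import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class ReaderPool {
  // One reader per n GB of store file (n = 2 here), purely illustrative.
  private static final long BYTES_PER_READER = 2L * 1024 * 1024 * 1024;

  private final FSDataInputStream[] readers;
  private final AtomicLong counter = new AtomicLong();

  ReaderPool(FileSystem fs, Path file, long fileSize) throws IOException {
    int n = (int) Math.max(1L, fileSize / BYTES_PER_READER);
    readers = new FSDataInputStream[n];
    for (int i = 0; i < n; i++) {
      readers[i] = fs.open(file);   // each slot gets its own stream
    }
  }

  // Hand out readers round-robin so concurrent scanners rarely share one.
  FSDataInputStream next() {
    return readers[(int) (counter.getAndIncrement() % readers.length)];
  }
}
{code}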
Should I commit this for now (assuming it passes HadoopQA and there are no
objections), and we investigate other options further? Or discuss a bit more to
see if we find other options?
> HFileBlock.readAtOffset does not work well with multiple threads
> ----------------------------------------------------------------
>
> Key: HBASE-7336
> URL: https://issues.apache.org/jira/browse/HBASE-7336
> Project: HBase
> Issue Type: Bug
> Reporter: Lars Hofhansl
> Assignee: Lars Hofhansl
> Priority: Critical
> Fix For: 0.96.0, 0.94.4
>
> Attachments: 7336-0.94.txt, 7336-0.96.txt
>
>
> HBase grinds to a halt when many threads scan along the same set of blocks
> and neither short-circuit reads nor block caching is enabled for the DFS
> client ... disabling the block cache makes sense on very large scans.
> It turns out that synchronizing on istream in HFileBlock.readAtOffset is the
> culprit.
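For context, a stripped-down sketch of the contended pattern (simplified signature, not the actual HFileBlock code): every non-pread read seeks the single shared stream under one monitor, so concurrent scanners serialize on it.

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;

class SharedStreamBlockReader {
  private final FSDataInputStream istream;   // one stream per open HFile

  SharedStreamBlockReader(FSDataInputStream istream) {
    this.istream = istream;
  }

  void readAtOffset(byte[] dest, int destOffset, int size, long fileOffset,
      boolean pread) throws IOException {
    if (pread) {
      // Positional read: no shared stream position, threads do not block each other.
      istream.readFully(fileOffset, dest, destOffset, size);
    } else {
      // Streaming read: every thread funnels through this one lock, which is
      // where scans grind to a halt once the block cache is off.
      synchronized (istream) {
        istream.seek(fileOffset);
        istream.readFully(dest, destOffset, size);
      }
    }
  }
}
{code}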