[
https://issues.apache.org/jira/browse/HBASE-5322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13440060#comment-13440060
]
Karthik Pandian commented on HBASE-5322:
----------------------------------------
@Ian Varley,
I don't have the entire stack trace now, as I have managed to split the data
into multiple tables, but I am pretty sure the scanner hangs and may raise a
timeout exception for large tables (>20GB). I also faced the same issue with
HBase MR jobs.
Sample error log for a scanner timeout in an MR job:
java.io.IOException: org.apache.hadoop.hbase.client.ScannerTimeoutException:
63882ms passed since the last invocation, timeout is
I also tried increasing the timeout, and I am caching only 300 rows per
scan.
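For context, the 63882 ms in the log line is just over the default 60-second scanner lease, which in HBase 0.90 is governed by the region server's lease period; raising it on the region servers is the usual way to "increase the timeout" mentioned above. A minimal sketch, assuming the standard 0.90 configuration key (the 120000 ms value is illustrative, not what was actually used here):

```xml
<!-- hbase-site.xml on each region server; the value shown is illustrative -->
<property>
  <name>hbase.regionserver.lease.period</name>
  <!-- scanner lease/timeout in ms; the default is 60000 -->
  <value>120000</value>
</property>
```

The 300-row caching mentioned above corresponds to `Scan.setCaching(300)` on the client side, which trades fewer round trips against a longer gap between scanner invocations.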
> RetriesExhaustedException: Trying to contact region server
> ----------------------------------------------------------
>
> Key: HBASE-5322
> URL: https://issues.apache.org/jira/browse/HBASE-5322
> Project: HBase
> Issue Type: Bug
> Affects Versions: 0.90.4
> Reporter: Karthik Pandian
>
> I have an HBase table which holds more than 10GB of data. When I use the same
> client scanner to scan it, the scan fails and reports:
> "Could not seek StoreFileScanner[HFileScanner for reader reader=hdfs".
> This issue occurs only for tables holding large amounts of data, not for
> tables holding small amounts.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira