[
https://issues.apache.org/jira/browse/HBASE-15831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15307669#comment-15307669
]
Neemesh commented on HBASE-15831:
---------------------------------
There are nearly 9 million records; I am using start-row and end-row filters
to reduce the number of rows scanned.
Scanner caching is set to 500.
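A common mitigation for OutOfOrderScannerNextException is to lower the scanner caching so each next() RPC returns within the scanner timeout (a slow next() call can time out client-side, retry, and desynchronize nextCallSeq). A minimal configuration sketch, assuming the standard HBase 1.x client API (the values 100 and 120000 are illustrative, not recommendations from this issue):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;

Configuration conf = HBaseConfiguration.create();
// Fewer rows fetched per next() RPC => each call finishes faster,
// reducing the chance of a timeout/retry that breaks nextCallSeq.
conf.set("hbase.client.scanner.caching", "100");      // illustrative value
conf.set("hbase.client.scanner.timeout.period", "120000"); // ms, illustrative

Scan scan = new Scan();
scan.setCaching(100);        // per-scan override of the caching setting
scan.setCacheBlocks(false);  // typical for full-table Spark/MapReduce scans
```

Note this is a sketch of the general technique, not a verified fix for this particular job.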
> we are running a Spark Job for scanning Hbase table getting Caused by:
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException
> ----------------------------------------------------------------------------------------------------------------------------------------
>
> Key: HBASE-15831
> URL: https://issues.apache.org/jira/browse/HBASE-15831
> Project: HBase
> Issue Type: Bug
> Reporter: Neemesh
>
> I am getting the following error when trying to scan an HBase table in the
> QED environment for a particular collection:
> Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException:
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected
> nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id:
> 1629041 number_of_rows: 100 close_scanner: false next_call_seq: 0
> Following is the command used to execute the Spark job:
> spark-submit --master yarn --deploy-mode client --driver-memory 4g --queue
> root.ecpqedv1patents --class com.thomsonreuters.spark.hbase.HbaseSparkFinal
> HbaseSparkVenus.jar ecpqedv1patents:NovusDocCopy w_3rd_bonds
> I also tried adding the following two parameters, --num-executors 200
> --executor-cores 4, but it still threw the same exception.
> I googled and found that adding the following properties should avoid the
> above issue, but these property changes did not help either:
> .set("hbase.client.pause","1000")
> .set("hbase.rpc.timeout","90000")
> .set("hbase.client.retries.number","3")
> .set("zookeeper.recovery.retry","1")
> .set("hbase.client.operation.timeout","30000")
> .set("hbase.client.scanner.timeout.period","90000")
> Please let us know how to resolve this issue.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)