[ https://issues.apache.org/jira/browse/PHOENIX-2599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15102237#comment-15102237 ]

James Taylor commented on PHOENIX-2599:
---------------------------------------

bq. Is the region split exception only indicative of an issue with total 
ordering during a load? e.g., will we also miss data, possible data corruption 
issues, etc.?
The StaleRegionBoundaryCacheException is solely there to tell the client that its 
cached region boundaries are out of date. There's no data corruption and you 
won't miss data. If you're relying on the rows you get back being ordered (which 
you're not, but Phoenix JDBC does), then the client needs to deal with this 
exception.
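To make that concrete, here's a minimal sketch (not the actual Phoenix JDBC code 
path) of what "dealing with" the exception could look like: catch it, clear the 
region boundary cache, and retry. Only {{clearTableRegionCache}} is a real 
ConnectionQueryServices call; {{IteratorOpener}} and {{openWithRetry}} are 
hypothetical stand-ins for whatever builds the scan iterator.
{code}
import java.sql.SQLException;

import org.apache.phoenix.query.ConnectionQueryServices;
import org.apache.phoenix.schema.StaleRegionBoundaryCacheException;

public final class StaleBoundaryRetry {

    /** Hypothetical hook standing in for whatever opens the scan/iterator. */
    interface IteratorOpener<T> {
        T open() throws SQLException;
    }

    /**
     * Sketch: open the iterator; on a stale-boundary error, clear the client's
     * region boundary cache and retry once. Stale boundaries only mean the work
     * may be split suboptimally, not that data is lost.
     */
    static <T> T openWithRetry(ConnectionQueryServices services,
                               byte[] physicalTableName,
                               IteratorOpener<T> opener) throws SQLException {
        try {
            return opener.open();
        } catch (StaleRegionBoundaryCacheException e) {
            services.clearTableRegionCache(physicalTableName); // extra RPC
            return opener.open();
        }
    }

    private StaleBoundaryRetry() {}
}
{code}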
bq. Any idea what the side effects would be on a subsequent load with out of 
sync region boundaries? Basically the scenario where we ignore the exception, 
but just keep going with an expired cache.
The parallelization of work won't be as good, so some threads may end up doing 
more work than others. The longer the region boundary cache stays out of date, 
the worse this gets.
bq. Can you think of any issues with forcing a cache clear on every 
PhoenixRecordReader.initialize()?
No, this might be simplest (in addition to adding the capability to ignore the 
exception as mentioned before). It would mean an extra RPC. You could do that 
like this:
{code}
    @Override
    public void initialize(InputSplit split, TaskAttemptContext context) throws IOException, InterruptedException {
        final PhoenixInputSplit pSplit = (PhoenixInputSplit)split;
        final List<Scan> scans = pSplit.getScans();
        try {
            List<PeekingResultIterator> iterators = Lists.newArrayListWithExpectedSize(scans.size());
            StatementContext ctx = queryPlan.getContext();
            ReadMetricQueue readMetrics = ctx.getReadMetricsQueue();
            String tableName = queryPlan.getTableRef().getTable().getPhysicalName().getString();
            byte[] tableNameBytes = queryPlan.getTableRef().getTable().getPhysicalName().getBytes();
            ConnectionQueryServices services = queryPlan.getContext().getConnection().getQueryServices();
            services.clearTableRegionCache(tableNameBytes);
            ...
{code}
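Alternatively (or in combination), initialize() itself could catch the exception, 
clear the cache, and retry once, i.e. the "ignore the exception" option mentioned 
above. A rough sketch only, where {{buildIterators(scans)}} is a hypothetical 
placeholder for the existing per-scan iterator construction and the other 
variables come from the snippet above:
{code}
            // Sketch only: tolerate a stale cache on the first attempt.
            // buildIterators(...) is a hypothetical placeholder for the existing
            // per-scan iterator construction in PhoenixRecordReader.initialize().
            List<PeekingResultIterator> iterators;
            try {
                iterators = buildIterators(scans);
            } catch (StaleRegionBoundaryCacheException e) {
                services.clearTableRegionCache(tableNameBytes); // refresh boundaries
                iterators = buildIterators(scans);              // retry once
            }
{code}
Clearing unconditionally costs one extra RPC per split; clearing only when the 
exception is thrown avoids that RPC in the common case.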

> PhoenixRecordReader does not handle StaleRegionBoundaryCacheException
> ---------------------------------------------------------------------
>
>                 Key: PHOENIX-2599
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2599
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.5.1
>         Environment: HBase 1.0 + Linux
>            Reporter: Li Gao
>            Assignee: Josh Mahonin
>
> When running Spark 1.4.1 and Phoenix 4.5.1 via the Phoenix-Spark connector, we 
> notice that some of the time (30~50%) the following error appears and kills the 
> running Spark job:
> 16/01/14 19:40:16 ERROR yarn.ApplicationMaster: User class threw exception: org.apache.spark.SparkException: Job aborted due to stage failure: Task 5 in stage 110.0 failed 4 times, most recent failure: Lost task 5.3 in stage 110.0 (TID 35526, datanode-123.somewhere): java.lang.RuntimeException: org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108 (XCL08): Cache of region boundaries are out of date.
> at com.google.common.base.Throwables.propagate(Throwables.java:156)
> at org.apache.phoenix.mapreduce.PhoenixRecordReader.initialize(PhoenixRecordReader.java:126)
> at org.apache.spark.rdd.NewHadoopRDD$$anon$1.<init>(NewHadoopRDD.scala:133)
> at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:104)
> at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:66)
> at org.apache.phoenix.spark.PhoenixRDD.compute(PhoenixRDD.scala:52)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
> at org.apache.spark.scheduler.Task.run(Task.scala:70)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108 (XCL08): Cache of region boundaries are out of date.
> at org.apache.phoenix.exception.SQLExceptionCode$13.newException(SQLExceptionCode.java:304)
> at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
> at org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:131)
> at org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:115)
> at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:104)
> at org.apache.phoenix.iterate.TableResultIterator.getDelegate(TableResultIterator.java:70)
> at org.apache.phoenix.iterate.TableResultIterator.<init>(TableResultIterator.java:88)
> at org.apache.phoenix.iterate.TableResultIterator.<init>(TableResultIterator.java:79)
> at org.apache.phoenix.mapreduce.PhoenixRecordReader.initialize(PhoenixRecordReader.java:111)
> ... 18 more


