[
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Chao Wang updated PHOENIX-5860:
-------------------------------
Description:
Deletes are currently executed on the server side in the UngroupedAggregateRegionObserver class, which checks the isRegionClosingOrSplitting flag. When the flag is true, the scanner throws new IOException("Temporarily unable to write from scan because region is closing or splitting").
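For context, the server-side guard amounts to something like the following. This is a simplified sketch, not the exact Phoenix source; the field name and exception message come from the description above, and the helper method name is illustrative:
{code:java}
// Inside UngroupedAggregateRegionObserver (sketch).
private volatile boolean isRegionClosingOrSplitting = false;

private void throwIfClosingOrSplitting() throws IOException {
    if (isRegionClosingOrSplitting) {
        // Every scan-driven write, including server-side deletes, is
        // rejected for as long as this flag stays true.
        throw new IOException(
                "Temporarily unable to write from scan because region is closing or splitting");
    }
}
{code}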
When a region comes online, the Phoenix coprocessor is initialized with isRegionClosingOrSplitting = false, and before a region split the flag is set to true. If the split fails, however, the rollback does not reset the flag to false, so from then on every write operation on the region fails with "Temporarily unable to write from scan because region is closing or splitting".
We should therefore reset isRegionClosingOrSplitting to false in preRollBackSplit in the UngroupedAggregateRegionObserver class, as sketched below.
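A minimal sketch of the change against the HBase 1.x RegionObserver hooks; the preSplit body mirrors the existing behavior described above, and the actual patch may differ in detail:
{code:java}
import java.io.IOException;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;

// Inside UngroupedAggregateRegionObserver:

@Override
public void preSplit(ObserverContext<RegionCoprocessorEnvironment> ctx)
        throws IOException {
    // Existing behavior: block scan-driven writes while the split is in flight.
    isRegionClosingOrSplitting = true;
}

@Override
public void preRollBackSplit(ObserverContext<RegionCoprocessorEnvironment> ctx)
        throws IOException {
    // Proposed fix: the split failed and the region stays online, so clear
    // the flag; otherwise every subsequent write keeps being rejected.
    isRegionClosingOrSplitting = false;
}
{code}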
A simple reproduction: the split of a data table fails and is rolled back successfully, yet deletes keep throwing the exception.
# Create a data table.
# Bulk-load data into the table.
# Modify the hbase-server code so that the region split throws an exception and triggers a rollback.
# Split the region from the HBase shell.
# Check the region server log: the split fails and the rollback succeeds.
# Delete data through Phoenix sqlline.py; the statement throws the exception below. (A programmatic version of steps 4 and 6 is sketched after the stack trace.)
Caused by: java.io.IOException: Temporarily unable to write from scan because region is closing or splitting
    at org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
    at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
    at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
    ... 5 more
    at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
    at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
    at org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
    at org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
    at org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
    at org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
    at org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
    at org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
    at org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
    at com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
    at com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
    at com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
    at com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:798)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:798)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
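For reference, steps 4 and 6 of the reproduction can also be driven programmatically. A minimal sketch, assuming a table MY_TABLE with an ID column and a placeholder ZooKeeper quorum (both hypothetical); the fault injection from step 3 still has to be built into hbase-server separately:
{code:java}
import java.sql.DriverManager;
import java.sql.Statement;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class SplitRollbackRepro {
    public static void main(String[] args) throws Exception {
        // Step 4: request the split; it fails and rolls back because of the
        // fault injected into hbase-server in step 3. The request is
        // asynchronous, so watch the region server log (step 5) before
        // moving on.
        Configuration conf = HBaseConfiguration.create();
        try (Connection hbase = ConnectionFactory.createConnection(conf);
             Admin admin = hbase.getAdmin()) {
            admin.split(TableName.valueOf("MY_TABLE"));
        }

        // Step 6: after the rolled-back split, every delete fails with
        // "Temporarily unable to write from scan because region is closing
        // or splitting".
        try (java.sql.Connection phoenix =
                     DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             Statement stmt = phoenix.createStatement()) {
            stmt.executeUpdate("DELETE FROM MY_TABLE WHERE ID = 1");
            phoenix.commit();
        }
    }
}
{code}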
> Exception "region is closing or splitting" thrown when deleting data
> --------------------------------------------------------------------
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
> Issue Type: Bug
> Components: core
> Affects Versions: 4.13.1, 4.15.0, 4.14.1, 4.14.2, 4.14.3
> Reporter: Chao Wang
> Assignee: Chao Wang
> Priority: Blocker
> Attachments: PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
> Time Spent: 20m
> Remaining Estimate: 0h
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)