[ 
https://issues.apache.org/jira/browse/PHOENIX-5672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17015368#comment-17015368
 ] 

Lars Hofhansl commented on PHOENIX-5672:
----------------------------------------

Btw, I disabled this via the setting, turned off server-side UPSERT/SELECT,
and increased some timeouts, and was then able to issue 60M-row upserts on a
single server, where previously it would fail at around 15M rows.
So again, while it sounds good to do the work on the server, it does not
scale, and it cannot be chunked or paced easily.
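For reference, a minimal sketch of that client-side workaround. The property
names used here (phoenix.client.enable.server.upsert.select,
phoenix.query.timeoutMs, hbase.rpc.timeout, phoenix.mutate.batchSize) and the
column list on the UPSERT are assumptions for illustration and may differ by
Phoenix/HBase version; this is not the exact configuration used above.

{code:java}
// Sketch only: property names and the UPSERT columns are assumptions and may
// differ by Phoenix/HBase version.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.Properties;

public class ClientSideUpsertSelect {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Run the UPSERT/SELECT on the client instead of the region server.
        props.setProperty("phoenix.client.enable.server.upsert.select", "false");
        // Give the long-running statement more time before the client gives up.
        props.setProperty("phoenix.query.timeoutMs", "3600000");
        props.setProperty("hbase.rpc.timeout", "3600000");
        // Commit in larger batches, as in the report below.
        props.setProperty("phoenix.mutate.batchSize", "10000");
        // Note: the server-side index metadata cache TTL
        // (phoenix.coprocessor.maxServerCacheTimeToLiveMs) is an hbase-site.xml
        // setting on the region servers, not a client connection property.

        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost", props)) {
            conn.setAutoCommit(true); // let Phoenix flush mutation batches as it goes
            try (Statement stmt = conn.createStatement()) {
                // Hypothetical columns; writes back into the same table as in the report.
                stmt.executeUpdate("UPSERT INTO TEST(PK, V) SELECT PK, V + 1 FROM TEST");
            }
        }
    }
}
{code}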


> Unable to find cached index metadata with large UPSERT/SELECT and local index.
> ------------------------------------------------------------------------------
>
>                 Key: PHOENIX-5672
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-5672
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.15.0
>            Reporter: Lars Hofhansl
>            Priority: Major
>
> Doing a very large UPSERT/SELECT back into the same table. After a while I
> get this exception. It happens with server-side mutations turned on or off,
> and regardless of the batch size (which I increased to 10000 in this last
> example).
> {code:java}
> 20/01/10 16:41:54 WARN client.AsyncProcess: #1, table=TEST, attempt=1/35 failed=10000ops, last exception: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find cached index metadata.  key=-1180967500149768360 region=TEST,\x80\x965g\x80\x0F@\xAA\x80Y$\xEF,1578504217187.42467236e0b49fda05fdaaf69de98832.host=lhofhansl-wsl2,16201,1578702689999 Index update failed
> 20/01/10 16:41:54 WARN client.AsyncProcess: #1, table=TEST, attempt=1/35 failed=10000ops, last exception: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find cached index metadata.  key=-1180967500149768360 region=TEST,\x80\x965g\x80\x0F@\xAA\x80Y$\xEF,1578504217187.42467236e0b49fda05fdaaf69de98832.host=lhofhansl-wsl2,16201,1578702689999 Index update failed
>     at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:113)
>     at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:87)
>     at org.apache.phoenix.index.PhoenixIndexMetaDataBuilder.getIndexMetaDataCache(PhoenixIndexMetaDataBuilder.java:101)
>     at org.apache.phoenix.index.PhoenixIndexMetaDataBuilder.getIndexMetaData(PhoenixIndexMetaDataBuilder.java:51)
>     at org.apache.phoenix.index.PhoenixIndexBuilder.getIndexMetaData(PhoenixIndexBuilder.java:100)
>     at org.apache.phoenix.index.PhoenixIndexBuilder.getIndexMetaData(PhoenixIndexBuilder.java:73)
>     at org.apache.phoenix.hbase.index.builder.IndexBuildManager.getIndexMetaData(IndexBuildManager.java:84)
>     at org.apache.phoenix.hbase.index.IndexRegionObserver.getPhoenixIndexMetaData(IndexRegionObserver.java:594)
>     at org.apache.phoenix.hbase.index.IndexRegionObserver.preBatchMutateWithExceptions(IndexRegionObserver.java:646)
>     at org.apache.phoenix.hbase.index.IndexRegionObserver.preBatchMutate(IndexRegionObserver.java:334)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:1024)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1742)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1827)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1783)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:1020)
>     at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3425)
>     at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3163)
>     at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3105)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:944)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:872)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2472)
>     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36812)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2399)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>     at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:311)
>     at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:291)
> Caused by: java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached index metadata.  key=-1180967500149768360 region=TEST,\x80\x965g\x80\x0F@\xAA\x80Y$\xEF,1578504217187.42467236e0b49fda05fdaaf69de98832.host=lhofhansl-wsl2,16201,1578702689999
>     at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:542)
>     at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>     at org.apache.phoenix.index.PhoenixIndexMetaDataBuilder.getIndexMetaDataCache(PhoenixIndexMetaDataBuilder.java:100)
>     ... 23 more
> on lhofhansl-wsl2,16201,1578702689999, tracking started Fri Jan 10 16:40:21 PST 2020; not retrying 10000 - final failure
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
