[
https://issues.apache.org/jira/browse/PHOENIX-5672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17013600#comment-17013600
]
Lars Hofhansl commented on PHOENIX-5672:
----------------------------------------
There's also phoenix.index.mutableBatchSizeThreshold, which defaults to 3.
And IndexMetaDataCacheClient has this:
{code:java}
/**
 * Determines whether or not to use the IndexMetaDataCache to send the index
 * metadata to the region servers. The alternative is to just set the index
 * metadata as an attribute on the mutations.
 * @param connection
 * @param mutations the list of mutations that will be sent in a batch to the server
 * @param indexMetaDataByteLength length in bytes of the index metadata cache
 */
public static boolean useIndexMetadataCache(PhoenixConnection connection,
        List<? extends Mutation> mutations, int indexMetaDataByteLength) {
    ReadOnlyProps props = connection.getQueryServices().getProps();
    int threshold = props.getInt(INDEX_MUTATE_BATCH_SIZE_THRESHOLD_ATTRIB,
        QueryServicesOptions.DEFAULT_INDEX_MUTATE_BATCH_SIZE_THRESHOLD);
    return (indexMetaDataByteLength > ServerCacheClient.UUID_LENGTH &&
            mutations.size() > threshold);
}
{code}
So for any batch of more than 3 mutations we'll build an index metadata cache on *every* involved region server ahead of time, in an extra RPC each, just to avoid sending a few extra bytes with each *batch* of mutations.
Perhaps I am missing something... But really, W.T.F.?
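Stripped of the Phoenix plumbing, the decision above reduces to two comparisons. Here's a minimal standalone sketch of that logic; the threshold default of 3 and the 16-byte UUID length are assumptions mirroring the snippet above for illustration, not values read from the real classes:

{code:java}
public class IndexCacheDecision {
    // Assumed defaults for illustration: the default of
    // phoenix.index.mutableBatchSizeThreshold and an assumed
    // ServerCacheClient.UUID_LENGTH of 16 bytes.
    static final int DEFAULT_THRESHOLD = 3;
    static final int UUID_LENGTH = 16;

    // Use the server-side cache only when the metadata is larger than the
    // UUID that would replace it AND the batch exceeds the threshold.
    static boolean useIndexMetadataCache(int mutationCount, int indexMetaDataByteLength) {
        return indexMetaDataByteLength > UUID_LENGTH && mutationCount > DEFAULT_THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(useIndexMetadataCache(4, 100));   // large metadata, batch > 3: true
        System.out.println(useIndexMetadataCache(3, 100));   // batch not above threshold: false
        System.out.println(useIndexMetadataCache(1000, 10)); // metadata fits inline anyway: false
    }
}
{code}

So under these assumed defaults, essentially every non-trivial batch with non-trivial index metadata takes the extra-RPC cache path.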
Frankly... this just looks like another misguided attempt to optimize Phoenix without thinking through the consequences, albeit with good intentions. IMHO, just like the server-side deletes and upsert/selects, this code should simply be removed.
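As a stopgap (not a fix for the underlying problem), the threshold could be raised client-side so that batches carry the index metadata inline instead of priming a server-side cache. Whether this actually sidesteps the bug reported below is an assumption, not something verified here; the value 100000 is purely illustrative:

{code:xml}
<!-- client-side hbase-site.xml: raise the batch-size threshold so batches
     below this size attach the index metadata to the mutations directly
     rather than using the server cache -->
<property>
  <name>phoenix.index.mutableBatchSizeThreshold</name>
  <value>100000</value>
</property>
{code}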
> Unable to find cached index metadata with large UPSERT/SELECT and local index.
> ------------------------------------------------------------------------------
>
> Key: PHOENIX-5672
> URL: https://issues.apache.org/jira/browse/PHOENIX-5672
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 4.15.0
> Reporter: Lars Hofhansl
> Priority: Major
>
> Doing a very large UPSERT/SELECT back into the same table. After a while I
> get this exception. It happens whether server-side mutations are turned on or
> off, and regardless of the batch size (which I had increased to 10000 in this
> last example).
> {code:java}
> 20/01/10 16:41:54 WARN client.AsyncProcess: #1, table=TEST, attempt=1/35 failed=10000ops, last exception: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find cached index metadata. key=-1180967500149768360 region=TEST,\x80\x965g\x80\x0F@\xAA\x80Y$\xEF,1578504217187.42467236e0b49fda05fdaaf69de98832.host=lhofhansl-wsl2,16201,1578702689999
> Index update failed
> 20/01/10 16:41:54 WARN client.AsyncProcess: #1, table=TEST, attempt=1/35 failed=10000ops, last exception: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find cached index metadata. key=-1180967500149768360 region=TEST,\x80\x965g\x80\x0F@\xAA\x80Y$\xEF,1578504217187.42467236e0b49fda05fdaaf69de98832.host=lhofhansl-wsl2,16201,1578702689999
> Index update failed
>     at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:113)
>     at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:87)
>     at org.apache.phoenix.index.PhoenixIndexMetaDataBuilder.getIndexMetaDataCache(PhoenixIndexMetaDataBuilder.java:101)
>     at org.apache.phoenix.index.PhoenixIndexMetaDataBuilder.getIndexMetaData(PhoenixIndexMetaDataBuilder.java:51)
>     at org.apache.phoenix.index.PhoenixIndexBuilder.getIndexMetaData(PhoenixIndexBuilder.java:100)
>     at org.apache.phoenix.index.PhoenixIndexBuilder.getIndexMetaData(PhoenixIndexBuilder.java:73)
>     at org.apache.phoenix.hbase.index.builder.IndexBuildManager.getIndexMetaData(IndexBuildManager.java:84)
>     at org.apache.phoenix.hbase.index.IndexRegionObserver.getPhoenixIndexMetaData(IndexRegionObserver.java:594)
>     at org.apache.phoenix.hbase.index.IndexRegionObserver.preBatchMutateWithExceptions(IndexRegionObserver.java:646)
>     at org.apache.phoenix.hbase.index.IndexRegionObserver.preBatchMutate(IndexRegionObserver.java:334)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:1024)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1742)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1827)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1783)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:1020)
>     at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3425)
>     at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3163)
>     at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3105)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:944)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:872)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2472)
>     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36812)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2399)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>     at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:311)
>     at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:291)
> Caused by: java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached index metadata. key=-1180967500149768360 region=TEST,\x80\x965g\x80\x0F@\xAA\x80Y$\xEF,1578504217187.42467236e0b49fda05fdaaf69de98832.host=lhofhansl-wsl2,16201,1578702689999
>     at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:542)
>     at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>     at org.apache.phoenix.index.PhoenixIndexMetaDataBuilder.getIndexMetaDataCache(PhoenixIndexMetaDataBuilder.java:100)
>     ... 23 more
> on lhofhansl-wsl2,16201,1578702689999, tracking started Fri Jan 10 16:40:21 PST 2020; not retrying 10000 - final failure
> {code}
--
This message was sent by Atlassian Jira
(v8.3.4#803005)