[
https://issues.apache.org/jira/browse/PHOENIX-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16879452#comment-16879452
]
Thomas D'Silva commented on PHOENIX-5170:
-----------------------------------------
[~gabry] Are you saying that after the index is dropped and you write to the
data table, you still see those exceptions saying it's trying to write to the
index table?
Can you please include a test to repro the issue?
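For example, something along the lines of the following minimal sketch would
help (the JDBC URL and the table/column names here are just placeholders, not
an existing Phoenix IT):
{code:java}
// Hypothetical repro sketch: "writer" plays the Flume client with a warm
// metadata cache, while "ddl" drops the index behind its back.
import java.sql.Connection;
import java.sql.DriverManager;

public class DropIndexRepro {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:phoenix:localhost"; // placeholder cluster
        try (Connection writer = DriverManager.getConnection(url);
             Connection ddl = DriverManager.getConnection(url)) {
            ddl.createStatement().execute(
                "CREATE TABLE IF NOT EXISTS TABLE_ABC (ID BIGINT PRIMARY KEY, V VARCHAR)");
            ddl.createStatement().execute(
                "CREATE INDEX IF NOT EXISTS IDX_ABC ON TABLE_ABC (V)");

            // Warm the writer's metadata cache while the index still exists.
            writer.createStatement().execute("UPSERT INTO TABLE_ABC VALUES (1, 'a')");
            writer.commit();

            // Drop the index from the other connection.
            ddl.createStatement().execute("DROP INDEX IDX_ABC ON TABLE_ABC");

            // If the drop does not bump the parent table's timestamp, this
            // write should keep failing with the XCL21 error from the logs.
            writer.createStatement().execute("UPSERT INTO TABLE_ABC VALUES (2, 'b')");
            writer.commit();
        }
    }
}
{code}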
> Update meta timestamp of parent table when dropping index
> ---------------------------------------------------------
>
> Key: PHOENIX-5170
> URL: https://issues.apache.org/jira/browse/PHOENIX-5170
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 4.14.0
> Reporter: gabry
> Priority: Major
> Labels: phoenix
> Fix For: 5.1.0
>
> Attachments: updateParentTableMetaWhenDroppingIndex.patch
>
>
> I have a Flume client that upserts values into a Phoenix table with an index
> named idx_abc.
> When idx_abc is dropped, Flume logs the following WARN message forever:
> 28 Feb 2019 10:25:55,774 WARN [hconnection-0x6fb2e162-shared--pool1-t883] (org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.logNoResubmit:1263) - #1, table=PHOENIX:TABLE_ABC, attempt=1/3 failed=6ops, last exception: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 1121 (XCL21): Write to the index failed. disableIndexOnFailure=true, Failed to write to multiple index tables: [PHOENIX:IDX_ABC] ,serverTimestamp=1551320754540,
> at org.apache.phoenix.util.ServerUtil.wrapInDoNotRetryIOException(ServerUtil.java:265)
> at org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:163)
> at org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:161)
> at org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:145)
> at org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:623)
> at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:583)
> at org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:566)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$37.call(RegionCoprocessorHost.java:1034)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1705)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1030)
> at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3394)
> at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2944)
> at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2886)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:753)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:715)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2129)
> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
> Caused by: java.sql.SQLException: ERROR 1121 (XCL21): Write to the index failed. disableIndexOnFailure=true, Failed to write to multiple index tables: [PHOENIX:IDX_ABC]
> at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
> at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
> at org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:162)
> ... 21 more
> Caused by: org.apache.phoenix.hbase.index.exception.MultiIndexWriteFailureException: disableIndexOnFailure=true, Failed to write to multiple index tables: [PHOENIX:IDX_ABC]
> at org.apache.phoenix.hbase.index.write.TrackingParallelWriterIndexCommitter.write(TrackingParallelWriterIndexCommitter.java:236)
> at org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.java:195)
> at org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:156)
> ... 20 more
> on bigdata.om,60020,1551245714859, tracking started Thu Feb 28 10:25:55 CST 2019; not retrying 6 - final failure
> 28 Feb 2019 10:25:55,774 INFO [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.phoenix.index.PhoenixIndexFailurePolicy.updateIndex:502) - Disabling index after hitting max number of index write retries: PHOENIX:IDX_ABC
> 28 Feb 2019 10:25:55,776 WARN [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleExceptionFromClient:421) - Error while trying to handle index write exception
> org.apache.phoenix.hbase.index.exception.MultiIndexWriteFailureException: disableIndexOnFailure=true, Failed to write to multiple index tables: [PHOENIX:IDX_ABC]
> at org.apache.phoenix.index.PhoenixIndexFailurePolicy.getIndexWriteException(PhoenixIndexFailurePolicy.java:492)
> at org.apache.phoenix.execute.MutationState.send(MutationState.java:1161)
> at org.apache.phoenix.execute.MutationState.send(MutationState.java:1517)
> at org.apache.phoenix.execute.MutationState.commit(MutationState.java:1340)
> at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:670)
> at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:666)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:666)
> at com.bigdata.flume.serializer.phoenix.PhoenixJsonEventSerializer.upsertEvents(PhoenixJsonEventSerlizer.java:262)
> at org.apache.phoenix.flume.sink.PhoenixSink.process(PhoenixSink.java:176)
> at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67)
> at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
> at java.lang.Thread.run(Thread.java:748)
> This is because the meta timestamp of the parent table is not updated when
> the index is dropped.
> So I created a patch for it (attached).
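> As a toy illustration of the mechanism (all class and method names below are
> invented for illustration; this is not actual Phoenix internals): the client
> keeps using its cached table metadata as long as the server-side timestamp
> has not advanced, so a DROP INDEX that leaves the parent table's timestamp
> untouched never invalidates the cached copy that still lists idx_abc.
> {code:java}
> // Toy model of the stale-cache mechanism; the names are invented and
> // are NOT Phoenix internals.
> import java.util.Collections;
> import java.util.List;
>
> public class StaleCacheModel {
>     static final class TableMeta {
>         final long timestamp;
>         final List<String> indexes;
>         TableMeta(long timestamp, List<String> indexes) {
>             this.timestamp = timestamp;
>             this.indexes = indexes;
>         }
>     }
>
>     // The client refreshes only when the server copy is strictly newer.
>     static TableMeta resolve(TableMeta cached, TableMeta server) {
>         return server.timestamp > cached.timestamp ? server : cached;
>     }
>
>     public static void main(String[] args) {
>         // Cached by the writer while IDX_ABC still existed.
>         TableMeta cached = new TableMeta(1000L, Collections.singletonList("IDX_ABC"));
>
>         // Bug: DROP INDEX removes the index but leaves the parent's
>         // timestamp at 1000, so resolve() keeps the stale copy and the
>         // client still maintains IDX_ABC on every write.
>         TableMeta buggy = new TableMeta(1000L, Collections.<String>emptyList());
>         System.out.println("buggy -> still writes to " + resolve(cached, buggy).indexes);
>
>         // Patched: DROP INDEX also bumps the parent's timestamp, so the
>         // client picks up the index-free metadata on the next write.
>         TableMeta fixed = new TableMeta(2000L, Collections.<String>emptyList());
>         System.out.println("fixed -> writes to " + resolve(cached, fixed).indexes);
>     }
> }
> {code}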
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)