[
https://issues.apache.org/jira/browse/PHOENIX-3928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16047149#comment-16047149
]
Maddineni Sukumar commented on PHOENIX-3928:
--------------------------------------------
Hi [~jamestaylor], with the above scenario I am able to reproduce
CommitException: Unable to update the following indexes:
[I_T000001], serverTimestamp=1497300602580.
I added an SQLException catch block, forced a cache update, and then retried the
same transaction. It still failed with the same error.
The reason is that the table's modified timestamp is the same before and after
deleting the index, so when we update the cache we reuse the client-side table
object because the timestamps match. The old object still has the index, so we
hit the same issue. When I instead deleted the table object from the metadata
cache in the catch block and then updated the cache, it worked fine.
Is this the correct approach, i.e. force-reloading the table from the metadata?
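The catch-invalidate-retry flow described above can be sketched roughly as
follows. Note that RetryOnce, SqlAction, and the invalidateCache hook are
illustrative names, not Phoenix APIs; in Phoenix the invalidation step would
correspond to evicting the stale client-side PTable before re-resolving it.

```java
import java.sql.SQLException;

// Minimal sketch: run a statement, and on SQLException invalidate the
// client-side metadata cache and retry exactly once.
public final class RetryOnce {

    @FunctionalInterface
    public interface SqlAction<T> {
        T run() throws SQLException;
    }

    public static <T> T execute(SqlAction<T> action, Runnable invalidateCache)
            throws SQLException {
        try {
            return action.run();
        } catch (SQLException first) {
            // Drop the stale cached table object so the second attempt
            // re-reads metadata instead of reusing the old copy (which,
            // per the comment above, still references the deleted index).
            invalidateCache.run();
            try {
                return action.run();
            } catch (SQLException second) {
                second.addSuppressed(first);
                throw second;
            }
        }
    }
}
```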
> Consider retrying once after any SQLException
> ---------------------------------------------
>
> Key: PHOENIX-3928
> URL: https://issues.apache.org/jira/browse/PHOENIX-3928
> Project: Phoenix
> Issue Type: Bug
> Reporter: James Taylor
> Assignee: Maddineni Sukumar
> Fix For: 4.12.0
>
>
> There are more cases in which a retry would execute successfully than just
> when a MetaDataEntityNotFoundException is thrown. For example, certain error
> cases that depend on the state of the metadata would work on retry if the
> metadata had changed.
> We may want to retry on any SQLException and simply loop through the tables
> involved (plan.getSourceRefs().iterator()), and if any meta data was updated,
> go ahead and retry once.
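The retry guard the issue proposes could look roughly like this. The table names
and the two timestamp maps are stand-ins for plan.getSourceRefs() and the table
timestamps Phoenix tracks; this is a sketch of the decision logic, not the
actual Phoenix implementation.

```java
import java.util.List;
import java.util.Map;

// Sketch: before retrying after a SQLException, loop over the tables the
// plan reads from and retry only if at least one of them has newer
// metadata on the server than in the client-side cache.
final class MetadataRetryGuard {

    static boolean shouldRetry(List<String> sourceTables,
                               Map<String, Long> cachedTimestamps,
                               Map<String, Long> serverTimestamps) {
        for (String table : sourceTables) {
            long cached = cachedTimestamps.getOrDefault(table, Long.MIN_VALUE);
            long server = serverTimestamps.getOrDefault(table, Long.MIN_VALUE);
            if (server > cached) {
                // Metadata changed since it was cached: a retry may succeed.
                return true;
            }
        }
        // Nothing changed, so a retry would just hit the same error.
        return false;
    }
}
```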
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)