[ 
https://issues.apache.org/jira/browse/PHOENIX-6975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17737331#comment-17737331
 ] 

Rushabh Shah commented on PHOENIX-6975:
---------------------------------------

Thinking about how to achieve this.

For the executeQuery() method, how will the client retry on 
StaleMetadataCacheException? 
An application using Phoenix will typically use a query structure similar to 
the following to query the Phoenix server.

 
{code:java}
String query = "SELECT * FROM " + tableNameStr;
// Execute query
try (ResultSet rs = conn.createStatement().executeQuery(query)) {
  while (rs.next()) { // This will throw StaleMetadataCacheException
    // Read from ResultSet
  }
}
{code}
 

In the above example, the Phoenix client will receive 
StaleMetadataCacheException while making rs.next() calls.
How will the client handle StaleMetadataCacheException and retry? 
Should the Phoenix client throw StaleMetadataCacheException all the way back to 
the application, or re-create the PhoenixStatement and PhoenixResultSet and 
retry transparently? 
What happens if the Phoenix client encounters StaleMetadataCacheException on 
the 4th or 5th rs.next() call? When the client retries, should it skip the 
results already returned and resume from where it left off, or retry from the 
beginning? 

Option 1: Throw StaleMetadataCacheException back to the application.
 # Intercept StaleMetadataCacheException in PhoenixResultSet#next and 
invalidate the client-side cache.
 # Throw StaleMetadataCacheException (a subclass of SQLException) back to the 
application. Currently 
[PhoenixResultSet#next|https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixResultSet.java#L873]
 throws SQLException.
 # Let the application decide whether or not to retry.

Pros:
 # Simple to implement.
 # The application is in control of whether to retry or not. If the application 
is going to retry, it can update its business logic to take appropriate actions 
on retry, such as resetting counters to avoid double counting.

Cons:
 # The application will encounter SQLException more frequently during the 
schema upgrade process. If an application is currently NOT retrying on 
SQLException, its failure rate will increase, and it will need some changes to 
handle StaleMetadataCacheException.
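A minimal, self-contained sketch of the Option 1 flow. This only illustrates the application-side retry pattern: the StaleMetadataCacheException class and runQuery() helper below are stand-ins for illustration, not real Phoenix APIs, and the stale/refreshed cache is simulated with a boolean.

{code:java}
import java.sql.SQLException;

public class Option1RetrySketch {

    // Stand-in for the proposed exception (a subclass of SQLException).
    static class StaleMetadataCacheException extends SQLException {
        StaleMetadataCacheException(String msg) { super(msg); }
    }

    // Simulates executing the query; fails while the client cache is stale.
    // In Phoenix the exception would surface from rs.next(), not from here.
    static int runQuery(boolean cacheIsStale) throws SQLException {
        if (cacheIsStale) {
            throw new StaleMetadataCacheException("client last ddl timestamp is stale");
        }
        return 3; // number of rows read
    }

    // Application-side retry loop: catch the exception, reset any business
    // state (counters etc.), and re-execute the query from the beginning.
    static int retryUntilSuccess() {
        boolean cacheIsStale = true; // the client cache starts out stale
        for (int attempt = 1; ; attempt++) {
            try {
                runQuery(cacheIsStale);
                return attempt;
            } catch (SQLException e) {
                if (!(e instanceof StaleMetadataCacheException) || attempt >= 3) {
                    throw new RuntimeException(e); // give up after a few retries
                }
                cacheIsStale = false; // cache was invalidated; next attempt succeeds
            }
        }
    }

    public static void main(String[] args) {
        System.out.println("succeeded on attempt " + retryUntilSuccess());
    }
}
{code}

The key point is that the retry boundary is the whole statement: the application re-creates the statement and result set itself, so it knows it is starting from row 1 again.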


Option 2: Handle the retry logic within the Phoenix client.
 # Intercept StaleMetadataCacheException in PhoenixResultSet#next and 
invalidate the client-side cache.
 # Reset the state in PhoenixResultSet and PhoenixStatement, particularly the 
QueryPlan object.
 # Re-run 
[PhoenixStatement#executeQuery|https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java#L2151]
 to generate the QueryPlan again.

Pros:
 # Completely transparent to the end application. It doesn't need to worry 
about StaleMetadataCacheException.
 # No other pros; refer to point #1.

Cons:
 # Trust issue: what happens if PhoenixResultSet#next fails while iterating the 
results? Say the application has read 4 values and the 5th next() call fails 
with StaleMetadataCacheException. If the Phoenix client retries transparently, 
it will start iterating the PhoenixResultSet from the beginning again. This can 
cause trust issues, since the first 4 rows will be re-processed.
 # The code will become too complex and very difficult to maintain. If we 
introduce new logic in the future, we will have to make sure it gets reset on 
StaleMetadataCacheException, which is cumbersome.
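The trust issue in con #1 can be made concrete with a small self-contained simulation (Scanner and scanWithTransparentRetry() below are illustrative stand-ins, not Phoenix classes): a scan over a 6-row table that hits a stale cache on the 5th next() call and is transparently restarted from the beginning delivers the first 4 rows twice.

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class Option2RetrySketch {

    static class StaleMetadataCacheException extends RuntimeException {}

    // Simulated scan: fails on the 5th next() while the client cache is
    // stale, succeeds end-to-end once the cache has been refreshed.
    static class Scanner {
        private final List<String> rows;
        private final boolean stale;
        private int pos = 0;
        Scanner(List<String> rows, boolean stale) { this.rows = rows; this.stale = stale; }
        String next() {
            if (stale && pos == 4) throw new StaleMetadataCacheException();
            return pos < rows.size() ? rows.get(pos++) : null;
        }
    }

    // Transparent client-side retry (Option 2): on a stale-cache error the
    // "client" invalidates its cache, re-plans the query, and restarts the
    // scan from the beginning -- so rows already handed to the application
    // are delivered again.
    static List<String> scanWithTransparentRetry(List<String> rows) {
        List<String> delivered = new ArrayList<>();
        Scanner scanner = new Scanner(rows, /* stale= */ true);
        while (true) {
            try {
                String r = scanner.next();
                if (r == null) break;
                delivered.add(r);
            } catch (StaleMetadataCacheException e) {
                // Reset QueryPlan / ResultSet state and retry from scratch.
                scanner = new Scanner(rows, /* stale= */ false);
            }
        }
        return delivered;
    }

    public static void main(String[] args) {
        List<String> rows = Arrays.asList("r1", "r2", "r3", "r4", "r5", "r6");
        // The application sees 10 rows for a 6-row table: r1..r4 twice.
        System.out.println(scanWithTransparentRetry(rows));
    }
}
{code}

To avoid the duplicates, the client would have to track how many rows it already delivered and skip that many on the retried scan, which is exactly the extra state that makes con #2 (maintainability) worse.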

 

Thoughts, [~stoty]  [~gjacoby] [~kadir]  [~jisaac] [~tkhurana] 

> Introduce StaleMetadataCache Exception
> --------------------------------------
>
>                 Key: PHOENIX-6975
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6975
>             Project: Phoenix
>          Issue Type: Sub-task
>          Components: core
>            Reporter: Rushabh Shah
>            Assignee: Rushabh Shah
>            Priority: Major
>
> Introduce StaleMetadataCache Exception if client provided last ddl timestamp 
> is less than server side last ddl timestamp and allow the client to retry the 
> statement.


