[
https://issues.apache.org/jira/browse/PHOENIX-7025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17773722#comment-17773722
]
ASF GitHub Bot commented on PHOENIX-7025:
-----------------------------------------
shahrs87 commented on code in PR #1666:
URL: https://github.com/apache/phoenix/pull/1666#discussion_r1351287070
##########
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/PhoenixRegionServerEndpoint.java:
##########
@@ -24,6 +24,7 @@
import java.io.IOException;
import java.util.Collections;
+import com.google.protobuf.Service;
Review Comment:
nit: extra import.
##########
phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java:
##########
@@ -317,16 +337,110 @@ protected QueryPlan optimizeQuery(CompilableStatement stmt) throws SQLException
protected PhoenixResultSet executeQuery(final CompilableStatement stmt,
final QueryLogger queryLogger)
throws SQLException {
- return executeQuery(stmt, true, queryLogger, false);
+ return executeQuery(stmt, true, queryLogger, false, this.validateLastDdlTimestamp);
}
protected PhoenixResultSet executeQuery(final CompilableStatement stmt,
final QueryLogger queryLogger, boolean noCommit)
throws SQLException {
- return executeQuery(stmt, true, queryLogger, noCommit);
+ return executeQuery(stmt, true, queryLogger, noCommit, this.validateLastDdlTimestamp);
+ }
+
+ private String getInfoString(TableRef tableRef) {
+ return String.format("Tenant: %s, Schema: %s, Table: %s",
+ this.connection.getTenantId(),
+ tableRef.getTable().getSchemaName(),
+ tableRef.getTable().getTableName());
+ }
+
+ private void setLastDDLTimestampRequestParameters(
+ RegionServerEndpointProtos.LastDDLTimestampRequest.Builder builder, PTable pTable) {
+ byte[] tenantIDBytes = this.connection.getTenantId() == null
+ ? HConstants.EMPTY_BYTE_ARRAY
+ : this.connection.getTenantId().getBytes();
+ byte[] schemaBytes = pTable.getSchemaName() == null
+ ? HConstants.EMPTY_BYTE_ARRAY
+ : pTable.getSchemaName().getBytes();
+ builder.setTenantId(ByteStringer.wrap(tenantIDBytes));
+ builder.setSchemaName(ByteStringer.wrap(schemaBytes));
+ builder.setTableName(ByteStringer.wrap(pTable.getTableName().getBytes()));
+ builder.setLastDDLTimestamp(pTable.getLastDDLTimestamp());
+ }
+ /**
+ * Build a request for the validateLastDDLTimestamp RPC.
+ * @param tableRef
+ * @return ValidateLastDDLTimestampRequest for the table in tableRef
+ */
+ private RegionServerEndpointProtos.ValidateLastDDLTimestampRequest
+ getValidateDDLTimestampRequest(TableRef tableRef) throws TableNotFoundException {
+ RegionServerEndpointProtos.ValidateLastDDLTimestampRequest.Builder requestBuilder
+ = RegionServerEndpointProtos.ValidateLastDDLTimestampRequest.newBuilder();
+ RegionServerEndpointProtos.LastDDLTimestampRequest.Builder innerBuilder
+ = RegionServerEndpointProtos.LastDDLTimestampRequest.newBuilder();
+
Review Comment:
nit: extra lines.
##########
phoenix-core/src/test/java/org/apache/phoenix/cache/ServerMetadataCacheTest.java:
##########
@@ -54,11 +65,17 @@
@Category(ParallelStatsDisabledIT.class)
public class ServerMetadataCacheTest extends ParallelStatsDisabledIT {
+
+ private final Random RANDOM = new Random(42);
+ private final long NEVER = (long) ConnectionProperty.UPDATE_CACHE_FREQUENCY.getValue("NEVER");
private static final Logger LOGGER = LoggerFactory.getLogger(ServerMetadataCacheTest.class);
@BeforeClass
public static synchronized void doSetup() throws Exception {
Map<String, String> props = Maps.newHashMapWithExpectedSize(1);
+ props.put(REGIONSERVER_COPROCESSOR_CONF_KEY,
Review Comment:
We are already adding the coproc in BaseTest [here](https://github.com/apache/phoenix/blob/PHOENIX-6883-feature/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java#L657).
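For context, a minimal sketch of that kind of registration (assuming the standard HBase `CoprocessorHost` key; the actual BaseTest code may differ):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.coprocessor.CoprocessorHost;

public class CoprocRegistrationSketch {
    // Hedged sketch: register the endpoint once in the shared test config,
    // so individual test classes do not need to set it again in doSetup().
    public static Configuration testConfig() {
        Configuration conf = HBaseConfiguration.create();
        conf.set(CoprocessorHost.REGIONSERVER_COPROCESSOR_CONF_KEY,
                "org.apache.phoenix.coprocessor.PhoenixRegionServerEndpoint");
        return conf;
    }
}
```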
##########
phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java:
##########
@@ -371,6 +486,14 @@ private PhoenixResultSet executeQuery(final CompilableStatement stmt,
plan = connection.getQueryServices().getOptimizer()
.optimize(PhoenixStatement.this, plan);
+ setLastQueryPlan(plan);
+
+ // verify metadata for the table/view/index in the query plan
+ // plan.getTableRef can be null in some cases like EXPLAIN <query>
+ if (validateLastDdlTimestamp && plan.getTableRef() != null) {
Review Comment:
Can we change either the name of the variable or the method name? It is very confusing. Let's change the variable name from validateLastDdlTimestamp to shouldValidateLastDdlTimestamp, or something better if you can come up with one.
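For illustration, a hypothetical sketch of the suggested rename (names only, behavior unchanged; the surrounding code mirrors the diff above):

```java
// Hypothetical rename: the field answers "should we validate?", so a
// predicate-style name keeps it distinct from the validateLastDDLTimestamp method.
private boolean shouldValidateLastDdlTimestamp;

protected PhoenixResultSet executeQuery(final CompilableStatement stmt,
        final QueryLogger queryLogger) throws SQLException {
    return executeQuery(stmt, true, queryLogger, false, this.shouldValidateLastDdlTimestamp);
}
```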
##########
phoenix-core/src/test/java/org/apache/phoenix/cache/ServerMetadataCacheTest.java:
##########
@@ -419,16 +425,296 @@ public void testInvalidateCacheForBaseTableWithUpdateIndexStatement() throws Exc
}
}
+ /**
+ * Client-1 creates a table, upserts data and alters the table.
+ * Client-2 queries the table before and after the alter.
+ * Check queries work successfully in both cases and verify number of addTable invocations.
+ */
+ @Test
+ public void testSelectQueryWithOldDDLTimestamp() throws SQLException {
+ Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+ String url1 = QueryUtil.getConnectionUrl(props, config, "client1");
+ String url2 = QueryUtil.getConnectionUrl(props, config, "client2");
+ String tableName = generateUniqueName();
+ ConnectionQueryServices spyCqs1 = Mockito.spy(driver.getConnectionQueryServices(url1, props));
+ ConnectionQueryServices spyCqs2 = Mockito.spy(driver.getConnectionQueryServices(url2, props));
+ int expectedNumCacheUpdates;
+
+ try (Connection conn1 = spyCqs1.connect(url1, props);
+ Connection conn2 = spyCqs2.connect(url2, props)) {
+
+ // create table with UCF=never and upsert data using client-1
+ createTable(conn1, tableName, NEVER);
+ upsert(conn1, tableName);
+
+ // select query from client-2 works to populate client side metadata cache
+ // there should be 1 update to the client cache
+ query(conn2, tableName);
+ expectedNumCacheUpdates = 1;
+ Mockito.verify(spyCqs2, Mockito.times(expectedNumCacheUpdates))
+ .addTable(any(PTable.class), anyLong());
+
+ // add column using client-1 to update last ddl timestamp
+ alterTableAddColumn(conn1, tableName, "newCol1");
+
+ // reset the spy CQSI object
+ Mockito.reset(spyCqs2);
+
+ // select query from client-2 with old ddl timestamp works
+ // there should be one update to the client cache
+ query(conn2, tableName);
+ expectedNumCacheUpdates = 1;
+ Mockito.verify(spyCqs2, Mockito.times(expectedNumCacheUpdates))
+ .addTable(any(PTable.class), anyLong());
+
+ // select query from client-2 with latest ddl timestamp works
+ // there should be no more updates to client cache
+ query(conn2, tableName);
+ Mockito.verify(spyCqs2, Mockito.times(expectedNumCacheUpdates))
+ .addTable(any(PTable.class), anyLong());
+ }
+ }
+
+ /**
+ * Test DDL timestamp validation retry logic in case of any exception
+ * from Server other than StaleMetadataCacheException.
+ */
+ @Test
+ public void testSelectQueryServerSideExceptionInValidation() throws Exception {
+ Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+ String url1 = QueryUtil.getConnectionUrl(props, config, "client1");
+ String url2 = QueryUtil.getConnectionUrl(props, config, "client2");
+ String tableName = generateUniqueName();
+ ConnectionQueryServices spyCqs1 = Mockito.spy(driver.getConnectionQueryServices(url1, props));
+ ConnectionQueryServices spyCqs2 = Mockito.spy(driver.getConnectionQueryServices(url2, props));
+ ServerMetadataCache cache = null;
+
+ try (Connection conn1 = spyCqs1.connect(url1, props);
+ Connection conn2 = spyCqs2.connect(url2, props)) {
+
+ // create table and upsert using client-1
+ createTable(conn1, tableName, NEVER);
+ upsert(conn1, tableName);
+
+ // Instrument ServerMetadataCache to throw a SQLException once
+ cache = ServerMetadataCache.getInstance(config);
+ ServerMetadataCache spyCache = Mockito.spy(cache);
+ Mockito.doThrow(new SQLException("FAIL")).doCallRealMethod().when(spyCache)
+ .getLastDDLTimestampForTable(any(), any(), eq(Bytes.toBytes(tableName)));
+ ServerMetadataCache.setInstance(spyCache);
+
+ // query using client-2 should succeed
+ query(conn2, tableName);
+
+ // verify live region servers were refreshed
+ Mockito.verify(spyCqs2, Mockito.times(1)).refreshLiveRegionServers();
+ }
+ }
+
+ /**
+ * Test Select query with old ddl timestamp and ddl timestamp validation encounters an exception.
+ */
+ @Test
+ public void testSelectQueryWithOldDDLTimestampWithExceptionRetry() throws Exception {
+ Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+ String url1 = QueryUtil.getConnectionUrl(props, config, "client1");
+ String url2 = QueryUtil.getConnectionUrl(props, config, "client2");
+ String tableName = generateUniqueName();
+ ConnectionQueryServices spyCqs1 = Mockito.spy(driver.getConnectionQueryServices(url1, props));
+ ConnectionQueryServices spyCqs2 = Mockito.spy(driver.getConnectionQueryServices(url2, props));
+ int expectedNumCacheUpdates;
+ ServerMetadataCache cache = null;
+
+ try (Connection conn1 = spyCqs1.connect(url1, props);
+ Connection conn2 = spyCqs2.connect(url2, props)) {
+
+ // create table and upsert using client-1
+ createTable(conn1, tableName, NEVER);
+ upsert(conn1, tableName);
+
+ // query using client-2 to populate cache
+ query(conn2, tableName);
+ expectedNumCacheUpdates = 1;
+ Mockito.verify(spyCqs2, Mockito.times(expectedNumCacheUpdates))
+ .addTable(any(PTable.class), anyLong());
+
+ // add column using client-1 to update last ddl timestamp
+ alterTableAddColumn(conn1, tableName, "newCol1");
+
+ // reset the spy CQSI object
+ Mockito.reset(spyCqs2);
+
+ // Instrument ServerMetadataCache to throw a SQLException once
+ cache = ServerMetadataCache.getInstance(config);
+ ServerMetadataCache spyCache = Mockito.spy(cache);
+ Mockito.doThrow(new SQLException("FAIL")).doCallRealMethod().when(spyCache)
Review Comment:
Here we are testing 2 things:
1. ServerMetadataCache throws a SQLException the first time and succeeds the second time.
2. On the second attempt, ServerMetadataCache throws StaleMetadataCacheException since it is using the old PTable object, and the retry within PhoenixStatement#executeQuery kicks in and refreshes the client metadata cache.
Is that correct? If yes, please update the javadoc.
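For readers unfamiliar with the chained stubbing used in this test, a standalone Mockito sketch (hypothetical example, unrelated to Phoenix) showing that `doThrow(...).doCallRealMethod()` throws on the first call only and runs the real method afterwards:

```java
import static org.mockito.Mockito.doThrow;
import static org.mockito.Mockito.spy;

import java.util.ArrayList;
import java.util.List;

public class StubbingChainDemo {
    public static void main(String[] args) {
        List<String> list = spy(new ArrayList<>());
        // First call throws; second and later calls run the real method.
        doThrow(new IllegalStateException("FAIL")).doCallRealMethod().when(list).size();
        try {
            list.size();
        } catch (IllegalStateException e) {
            System.out.println("first call threw: " + e.getMessage());
        }
        System.out.println("second call returned: " + list.size()); // prints 0
    }
}
```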
##########
phoenix-core/src/test/java/org/apache/phoenix/cache/ServerMetadataCacheTest.java:
##########
@@ -419,16 +425,296 @@ public void testInvalidateCacheForBaseTableWithUpdateIndexStatement() throws Exc
}
}
+ /**
+ * Client-1 creates a table, upserts data and alters the table.
+ * Client-2 queries the table before and after the alter.
+ * Check queries work successfully in both cases and verify number of addTable invocations.
+ */
+ @Test
+ public void testSelectQueryWithOldDDLTimestamp() throws SQLException {
+ Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+ String url1 = QueryUtil.getConnectionUrl(props, config, "client1");
+ String url2 = QueryUtil.getConnectionUrl(props, config, "client2");
+ String tableName = generateUniqueName();
+ ConnectionQueryServices spyCqs1 = Mockito.spy(driver.getConnectionQueryServices(url1, props));
+ ConnectionQueryServices spyCqs2 = Mockito.spy(driver.getConnectionQueryServices(url2, props));
+ int expectedNumCacheUpdates;
+
+ try (Connection conn1 = spyCqs1.connect(url1, props);
+ Connection conn2 = spyCqs2.connect(url2, props)) {
+
+ // create table with UCF=never and upsert data using client-1
+ createTable(conn1, tableName, NEVER);
+ upsert(conn1, tableName);
+
+ // select query from client-2 works to populate client side metadata cache
+ // there should be 1 update to the client cache
+ query(conn2, tableName);
+ expectedNumCacheUpdates = 1;
+ Mockito.verify(spyCqs2, Mockito.times(expectedNumCacheUpdates))
+ .addTable(any(PTable.class), anyLong());
+
+ // add column using client-1 to update last ddl timestamp
+ alterTableAddColumn(conn1, tableName, "newCol1");
+
+ // reset the spy CQSI object
+ Mockito.reset(spyCqs2);
+
+ // select query from client-2 with old ddl timestamp works
+ // there should be one update to the client cache
+ query(conn2, tableName);
+ expectedNumCacheUpdates = 1;
+ Mockito.verify(spyCqs2, Mockito.times(expectedNumCacheUpdates))
+ .addTable(any(PTable.class), anyLong());
+
+ // select query from client-2 with latest ddl timestamp works
+ // there should be no more updates to client cache
+ query(conn2, tableName);
+ Mockito.verify(spyCqs2, Mockito.times(expectedNumCacheUpdates))
+ .addTable(any(PTable.class), anyLong());
+ }
+ }
+
+ /**
+ * Test DDL timestamp validation retry logic in case of any exception
+ * from Server other than StaleMetadataCacheException.
+ */
+ @Test
+ public void testSelectQueryServerSideExceptionInValidation() throws Exception {
+ Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+ String url1 = QueryUtil.getConnectionUrl(props, config, "client1");
+ String url2 = QueryUtil.getConnectionUrl(props, config, "client2");
+ String tableName = generateUniqueName();
+ ConnectionQueryServices spyCqs1 = Mockito.spy(driver.getConnectionQueryServices(url1, props));
+ ConnectionQueryServices spyCqs2 = Mockito.spy(driver.getConnectionQueryServices(url2, props));
+ ServerMetadataCache cache = null;
+
+ try (Connection conn1 = spyCqs1.connect(url1, props);
+ Connection conn2 = spyCqs2.connect(url2, props)) {
+
+ // create table and upsert using client-1
+ createTable(conn1, tableName, NEVER);
+ upsert(conn1, tableName);
+
+ // Instrument ServerMetadataCache to throw a SQLException once
+ cache = ServerMetadataCache.getInstance(config);
+ ServerMetadataCache spyCache = Mockito.spy(cache);
+ Mockito.doThrow(new SQLException("FAIL")).doCallRealMethod().when(spyCache)
+ .getLastDDLTimestampForTable(any(), any(), eq(Bytes.toBytes(tableName)));
+ ServerMetadataCache.setInstance(spyCache);
+
+ // query using client-2 should succeed
+ query(conn2, tableName);
+
+ // verify live region servers were refreshed
+ Mockito.verify(spyCqs2, Mockito.times(1)).refreshLiveRegionServers();
+ }
+ }
+
+ /**
+ * Test Select query with old ddl timestamp and ddl timestamp validation encounters an exception.
+ */
+ @Test
+ public void testSelectQueryWithOldDDLTimestampWithExceptionRetry() throws Exception {
+ Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+ String url1 = QueryUtil.getConnectionUrl(props, config, "client1");
+ String url2 = QueryUtil.getConnectionUrl(props, config, "client2");
+ String tableName = generateUniqueName();
+ ConnectionQueryServices spyCqs1 = Mockito.spy(driver.getConnectionQueryServices(url1, props));
+ ConnectionQueryServices spyCqs2 = Mockito.spy(driver.getConnectionQueryServices(url2, props));
+ int expectedNumCacheUpdates;
+ ServerMetadataCache cache = null;
+
+ try (Connection conn1 = spyCqs1.connect(url1, props);
+ Connection conn2 = spyCqs2.connect(url2, props)) {
+
+ // create table and upsert using client-1
+ createTable(conn1, tableName, NEVER);
+ upsert(conn1, tableName);
+
+ // query using client-2 to populate cache
+ query(conn2, tableName);
+ expectedNumCacheUpdates = 1;
+ Mockito.verify(spyCqs2, Mockito.times(expectedNumCacheUpdates))
+ .addTable(any(PTable.class), anyLong());
+
+ // add column using client-1 to update last ddl timestamp
+ alterTableAddColumn(conn1, tableName, "newCol1");
+
+ // reset the spy CQSI object
+ Mockito.reset(spyCqs2);
+
+ // Instrument ServerMetadataCache to throw a SQLException once
+ cache = ServerMetadataCache.getInstance(config);
+ ServerMetadataCache spyCache = Mockito.spy(cache);
+ Mockito.doThrow(new SQLException("FAIL")).doCallRealMethod().when(spyCache)
+ .getLastDDLTimestampForTable(any(), any(), eq(Bytes.toBytes(tableName)));
+ ServerMetadataCache.setInstance(spyCache);
+
+ // query using client-2 should succeed, one cache update
+ query(conn2, tableName);
+ expectedNumCacheUpdates = 1;
+ Mockito.verify(spyCqs2, Mockito.times(expectedNumCacheUpdates))
+ .addTable(any(PTable.class), anyLong());
+
+ // verify live region servers were refreshed
+ Mockito.verify(spyCqs2, Mockito.times(1)).refreshLiveRegionServers();
+ }
+ }
+
+ /**
+ * Test Select Query fails in case DDL timestamp validation throws SQLException twice.
+ */
+ @Test
+ public void testSelectQueryFails() throws Exception {
+ Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+ String url1 = QueryUtil.getConnectionUrl(props, config, "client1");
+ String url2 = QueryUtil.getConnectionUrl(props, config, "client2");
+ String tableName = generateUniqueName();
+ ConnectionQueryServices spyCqs1 = Mockito.spy(driver.getConnectionQueryServices(url1, props));
+ ConnectionQueryServices spyCqs2 = Mockito.spy(driver.getConnectionQueryServices(url2, props));
+ ServerMetadataCache cache = null;
+
+ try (Connection conn1 = spyCqs1.connect(url1, props);
+ Connection conn2 = spyCqs2.connect(url2, props)) {
+
+ // create table and upsert using client-1
+ createTable(conn1, tableName, NEVER);
+ upsert(conn1, tableName);
+
+ // Instrument ServerMetadataCache to throw a SQLException twice
+ cache = ServerMetadataCache.getInstance(config);
+ ServerMetadataCache spyCache = Mockito.spy(cache);
+ SQLException e = new SQLException("FAIL");
+ Mockito.doThrow(e).doThrow(e).when(spyCache)
Review Comment:
Do we need `doThrow` twice? If you just declare it once, it will always throw the exception.
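To back up that point: Mockito replays the last stubbed action for every subsequent call, so a single `doThrow` keeps throwing forever. A standalone sketch (hypothetical example, unrelated to Phoenix):

```java
import static org.mockito.Mockito.doThrow;
import static org.mockito.Mockito.spy;

import java.util.ArrayList;
import java.util.List;

public class RepeatedThrowDemo {
    public static void main(String[] args) {
        List<String> list = spy(new ArrayList<>());
        // Stubbed once, but the exception is thrown on every call.
        doThrow(new IllegalStateException("FAIL")).when(list).clear();
        for (int i = 1; i <= 3; i++) {
            try {
                list.clear();
            } catch (IllegalStateException e) {
                System.out.println("call " + i + " threw: " + e.getMessage());
            }
        }
    }
}
```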
##########
phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java:
##########
@@ -317,16 +337,110 @@ protected QueryPlan optimizeQuery(CompilableStatement stmt) throws SQLException
protected PhoenixResultSet executeQuery(final CompilableStatement stmt,
final QueryLogger queryLogger)
throws SQLException {
- return executeQuery(stmt, true, queryLogger, false);
+ return executeQuery(stmt, true, queryLogger, false, this.validateLastDdlTimestamp);
}
protected PhoenixResultSet executeQuery(final CompilableStatement stmt,
final QueryLogger queryLogger, boolean noCommit)
throws SQLException {
- return executeQuery(stmt, true, queryLogger, noCommit);
+ return executeQuery(stmt, true, queryLogger, noCommit, this.validateLastDdlTimestamp);
+ }
+
+ private String getInfoString(TableRef tableRef) {
+ return String.format("Tenant: %s, Schema: %s, Table: %s",
+ this.connection.getTenantId(),
+ tableRef.getTable().getSchemaName(),
+ tableRef.getTable().getTableName());
+ }
+
+ private void setLastDDLTimestampRequestParameters(
+ RegionServerEndpointProtos.LastDDLTimestampRequest.Builder builder, PTable pTable) {
+ byte[] tenantIDBytes = this.connection.getTenantId() == null
+ ? HConstants.EMPTY_BYTE_ARRAY
+ : this.connection.getTenantId().getBytes();
+ byte[] schemaBytes = pTable.getSchemaName() == null
+ ? HConstants.EMPTY_BYTE_ARRAY
+ : pTable.getSchemaName().getBytes();
+ builder.setTenantId(ByteStringer.wrap(tenantIDBytes));
+ builder.setSchemaName(ByteStringer.wrap(schemaBytes));
+ builder.setTableName(ByteStringer.wrap(pTable.getTableName().getBytes()));
+ builder.setLastDDLTimestamp(pTable.getLastDDLTimestamp());
+ }
+ /**
+ * Build a request for the validateLastDDLTimestamp RPC.
+ * @param tableRef
+ * @return ValidateLastDDLTimestampRequest for the table in tableRef
+ */
+ private RegionServerEndpointProtos.ValidateLastDDLTimestampRequest
+ getValidateDDLTimestampRequest(TableRef tableRef) throws TableNotFoundException {
+ RegionServerEndpointProtos.ValidateLastDDLTimestampRequest.Builder requestBuilder
+ = RegionServerEndpointProtos.ValidateLastDDLTimestampRequest.newBuilder();
+ RegionServerEndpointProtos.LastDDLTimestampRequest.Builder innerBuilder
+ = RegionServerEndpointProtos.LastDDLTimestampRequest.newBuilder();
+
+
+ setLastDDLTimestampRequestParameters(innerBuilder, tableRef.getTable());
+ requestBuilder.addLastDDLTimestampRequests(innerBuilder);
+
+ /*
+ when querying a view, we need to validate last ddl timestamps for all its ancestors.
+ */
+ if (tableRef.getTable().getType().equals(PTableType.VIEW)) {
+ PTable pTable = tableRef.getTable();
+ while (pTable.getParentName() != null) {
+ PTable parentTable = this.connection.getTable(new PTableKey(this.connection.getTenantId(), pTable.getParentName().getString()));
+ innerBuilder = RegionServerEndpointProtos.LastDDLTimestampRequest.newBuilder();
+ setLastDDLTimestampRequestParameters(innerBuilder, parentTable);
+ requestBuilder.addLastDDLTimestampRequests(innerBuilder);
+ pTable = parentTable;
+ }
+ }
+ return requestBuilder.build();
+ }
+
+ /**
+ * Verifies that table metadata in client cache is up-to-date with server.
+ * A random live region server is picked for invoking the RPC to validate LastDDLTimestamp.
+ * Retry once if there was an error performing the RPC, otherwise throw the Exception.
+ * @param tableRef
+ * @throws SQLException
+ */
+ private void validateLastDDLTimestamp(TableRef tableRef, boolean doRetry) throws SQLException {
+
+ String infoString = getInfoString(tableRef);
+ try (Admin admin = this.connection.getQueryServices().getAdmin()) {
+ // get all live region servers
+ List<ServerName> regionServers = this.connection.getQueryServices().getLiveRegionServers();
+ // pick one at random
+ ServerName regionServer = regionServers.get(ThreadLocalRandom.current().nextInt(regionServers.size()));
+
+ LOGGER.debug("Sending DDL timestamp validation request for {} to
regionserver {}",
+ infoString, regionServer);
+
+ // RPC
+ CoprocessorRpcChannel channel = admin.coprocessorService(regionServer);
+ PhoenixRegionServerEndpoint.BlockingInterface service
+ = PhoenixRegionServerEndpoint.newBlockingStub(channel);
+ service.validateLastDDLTimestamp(null, getValidateDDLTimestampRequest(tableRef));
+ } catch (Exception e) {
+ SQLException parsedException = ServerUtil.parseServerException(e);
+ if (parsedException instanceof StaleMetadataCacheException) {
+ throw parsedException;
+ }
+ // retry once for any exceptions other than StaleMetadataCacheException
+ LOGGER.error("Error in validating DDL timestamp for {}",
infoString, parsedException);
+ if (doRetry) {
+ // update the list of live region servers
+ this.connection.getQueryServices().refreshLiveRegionServers();
+ validateLastDDLTimestamp(tableRef, false);
+ return;
+ }
+ throw parsedException;
+ }
+ // do nothing if the validation succeeded.
Review Comment:
Is this comment really needed? We already _did_ validate the last ddl timestamp. :)
> Create a new RPC to validate last ddl timestamp for read requests.
> ------------------------------------------------------------------
>
> Key: PHOENIX-7025
> URL: https://issues.apache.org/jira/browse/PHOENIX-7025
> Project: Phoenix
> Issue Type: Sub-task
> Reporter: Rushabh Shah
> Assignee: Palash Chauhan
> Priority: Major
>
> Introduce a new RPC request from the phoenix client to any region server via
> PhoenixRegionServerEndpoint#validateLastDDLTimestamp. Since the last ddl
> timestamp cache is maintained by all the regionservers, the client can choose
> any regionserver at random. In the future, we can make this rpc more resilient
> by sending it to multiple regionservers simultaneously.
> If the phoenix client throws a StaleMetadataCacheException, invalidate the
> cache on the client side and retry the executeQuery method while fetching the
> updated metadata from the SYSCAT regionserver.
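A hedged sketch of the client-side flow the description implies (all helper names below are hypothetical; the real logic lives in PhoenixStatement#executeQuery and the diff above):

```java
// Hypothetical control flow, not the actual Phoenix implementation.
private PhoenixResultSet executeQueryWithValidation(CompilableStatement stmt)
        throws SQLException {
    try {
        // New RPC: any regionserver can answer, since each maintains the cache.
        validateLastDdlTimestampOnRandomRegionServer(stmt);
    } catch (StaleMetadataCacheException e) {
        // Client cache is stale: drop it, refetch metadata from the SYSCAT
        // regionserver, then fall through and retry the query.
        invalidateClientMetadataCache(stmt);
        refreshMetadataFromSyscat(stmt);
    }
    return runQuery(stmt);
}
```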
--
This message was sent by Atlassian Jira
(v8.20.10#820010)