[jira] [Commented] (PHOENIX-6968) Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache.
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801070#comment-17801070 ]

ASF GitHub Bot commented on PHOENIX-6968:
-----------------------------------------

shahrs87 commented on code in PR #1691:
URL: https://github.com/apache/phoenix/pull/1691#discussion_r1437921306

## phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java:
@@ -3416,6 +3437,167 @@ private MetaDataMutationResult mutateColumn(
         }
     }

+    /**
+     * Invalidate metadata cache on all region servers for the given tenant and table name.
+     * @param tenantId
+     * @param schemaName
+     * @param tableOrViewName
+     * @throws Throwable
+     */
+    private void invalidateServerMetadataCache(byte[] tenantId, byte[] schemaName,
+            byte[] tableOrViewName) throws Throwable {
+        Configuration conf = env.getConfiguration();
+        String value = conf.get(REGIONSERVER_COPROCESSOR_CONF_KEY);
+        if (value == null
+                || !value.contains(PhoenixRegionServerEndpoint.class.getName())) {
+            // PhoenixRegionServerEndpoint is not loaded. We don't have to invalidate the cache.
+            LOGGER.info("Skip invalidating server metadata cache for tenantID: {},"
+                    + " schema name: {}, table name: {} since PhoenixRegionServerEndpoint"
+                    + " is not loaded", Bytes.toString(tenantId),
+                    Bytes.toString(schemaName), Bytes.toString(tableOrViewName));
+            return;
+        }
+        Properties properties = new Properties();
+        // Skip the system table existence check since the system tables should have been
+        // created by now.
+        properties.setProperty(SKIP_SYSTEM_TABLES_EXISTENCE_CHECK, "true");
+        try (PhoenixConnection connection = QueryUtil.getConnectionOnServer(properties,
+                env.getConfiguration()).unwrap(PhoenixConnection.class);
+                Admin admin = connection.getQueryServices().getAdmin()) {
+            // This incurs an extra RPC to the master. The RPC is required since we want
+            // the current list of region servers.
+            Collection<ServerName> serverNames = admin.getRegionServers(true);
+            invalidateServerMetadataCacheWithRetries(admin, serverNames, tenantId, schemaName,
+                    tableOrViewName, false);
+        }
+    }
+
+    /**
+     * Invalidate metadata cache on all region servers for the given tenantID and
+     * tableName, with retries. We retry once before failing the operation.
+     * @param admin
+     * @param serverNames
+     * @param tenantId
+     * @param schemaName
+     * @param tableOrViewName
+     * @param isRetry
+     * @throws Throwable
+     */
+    private void invalidateServerMetadataCacheWithRetries(Admin admin,
+            Collection<ServerName> serverNames, byte[] tenantId, byte[] schemaName,
+            byte[] tableOrViewName, boolean isRetry) throws Throwable {
+        String fullTableName = SchemaUtil.getTableName(schemaName, tableOrViewName);
+        String tenantIDStr = Bytes.toString(tenantId);
+        LOGGER.info("Invalidating metadata cache for tenantID: {}, tableName: {} for"
+                + " region servers: {}, isRetry: {}", tenantIDStr, fullTableName,
+                serverNames, isRetry);
+        RegionServerEndpointProtos.InvalidateServerMetadataCacheRequest request =
+                getRequest(tenantId, schemaName, tableOrViewName);
+        // TODO Do I need my own executor or can I re-use QueryServices#Executor
+        // since it is supposed to be used only for scans according to documentation?
+        List<CompletableFuture<Void>> futures = new ArrayList<>();
+        Map map = new HashMap<>();
+        for (ServerName serverName : serverNames) {
+            CompletableFuture<Void> future = CompletableFuture.runAsync(() -> {
+                try {
+                    PhoenixStopWatch innerWatch = new PhoenixStopWatch().start();
+                    // TODO Using the same as ServerCacheClient, but need to think whether we
+                    // need a special controller for invalidating the cache since this is in
+                    // the path of DDL operations. We also need to think whether we need
+                    // separate RPC handler threads for this.
+                    ServerRpcController controller = new ServerRpcController();
+                    RegionServerEndpointProtos.RegionServerEndpointService.BlockingInterface
+                            service = RegionServerEndpointProtos.RegionServerEndpointService
+                                    .newBlockingStub(admin.coprocessorService(serverName));
+                    LOGGER.info("Sending invalidate metadata cache for tenantID: {}, tableName: {}"
+                            + " to region server: {}", tenantIDStr, fullTableName, serverName);
+
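The truncated hunk above fans each invalidate RPC out on its own CompletableFuture and, per the javadoc, retries once before failing the DDL. Stripped of the HBase/Phoenix types, the fan-out-then-retry-once shape can be sketched as follows; the class and method names here are illustrative, not part of the patch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Consumer;

public class FanOutInvalidator {

    // Invoke 'invalidate' once per server in parallel; return the servers that failed.
    static List<String> invalidateAll(List<String> servers, Consumer<String> invalidate) {
        ConcurrentLinkedQueue<String> failed = new ConcurrentLinkedQueue<>();
        List<CompletableFuture<Void>> futures = new ArrayList<>();
        for (String server : servers) {
            futures.add(CompletableFuture.runAsync(() -> {
                try {
                    invalidate.accept(server);
                } catch (RuntimeException e) {
                    failed.add(server);
                }
            }));
        }
        // Wait for every RPC to finish before deciding whether to retry.
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
        return new ArrayList<>(failed);
    }

    // Retry once against only the servers that failed, mirroring the
    // single-retry policy described in the javadoc above.
    static void invalidateWithOneRetry(List<String> servers, Consumer<String> invalidate) {
        List<String> failed = invalidateAll(servers, invalidate);
        if (!failed.isEmpty()) {
            List<String> stillFailing = invalidateAll(failed, invalidate);
            if (!stillFailing.isEmpty()) {
                throw new IllegalStateException(
                        "Cache invalidation failed for: " + stillFailing);
            }
        }
    }
}
```

Retrying only the failed subset (rather than the whole server list) keeps the retry cheap; the real patch may differ in how it tracks failures.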
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801066#comment-17801066 ]

ASF GitHub Bot commented on PHOENIX-6968:
-----------------------------------------

shahrs87 commented on code in PR #1691:
URL: https://github.com/apache/phoenix/pull/1691#discussion_r1437917968

## phoenix-core/src/main/protobuf/RegionServerEndpointService.proto:
@@ -36,7 +36,24 @@ message LastDDLTimestampRequest {
     required int64 lastDDLTimestamp = 4;
 }

-service RegionServerCoprocService {
+message InvalidateServerMetadataCache {

Review Comment:
   We haven't deployed the previous changes anywhere. They were still on the dev branch, so it was not important to handle backwards compatibility. But I am not sure how we would handle backwards compatibility once a release has been cut.

> Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache.
> ------------------------------------------------------------------------------
>
>            Key: PHOENIX-6968
>            URL: https://issues.apache.org/jira/browse/PHOENIX-6968
>        Project: Phoenix
>     Issue Type: Sub-task
>       Reporter: Rushabh Shah
>       Assignee: Rushabh Shah
>       Priority: Major
>
> Whenever we update metadata (like alter table add column, drop table), we
> need to invalidate the metadata cache entry (introduced by PHOENIX-6943) on all
> the regionservers that hold that cache entry. The first step is to issue an
> invalidate command on all the regionservers irrespective of whether a given
> regionserver has the cache entry. We can further optimize by invalidating
> only on the RS that has the cache entry.
> In PHOENIX-6988 we created PhoenixRegionServerEndpoint implementing
> RegionServerCoprocessor. We can create a new method in this co-proc, something
> like invalidateCache(CacheEntry), to invalidate the cache for a given
> table/view/index.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17771729#comment-17771729 ]

ASF GitHub Bot commented on PHOENIX-6968:
-----------------------------------------

haridsv commented on code in PR #1691:
URL: https://github.com/apache/phoenix/pull/1691#discussion_r1344390663

## phoenix-core/src/main/protobuf/RegionServerEndpointService.proto:
@@ -36,7 +36,24 @@ message LastDDLTimestampRequest {
     required int64 lastDDLTimestamp = 4;
 }

-service RegionServerCoprocService {
+message InvalidateServerMetadataCache {
+  // Will be HConstants.EMPTY_BYTE_ARRAY if tenantID or schema name is null.
+  required bytes tenantId = 1;
+  required bytes schemaName = 2;
+  required bytes tableName = 3;

Review Comment:
   Since these 3 fields are common with `LastDDLTimestampRequest`, we could avoid duplication by using composition (unfortunately, protobuf has no inheritance).

## phoenix-core/src/main/java/org/apache/phoenix/util/ServerUtil.java:
@@ -432,4 +434,17 @@ public static boolean isHBaseNamespaceAvailable(Admin admin, String schemaName)
         String[] hbaseNamespaces = admin.listNamespaces();
         return Arrays.asList(hbaseNamespaces).contains(schemaName);
     }
+
+    /**
+     * Convert a ServiceException into an IOException.
+     * @param se ServiceException
+     * @return IOException
+     */
+    public static IOException parseServiceException(ServiceException se) {
+        Throwable cause = se.getCause();
+        if (cause != null && cause instanceof IOException) {
+            return (IOException) cause;
+        }
+        return new IOException(se);

Review Comment:
   Should this be `new IOException(cause)`?
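haridsv's question is about which throwable ends up in the stack trace: wrapping the `ServiceException` buries the root failure one level deeper than wrapping its cause. A minimal stand-alone sketch of the unwrap helper with the suggested fix applied; the stub `ServiceException` class below only stands in for `com.google.protobuf.ServiceException` so the snippet is self-contained, and the real helper lives in `org.apache.phoenix.util.ServerUtil`:

```java
import java.io.IOException;

// Stand-in for com.google.protobuf.ServiceException (illustrative only).
class ServiceException extends Exception {
    ServiceException(Throwable cause) { super(cause); }
}

class ExceptionUnwrap {
    // Surface an IOException cause directly; otherwise wrap the cause (per the
    // review suggestion) rather than the ServiceException itself, so the stack
    // trace points at the root failure. Fall back to wrapping the
    // ServiceException when there is no cause at all.
    static IOException parseServiceException(ServiceException se) {
        Throwable cause = se.getCause();
        if (cause instanceof IOException) {
            return (IOException) cause;
        }
        return new IOException(cause != null ? cause : se);
    }
}
```

Note that `cause instanceof IOException` already implies `cause != null`, so the original's `cause != null &&` guard is redundant on that branch.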
## phoenix-core/src/main/java/org/apache/phoenix/coprocessor/PhoenixRegionServerEndpoint.java:
@@ -73,4 +76,27 @@ public void validateLastDDLTimestamp(RpcController controller,
         }
     }

+    @Override
+    public void invalidateServerMetadataCache(RpcController controller,
+            RegionServerEndpointProtos.InvalidateServerMetadataCacheRequest request,
+            RpcCallback done) {
+        for (RegionServerEndpointProtos.InvalidateServerMetadataCache invalidateCacheRequest
+                : request.getInvalidateServerMetadataCacheRequestsList()) {
+            byte[] tenantID = invalidateCacheRequest.getTenantId().toByteArray();
+            byte[] schemaName = invalidateCacheRequest.getSchemaName().toByteArray();
+            byte[] tableName = invalidateCacheRequest.getTableName().toByteArray();
+            String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+            String tenantIDStr = Bytes.toString(tenantID);

Review Comment:
   If there is composition at the protobuf level for the first 3 fields, then these 5 lines, which are common between this function and `validateLastDDLTimestamp`, could be factored into a common utility.

## phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java:
@@ -3416,6 +3437,167 @@ private MetaDataMutationResult mutateColumn(
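The coprocessor-loaded guard quoted above boils down to a substring check against the comma-separated region server coprocessor class list. A minimal stand-alone sketch, assuming `REGIONSERVER_COPROCESSOR_CONF_KEY` resolves to HBase's `hbase.coprocessor.regionserver.classes` (the method name here is illustrative):

```java
import java.util.Map;

class EndpointGuard {
    // Assumed value of HBase's REGIONSERVER_COPROCESSOR_CONF_KEY constant.
    static final String REGIONSERVER_COPROCESSOR_CONF_KEY =
            "hbase.coprocessor.regionserver.classes";

    // Invalidation is only worthwhile when the endpoint that serves the RPC is
    // actually configured on the region servers; otherwise skip it entirely.
    static boolean endpointLoaded(Map<String, String> conf, String endpointClassName) {
        String value = conf.get(REGIONSERVER_COPROCESSOR_CONF_KEY);
        return value != null && value.contains(endpointClassName);
    }
}
```

A plain `contains` check suffices here because coprocessor class names are fully qualified, so one entry is unlikely to be a substring of another; splitting on commas would be the stricter alternative.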
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17771286#comment-17771286 ]

ASF GitHub Bot commented on PHOENIX-6968:
-----------------------------------------

shahrs87 merged PR #1691:
URL: https://github.com/apache/phoenix/pull/1691
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17771280#comment-17771280 ]

ASF GitHub Bot commented on PHOENIX-6968:
-----------------------------------------

tkhurana commented on code in PR #1691:
URL: https://github.com/apache/phoenix/pull/1691#discussion_r1343210767

## phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java:
@@ -3416,6 +3437,167 @@ private MetaDataMutationResult mutateColumn(
+        // TODO Do I need my own executor or can I re-use QueryServices#Executor
+        // since it is supposed to be used only for scans according to documentation?

Review Comment:
   Sounds good @shahrs87
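The TODO above asks whether the scan-oriented `QueryServices` executor can be reused for invalidation RPCs. One conservative answer is a small dedicated pool so DDL-path invalidations never contend with scan threads. A sketch under that assumption; the pool name, size, and class name are illustrative, not from the patch:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

class InvalidationExecutor {
    // A small, separately named pool for cache-invalidation RPCs. Naming the
    // threads makes them easy to spot in stack dumps when diagnosing slow DDL.
    static ExecutorService create(int poolSize) {
        AtomicInteger counter = new AtomicInteger();
        ThreadFactory tf = r -> {
            Thread t = new Thread(r,
                    "phoenix-cache-invalidate-" + counter.incrementAndGet());
            t.setDaemon(true); // don't block JVM shutdown on straggling RPCs
            return t;
        };
        return Executors.newFixedThreadPool(poolSize, tf);
    }
}
```

A fixed-size pool also bounds how many concurrent coprocessor calls a single DDL can fan out, which the default `CompletableFuture.runAsync` common pool does not.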
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17771272#comment-17771272 ]

ASF GitHub Bot commented on PHOENIX-6968:
-----------------------------------------

shahrs87 commented on code in PR #1691:
URL: https://github.com/apache/phoenix/pull/1691#discussion_r1343173628

## phoenix-core/src/it/java/org/apache/phoenix/end2end/FailingPhoenixRegionServerEndpoint.java:
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import com.google.protobuf.RpcCallback;
+import com.google.protobuf.RpcController;
+import org.apache.phoenix.coprocessor.PhoenixRegionServerEndpoint;
+import org.apache.phoenix.coprocessor.generated.RegionServerEndpointProtos;
+import org.apache.phoenix.protobuf.ProtobufUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+
+import static org.apache.phoenix.coprocessor.MetaDataEndpointImpl.PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS;
+import static org.apache.phoenix.coprocessor.MetaDataEndpointImpl.PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS_DEFAULT;
+
+public class FailingPhoenixRegionServerEndpoint extends PhoenixRegionServerEndpoint {
+    private static final Logger LOGGER =
+            LoggerFactory.getLogger(FailingPhoenixRegionServerEndpoint.class);
+
+    private boolean throwException;
+    private boolean shouldSleep;
+
+    @Override
+    public void invalidateServerMetadataCache(RpcController controller,
+            RegionServerEndpointProtos.InvalidateServerMetadataCacheRequest request,
+            RpcCallback done) {
+        long metadataCacheInvalidationTimeoutMs = conf.getLong(
+                PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS,
+                PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS_DEFAULT);
+
+        if (throwException) {
+            IOException ioe = new IOException("On purpose");
+            ProtobufUtil.setControllerException(controller, ioe);
+            return;
+        } else if (shouldSleep) {
+            try {
+                // Sleep for 2 seconds more than metadataCacheInvalidationTimeoutMs.
+                Thread.sleep(metadataCacheInvalidationTimeoutMs + 2000);
+            } catch (InterruptedException e) {
+                LOGGER.warn("Exception while sleeping in FailingPhoenixRegionServerEndpoint", e);
+            }
+        } else {
+            LOGGER.info("Invalidating server metadata cache");
+        }
+    }
+
+    public void throwException() {
+        this.throwException = true;
+        this.shouldSleep = false;
+    }
+
+    public void sleep() {

Review Comment:
   Addressed in latest commit. Please review again. Thank you.
## phoenix-core/src/it/java/org/apache/phoenix/end2end/InvalidateMetadataCacheIT.java:
@@ -0,0 +1,141 @@
+package org.apache.phoenix.end2end;
+
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.thirdparty.com.google.common.collect.Maps;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17770768#comment-17770768 ]

ASF GitHub Bot commented on PHOENIX-6968:
-----------------------------------------

shahrs87 commented on code in PR #1691:
URL: https://github.com/apache/phoenix/pull/1691#discussion_r1342004085

## phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java:
@@ -3165,10 +3184,12 @@ private MetaDataMutationResult mutateColumn(
         if (mutationResult.isPresent()) {
             return mutationResult.get();
         }
-
+        // We take a write row lock for tenantId, schemaName, tableOrViewName.
         acquireLock(region, key, locks);
+        // Invalidate the cache from all the regionservers.
+        invalidateServerMetadataCache(tenantId, schemaName, tableOrViewName);

Review Comment:
   > Will that come in a separate JIRA?
   
   Yes.
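The mutateColumn hunk quoted above acquires the metadata row lock before broadcasting the invalidation, so region servers cannot repopulate their caches from the old row while the DDL is in flight. The ordering can be shown schematically; all names here are illustrative, not Phoenix APIs:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

class DdlOrdering {
    static final List<String> trace = new ArrayList<>();
    static final ReentrantLock rowLock = new ReentrantLock();

    // Order matters: lock the metadata row first, then broadcast cache
    // invalidation, then apply the mutation. Concurrent readers either see the
    // pre-DDL row or wait until the new row is in place.
    static void alterTable(Runnable invalidateCaches, Runnable applyMutation) {
        rowLock.lock();
        try {
            trace.add("lock");
            invalidateCaches.run();
            trace.add("invalidate");
            applyMutation.run();
            trace.add("mutate");
        } finally {
            rowLock.unlock();
            trace.add("unlock");
        }
    }
}
```

Invalidating inside the lock is the safe choice: invalidating before locking would leave a window where a server re-caches the stale row between the invalidate and the mutation.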
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17770767#comment-17770767 ] ASF GitHub Bot commented on PHOENIX-6968: - shahrs87 commented on code in PR #1691: URL: https://github.com/apache/phoenix/pull/1691#discussion_r1342004049 ## phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java: ## @@ -3165,10 +3184,12 @@ private MetaDataMutationResult mutateColumn( if (mutationResult.isPresent()) { return mutationResult.get(); } - +// We take a write row lock for tenantId, schemaName, tableOrViewName acquireLock(region, key, locks); +// Invalidate the cache from all the regionservers. +invalidateServerMetadataCache(tenantId, schemaName, tableOrViewName); Review Comment: > We need to call invalidate cache in other scenarios also like create index. Yes, I have already created jira to update LAST_DDL_TIMESTAMP of the parent table/view when we add an index. [PHOENIX-7056](https://issues.apache.org/jira/browse/PHOENIX-7056) > Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache. > -- > > Key: PHOENIX-6968 > URL: https://issues.apache.org/jira/browse/PHOENIX-6968 > Project: Phoenix > Issue Type: Sub-task >Reporter: Rushabh Shah >Assignee: Rushabh Shah >Priority: Major > > Whenever we update metadata (like alter table add column, drop table), we > need to invalidate metadata cache entry (introduced by PHOENIX-6943) on all > the regionservers which has that cache entry. First step would be to issue an > invalidate command on all the regionservers irrespective of whether that > regionserver has the cache entry. We can further optimize by invalidating > only on RS that has that cache entry. > In PHOENIX-6988 we created PhoenixRegionServerEndpoint implementing > RegionServerCoprocessor. We can create a new method in this co-proc something > like invalidateCache(CacheEntry) to invalidate cache for a given > table/view/index. 
-- This message was sent by Atlassian Jira (v8.20.10#820010)
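The hunk above inserts the invalidation between the row-lock acquisition and the catalog mutation. A minimal, self-contained sketch of that ordering (the class, `alterTable`, and the `Runnable` stand-ins are illustrative, not Phoenix APIs):

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical ordering sketch: the DDL path first takes the row lock for
// (tenantId, schemaName, tableOrViewName), then invalidates every server's
// metadata cache, and only then mutates SYSTEM.CATALOG. Broadcasting while
// holding the lock prevents a racing read from repopulating a stale entry.
public class DdlOrdering {
    private final ReentrantLock rowLock = new ReentrantLock();

    public String alterTable(Runnable invalidateAllServerCaches, Runnable applyMutations) {
        rowLock.lock();                      // acquireLock(region, key, locks)
        try {
            invalidateAllServerCaches.run(); // broadcast before mutating
            applyMutations.run();            // write the DDL change
            return "SUCCESS";
        } finally {
            rowLock.unlock();
        }
    }
}
```

Holding the lock across the broadcast is what keeps a concurrent metadata read from re-caching the pre-DDL entry on a regionserver.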
[jira] [Commented] (PHOENIX-6968) Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache.
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17770765#comment-17770765 ] ASF GitHub Bot commented on PHOENIX-6968: - shahrs87 commented on code in PR #1691: URL: https://github.com/apache/phoenix/pull/1691#discussion_r1342003895 ## phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java: ## @@ -3416,6 +3437,167 @@ private MetaDataMutationResult mutateColumn( } } +/** + * Invalidate metadata cache from all region servers for the given tenant and table name. + * @param tenantId + * @param schemaName + * @param tableOrViewName + * @throws Throwable + */ +private void invalidateServerMetadataCache(byte[] tenantId, byte[] schemaName, +byte[] tableOrViewName) throws Throwable { +Configuration conf = env.getConfiguration(); +String value = conf.get(REGIONSERVER_COPROCESSOR_CONF_KEY); +if (value == null +|| !value.contains(PhoenixRegionServerEndpoint.class.getName())) { +// PhoenixRegionServerEndpoint is not loaded. We don't have to invalidate the cache. +LOGGER.info("Skip invalidating server metadata cache for tenantID: {}," ++ " schema name: {}, table name: {} since PhoenixRegionServerEndpoint" ++ " is not loaded", Bytes.toString(tenantId), +Bytes.toString(schemaName), Bytes.toString(tableOrViewName)); +return; +} +Properties properties = new Properties(); +// Skip checking of system table existence since the system tables should have been created +// by now. +properties.setProperty(SKIP_SYSTEM_TABLES_EXISTENCE_CHECK, "true"); +try (PhoenixConnection connection = QueryUtil.getConnectionOnServer(properties, +env.getConfiguration()).unwrap(PhoenixConnection.class); + Admin admin = connection.getQueryServices().getAdmin()) { +// This will incur an extra RPC to the master. This RPC is required since we want to +// get the current list of regionservers. 
+Collection<ServerName> serverNames = admin.getRegionServers(true); +invalidateServerMetadataCacheWithRetries(admin, serverNames, tenantId, schemaName, +tableOrViewName, false); +} +} + +/** + * Invalidate the metadata cache on all regionservers for the given tenantID + * and tableName, with retries. We retry once before failing the operation. + * + * @param admin + * @param serverNames + * @param tenantId + * @param schemaName + * @param tableOrViewName + * @param retryAttempt + * @throws Throwable + */ +private void invalidateServerMetadataCacheWithRetries(Admin admin, +Collection<ServerName> serverNames, byte[] tenantId, byte[] schemaName, +byte[] tableOrViewName, boolean retryAttempt) throws Throwable { +String fullTableName = SchemaUtil.getTableName(schemaName, tableOrViewName); +String tenantIDStr = Bytes.toString(tenantId); +LOGGER.info("Invalidating metadata cache for tenantID: {}, tableName: {} for" ++ " region servers: {}, isRetryAttempt: {}", tenantIDStr, fullTableName, +serverNames, retryAttempt); +RegionServerEndpointProtos.InvalidateServerMetadataCacheRequest request = +getRequest(tenantId, schemaName, tableOrViewName); +// TODO Do I need my own executor or can I re-use QueryServices#Executor Review Comment: > Maybe it is better to create a new executor so that you can control its size @tkhurana Do you think we can keep it as it is for now and use a custom threadpool later if needed? > Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache. > -- > > Key: PHOENIX-6968 > URL: https://issues.apache.org/jira/browse/PHOENIX-6968 > Project: Phoenix > Issue Type: Sub-task >Reporter: Rushabh Shah >Assignee: Rushabh Shah >Priority: Major > > Whenever we update metadata (like alter table add column, drop table), we > need to invalidate the metadata cache entry (introduced by PHOENIX-6943) on all > the regionservers that have that cache entry. 
First step would be to issue an > invalidate command on all the regionservers irrespective of whether that > regionserver has the cache entry. We can further optimize by invalidating > only on RS that has that cache entry. > In PHOENIX-6988 we created PhoenixRegionServerEndpoint implementing > RegionServerCoprocessor. We can create a new method in this co-proc something > like invalidateCache(CacheEntry) to invalidate cache for a
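The retry policy discussed in the review above (fan out one RPC per regionserver, retry the failed servers once, then fail the DDL) can be sketched with plain java.util.concurrent types. Here `serverNames` and the `invalidate` callback are illustrative stand-ins for `Admin#getRegionServers` and the coprocessor RPC:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Hypothetical sketch of the fan-out-with-one-retry pattern. One async task
// per server; failures are collected rather than thrown so allOf() always
// completes, and the failed subset is retried exactly once.
public class InvalidateFanOut {
    static void invalidateAll(List<String> serverNames,
                              Consumer<String> invalidate,
                              boolean isRetry) throws Exception {
        List<CompletableFuture<Void>> futures = new ArrayList<>();
        Map<String, Throwable> failures = new ConcurrentHashMap<>();
        for (String server : serverNames) {
            futures.add(CompletableFuture.runAsync(() -> {
                try {
                    invalidate.accept(server);   // one RPC per regionserver
                } catch (Throwable t) {
                    failures.put(server, t);     // remember who failed
                }
            }));
        }
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
        if (failures.isEmpty()) {
            return;                              // every server acknowledged
        }
        if (isRetry) {
            // Second attempt also failed: surface the error to fail the DDL.
            throw new Exception("Invalidation failed after retry: " + failures.keySet());
        }
        // Retry once, only against the servers that failed.
        invalidateAll(new ArrayList<>(failures.keySet()), invalidate, true);
    }
}
```

Retrying only the failed subset keeps the second pass cheap when a single server was slow or briefly unreachable.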
[jira] [Commented] (PHOENIX-6968) Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache.
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17770766#comment-17770766 ] ASF GitHub Bot commented on PHOENIX-6968: - shahrs87 commented on code in PR #1691: URL: https://github.com/apache/phoenix/pull/1691#discussion_r1342003921 ## phoenix-core/src/it/java/org/apache/phoenix/end2end/InvalidateMetadataCacheIT.java: ## @@ -0,0 +1,141 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.phoenix.end2end; + +import org.apache.hadoop.hbase.regionserver.HRegionServer; +import org.apache.phoenix.query.BaseTest; +import org.apache.phoenix.thirdparty.com.google.common.collect.Maps; +import org.apache.phoenix.util.PropertiesUtil; +import org.apache.phoenix.util.ReadOnlyProps; +import org.junit.BeforeClass; +import org.junit.Test; +import org.junit.experimental.categories.Category; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.util.Map; +import java.util.Properties; + +import static org.apache.hadoop.hbase.coprocessor.CoprocessorHost.REGIONSERVER_COPROCESSOR_CONF_KEY; +import static org.apache.phoenix.coprocessor.MetaDataEndpointImpl.PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS; +import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES; +import static org.junit.Assert.fail; + +@Category({NeedsOwnMiniClusterTest.class }) +public class InvalidateMetadataCacheIT extends BaseTest { +private static final Logger LOGGER = LoggerFactory.getLogger(InvalidateMetadataCacheIT.class); + +@BeforeClass +public static synchronized void doSetup() throws Exception { +NUM_SLAVES_BASE = 2; +Map<String, String> props = Maps.newHashMapWithExpectedSize(1); +// to fail fast in case of exception. +props.put("hbase.client.retries.number", String.valueOf(0)); +props.put(REGIONSERVER_COPROCESSOR_CONF_KEY, +FailingPhoenixRegionServerEndpoint.class.getName()); +// Setting phoenix metadata cache invalidation timeout to a small number to fail fast. +props.put(PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS, String.valueOf(2000)); +setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator())); +} + +/** + * Add FailingPhoenixRegionServerEndpoint as a regionserver co-processor. + * Make one of the regionservers sleep in the invalidateServerMetadataCache method. This will trigger + * a TimeoutException in the MetadataEndpointImpl#invalidateServerMetadataCacheWithRetries method. 
+ * Make sure that the ALTER TABLE ADD COLUMN statement fails. + */ +@Test +public void testAddColumnWithTimeout() { +Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES); +String dataTableFullName = generateUniqueName(); +String ddl = getCreateTableStmt(dataTableFullName); +HRegionServer regionServerZero = utility.getMiniHBaseCluster().getRegionServer(0); +FailingPhoenixRegionServerEndpoint coprocForRS0 = +getFailingPhoenixRegionServerEndpoint(regionServerZero); +coprocForRS0.sleep(); +try (Connection conn = DriverManager.getConnection(getUrl(), props)) { +conn.createStatement().execute(ddl); +conn.createStatement().execute("ALTER TABLE " + dataTableFullName + +" ADD CF.col2 integer"); +fail("Shouldn't reach here"); +} catch (Exception e) { +LOGGER.error("Exception while adding column", e); +// This is expected +} +} + +/** + * Add FailingPhoenixRegionServerEndpoint as a regionserver co-processor. + * Make one of the regionservers throw an Exception in the invalidateServerMetadataCache method. + * Make sure that the ALTER TABLE ADD COLUMN statement fails. + */ +@Test +public void testAddColumnWithOneRSFailing() { +Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES); +String dataTableFullName = generateUniqueName(); +String ddl = getCreateTableStmt(dataTableFullName); +HRegionServer regionServerZero =
[jira] [Commented] (PHOENIX-6968) Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache.
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17770763#comment-17770763 ] ASF GitHub Bot commented on PHOENIX-6968: - shahrs87 commented on code in PR #1691: URL: https://github.com/apache/phoenix/pull/1691#discussion_r1342003712 ## phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java: ## @@ -3416,6 +3437,167 @@ private MetaDataMutationResult mutateColumn( } } +/** + * Invalidate metadata cache from all region servers for the given tenant and table name. + * @param tenantId + * @param schemaName + * @param tableOrViewName + * @throws Throwable + */ +private void invalidateServerMetadataCache(byte[] tenantId, byte[]schemaName, +byte[] tableOrViewName) throws Throwable { +Configuration conf = env.getConfiguration(); +String value = conf.get(REGIONSERVER_COPROCESSOR_CONF_KEY); +if (value == null +|| !value.contains(PhoenixRegionServerEndpoint.class.getName())) { +// PhoenixRegionServerEndpoint is not loaded. We don't have to invalidate the cache. +LOGGER.info("Skip invalidating server metadata cache for tenantID: {}," ++ " schema name: {}, table Name: {} since PhoenixRegionServerEndpoint" ++ " is not loaded", Bytes.toString(tenantId), +Bytes.toString(schemaName), Bytes.toString(tableOrViewName)); +return; +} +Properties properties = new Properties(); +// Skip checking of system table existence since the system tables should have created +// by now. +properties.setProperty(SKIP_SYSTEM_TABLES_EXISTENCE_CHECK, "true"); +try (PhoenixConnection connection = QueryUtil.getConnectionOnServer(properties, +env.getConfiguration()).unwrap(PhoenixConnection.class); + Admin admin = connection.getQueryServices().getAdmin()) { +// This will incur an extra RPC to the master. This RPC is required since we want to +// get current list of regionservers. 
+Collection<ServerName> serverNames = admin.getRegionServers(true); +invalidateServerMetadataCacheWithRetries(admin, serverNames, tenantId, schemaName, +tableOrViewName, false); +} +} + +/** + * Invalidate the metadata cache on all regionservers for the given tenantID + * and tableName, with retries. We retry once before failing the operation. + * + * @param admin + * @param serverNames + * @param tenantId + * @param schemaName + * @param tableOrViewName + * @param retryAttempt + * @throws Throwable + */ +private void invalidateServerMetadataCacheWithRetries(Admin admin, +Collection<ServerName> serverNames, byte[] tenantId, byte[] schemaName, +byte[] tableOrViewName, boolean retryAttempt) throws Throwable { +String fullTableName = SchemaUtil.getTableName(schemaName, tableOrViewName); +String tenantIDStr = Bytes.toString(tenantId); +LOGGER.info("Invalidating metadata cache for tenantID: {}, tableName: {} for" ++ " region servers: {}, isRetryAttempt: {}", tenantIDStr, fullTableName, +serverNames, retryAttempt); +RegionServerEndpointProtos.InvalidateServerMetadataCacheRequest request = +getRequest(tenantId, schemaName, tableOrViewName); +// TODO Do I need my own executor or can I re-use QueryServices#Executor Review Comment: > What is the threadpool size of the default executor ? https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ForkJoinPool.html#ForkJoinPool-- Currently the default is ForkJoinPool, which uses as many threads as there are available processors. > Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache. > -- > > Key: PHOENIX-6968 > URL: https://issues.apache.org/jira/browse/PHOENIX-6968 > Project: Phoenix > Issue Type: Sub-task >Reporter: Rushabh Shah >Assignee: Rushabh Shah >Priority: Major > > Whenever we update metadata (like alter table add column, drop table), we > need to invalidate the metadata cache entry (introduced by PHOENIX-6943) on all > the regionservers that have that cache entry. 
First step would be to issue an > invalidate command on all the regionservers irrespective of whether that > regionserver has the cache entry. We can further optimize by invalidating > only on RS that has that cache entry. > In PHOENIX-6988 we created PhoenixRegionServerEndpoint implementing > RegionServerCoprocessor. We can create a new method in this
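On the threadpool-size question above: `CompletableFuture.runAsync(Runnable)` with no executor argument runs on `ForkJoinPool.commonPool()`, whose default target parallelism is one less than `Runtime.getRuntime().availableProcessors()` (floored at 1); the linked no-arg `ForkJoinPool()` constructor, by contrast, uses the full processor count. A small probe, plus the dedicated bounded pool the reviewer suggests (pool size 8 is an arbitrary example, not a Phoenix default):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ForkJoinPool;

public class PoolSizing {
    public static void main(String[] args) throws Exception {
        // runAsync(Runnable) without an executor uses the common pool.
        // Its default parallelism is availableProcessors() - 1 (min 1),
        // not the raw processor count.
        System.out.println("processors          = "
                + Runtime.getRuntime().availableProcessors());
        System.out.println("common pool threads = "
                + ForkJoinPool.commonPool().getParallelism());

        // A dedicated, bounded pool keeps cache-invalidation RPCs from
        // competing with unrelated common-pool work, and makes the size
        // an explicit tunable.
        ExecutorService invalidatePool = Executors.newFixedThreadPool(8);
        try {
            invalidatePool.submit(() -> { /* invalidation RPC would go here */ }).get();
        } finally {
            invalidatePool.shutdown();
        }
    }
}
```

A dedicated pool also means a stalled invalidation cannot starve other users of the common pool, which matters when the call sits on the DDL path.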
[jira] [Commented] (PHOENIX-6968) Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache.
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17770761#comment-17770761 ] ASF GitHub Bot commented on PHOENIX-6968: - shahrs87 commented on code in PR #1691: URL: https://github.com/apache/phoenix/pull/1691#discussion_r1342003493 ## phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java: ## @@ -3416,6 +3437,167 @@ private MetaDataMutationResult mutateColumn( } } +/** + * Invalidate metadata cache from all region servers for the given tenant and table name. + * @param tenantId + * @param schemaName + * @param tableOrViewName + * @throws Throwable + */ +private void invalidateServerMetadataCache(byte[] tenantId, byte[]schemaName, +byte[] tableOrViewName) throws Throwable { +Configuration conf = env.getConfiguration(); +String value = conf.get(REGIONSERVER_COPROCESSOR_CONF_KEY); +if (value == null +|| !value.contains(PhoenixRegionServerEndpoint.class.getName())) { +// PhoenixRegionServerEndpoint is not loaded. We don't have to invalidate the cache. +LOGGER.info("Skip invalidating server metadata cache for tenantID: {}," ++ " schema name: {}, table Name: {} since PhoenixRegionServerEndpoint" ++ " is not loaded", Bytes.toString(tenantId), +Bytes.toString(schemaName), Bytes.toString(tableOrViewName)); +return; +} +Properties properties = new Properties(); +// Skip checking of system table existence since the system tables should have created +// by now. +properties.setProperty(SKIP_SYSTEM_TABLES_EXISTENCE_CHECK, "true"); +try (PhoenixConnection connection = QueryUtil.getConnectionOnServer(properties, +env.getConfiguration()).unwrap(PhoenixConnection.class); + Admin admin = connection.getQueryServices().getAdmin()) { +// This will incur an extra RPC to the master. This RPC is required since we want to +// get current list of regionservers. 
+Collection serverNames = admin.getRegionServers(true); +invalidateServerMetadataCacheWithRetries(admin, serverNames, tenantId, schemaName, +tableOrViewName, false); +} +} + +/** + * Invalidate metadata cache on all regionservers with retries for the given tenantID + * and tableName with retries. We retry once before failing the operation. + * + * @param admin + * @param serverNames + * @param tenantId + * @param schemaName + * @param tableOrViewName + * @param retryAttempt + * @throws Throwable + */ +private void invalidateServerMetadataCacheWithRetries(Admin admin, +Collection serverNames, byte[] tenantId, byte[] schemaName, +byte[] tableOrViewName, boolean retryAttempt) throws Throwable { Review Comment: > also when I looked at the function name "WithRetries" I thought there would a RetryManager that would use some retry strategy. We can add RetryManager later if needed. We already have method name `invalidateServerMetadataCache` above. > Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache. > -- > > Key: PHOENIX-6968 > URL: https://issues.apache.org/jira/browse/PHOENIX-6968 > Project: Phoenix > Issue Type: Sub-task >Reporter: Rushabh Shah >Assignee: Rushabh Shah >Priority: Major > > Whenever we update metadata (like alter table add column, drop table), we > need to invalidate metadata cache entry (introduced by PHOENIX-6943) on all > the regionservers which has that cache entry. First step would be to issue an > invalidate command on all the regionservers irrespective of whether that > regionserver has the cache entry. We can further optimize by invalidating > only on RS that has that cache entry. > In PHOENIX-6988 we created PhoenixRegionServerEndpoint implementing > RegionServerCoprocessor. We can create a new method in this co-proc something > like invalidateCache(CacheEntry) to invalidate cache for a given > table/view/index. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (PHOENIX-6968) Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache.
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17770651#comment-17770651 ] ASF GitHub Bot commented on PHOENIX-6968: - ranganathg commented on code in PR #1691: URL: https://github.com/apache/phoenix/pull/1691#discussion_r1341909060 ## phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java: ## @@ -3416,6 +3437,167 @@ private MetaDataMutationResult mutateColumn( } } +/** + * Invalidate metadata cache from all region servers for the given tenant and table name. + * @param tenantId + * @param schemaName + * @param tableOrViewName + * @throws Throwable + */ +private void invalidateServerMetadataCache(byte[] tenantId, byte[]schemaName, +byte[] tableOrViewName) throws Throwable { +Configuration conf = env.getConfiguration(); +String value = conf.get(REGIONSERVER_COPROCESSOR_CONF_KEY); +if (value == null +|| !value.contains(PhoenixRegionServerEndpoint.class.getName())) { +// PhoenixRegionServerEndpoint is not loaded. We don't have to invalidate the cache. +LOGGER.info("Skip invalidating server metadata cache for tenantID: {}," ++ " schema name: {}, table Name: {} since PhoenixRegionServerEndpoint" ++ " is not loaded", Bytes.toString(tenantId), +Bytes.toString(schemaName), Bytes.toString(tableOrViewName)); +return; +} +Properties properties = new Properties(); +// Skip checking of system table existence since the system tables should have created +// by now. +properties.setProperty(SKIP_SYSTEM_TABLES_EXISTENCE_CHECK, "true"); +try (PhoenixConnection connection = QueryUtil.getConnectionOnServer(properties, +env.getConfiguration()).unwrap(PhoenixConnection.class); + Admin admin = connection.getQueryServices().getAdmin()) { +// This will incur an extra RPC to the master. This RPC is required since we want to +// get current list of regionservers. 
+Collection<ServerName> serverNames = admin.getRegionServers(true); +invalidateServerMetadataCacheWithRetries(admin, serverNames, tenantId, schemaName, +tableOrViewName, false); +} +} + +/** + * Invalidate the metadata cache on all regionservers for the given tenantID + * and tableName, with retries. We retry once before failing the operation. + * + * @param admin + * @param serverNames + * @param tenantId + * @param schemaName + * @param tableOrViewName + * @param retryAttempt + * @throws Throwable + */ +private void invalidateServerMetadataCacheWithRetries(Admin admin, +Collection<ServerName> serverNames, byte[] tenantId, byte[] schemaName, +byte[] tableOrViewName, boolean retryAttempt) throws Throwable { Review Comment: also when I looked at the function name "WithRetries" I thought there would be a RetryManager that would use some retry strategy. Maybe the name can just be invalidateServerMetadataCache - with one param being isRetry? > Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache. > -- > > Key: PHOENIX-6968 > URL: https://issues.apache.org/jira/browse/PHOENIX-6968 > Project: Phoenix > Issue Type: Sub-task >Reporter: Rushabh Shah >Assignee: Rushabh Shah >Priority: Major > > Whenever we update metadata (like alter table add column, drop table), we > need to invalidate the metadata cache entry (introduced by PHOENIX-6943) on all > the regionservers that have that cache entry. The first step would be to issue an > invalidate command on all the regionservers, irrespective of whether that > regionserver has the cache entry. We can further optimize by invalidating > only on the RS that has that cache entry. > In PHOENIX-6988 we created PhoenixRegionServerEndpoint implementing > RegionServerCoprocessor. We can create a new method in this co-proc, something > like invalidateCache(CacheEntry), to invalidate the cache for a given > table/view/index.
[jira] [Commented] (PHOENIX-6968) Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache.
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17770623#comment-17770623 ] ASF GitHub Bot commented on PHOENIX-6968: - tkhurana commented on code in PR #1691: URL: https://github.com/apache/phoenix/pull/1691#discussion_r1341867509 ## phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java: ## @@ -3165,10 +3184,12 @@ private MetaDataMutationResult mutateColumn( if (mutationResult.isPresent()) { return mutationResult.get(); } - +// We take a write row lock for tenantId, schemaName, tableOrViewName acquireLock(region, key, locks); +// Invalidate the cache from all the regionservers. +invalidateServerMetadataCache(tenantId, schemaName, tableOrViewName); Review Comment: We need to call invalidate cache in other scenarios also like create index. Will that come in a separate JIRA? > Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache. > -- > > Key: PHOENIX-6968 > URL: https://issues.apache.org/jira/browse/PHOENIX-6968 > Project: Phoenix > Issue Type: Sub-task >Reporter: Rushabh Shah >Assignee: Rushabh Shah >Priority: Major > > Whenever we update metadata (like alter table add column, drop table), we > need to invalidate the metadata cache entry (introduced by PHOENIX-6943) on all > the regionservers that have that cache entry. The first step would be to issue an > invalidate command on all the regionservers, irrespective of whether that > regionserver has the cache entry. We can further optimize by invalidating > only on the RS that has that cache entry. > In PHOENIX-6988 we created PhoenixRegionServerEndpoint implementing > RegionServerCoprocessor. We can create a new method in this co-proc, something > like invalidateCache(CacheEntry), to invalidate the cache for a given > table/view/index.
[jira] [Commented] (PHOENIX-6968) Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache.
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17770622#comment-17770622 ] ASF GitHub Bot commented on PHOENIX-6968: - tkhurana commented on code in PR #1691: URL: https://github.com/apache/phoenix/pull/1691#discussion_r1341851464 ## phoenix-core/src/it/java/org/apache/phoenix/end2end/InvalidateMetadataCacheIT.java: ## @@ -0,0 +1,141 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.phoenix.end2end; + +import org.apache.hadoop.hbase.regionserver.HRegionServer; +import org.apache.phoenix.query.BaseTest; +import org.apache.phoenix.thirdparty.com.google.common.collect.Maps; +import org.apache.phoenix.util.PropertiesUtil; +import org.apache.phoenix.util.ReadOnlyProps; +import org.junit.BeforeClass; +import org.junit.Test; +import org.junit.experimental.categories.Category; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.util.Map; +import java.util.Properties; + +import static org.apache.hadoop.hbase.coprocessor.CoprocessorHost.REGIONSERVER_COPROCESSOR_CONF_KEY; +import static org.apache.phoenix.coprocessor.MetaDataEndpointImpl.PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS; +import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES; +import static org.junit.Assert.fail; + +@Category({NeedsOwnMiniClusterTest.class }) +public class InvalidateMetadataCacheIT extends BaseTest { +private static final Logger LOGGER = LoggerFactory.getLogger(InvalidateMetadataCacheIT.class); + +@BeforeClass +public static synchronized void doSetup() throws Exception { +NUM_SLAVES_BASE = 2; +Map<String, String> props = Maps.newHashMapWithExpectedSize(1); +// to fail fast in case of exception. +props.put("hbase.client.retries.number", String.valueOf(0)); +props.put(REGIONSERVER_COPROCESSOR_CONF_KEY, +FailingPhoenixRegionServerEndpoint.class.getName()); +// Setting phoenix metadata cache invalidation timeout to a small number to fail fast. +props.put(PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS, String.valueOf(2000)); +setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator())); +} + +/** + * Add FailingPhoenixRegionServerEndpoint as a regionserver co-processor. + * Make one of the regionservers sleep in the invalidateServerMetadataCache method. This will trigger + * a TimeoutException in the MetadataEndpointImpl#invalidateServerMetadataCacheWithRetries method. 
+ * Make sure that ALTER TABLE ADD COLUMN statement fails. + */ +@Test +public void testAddColumnWithTimeout() { +Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES); +String dataTableFullName = generateUniqueName(); +String ddl = getCreateTableStmt(dataTableFullName); +HRegionServer regionServerZero = utility.getMiniHBaseCluster().getRegionServer(0); +FailingPhoenixRegionServerEndpoint coprocForRS0 = +getFailingPhoenixRegionServerEndpoint(regionServerZero); +coprocForRS0.sleep(); +try(Connection conn = DriverManager.getConnection(getUrl(), props)) { +conn.createStatement().execute(ddl); Review Comment: So the CREATE TABLE ddl will not trigger cache invalidation ? > Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache. > -- > > Key: PHOENIX-6968 > URL: https://issues.apache.org/jira/browse/PHOENIX-6968 > Project: Phoenix > Issue Type: Sub-task >Reporter: Rushabh Shah >Assignee: Rushabh Shah >Priority: Major > > Whenever we update metadata (like alter table add column, drop table), we > need to invalidate metadata cache entry (introduced by PHOENIX-6943) on all > the regionservers which has that cache entry. First step would be to issue an > invalidate command on all the regionservers irrespective of whether that > regionserver has the cache entry. We can further
[jira] [Commented] (PHOENIX-6968) Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache.
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17770618#comment-17770618 ] ASF GitHub Bot commented on PHOENIX-6968: - tkhurana commented on code in PR #1691: URL: https://github.com/apache/phoenix/pull/1691#discussion_r1341862818 ## phoenix-core/src/it/java/org/apache/phoenix/end2end/InvalidateMetadataCacheIT.java: ## @@ -0,0 +1,141 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.phoenix.end2end; + +import org.apache.hadoop.hbase.regionserver.HRegionServer; +import org.apache.phoenix.query.BaseTest; +import org.apache.phoenix.thirdparty.com.google.common.collect.Maps; +import org.apache.phoenix.util.PropertiesUtil; +import org.apache.phoenix.util.ReadOnlyProps; +import org.junit.BeforeClass; +import org.junit.Test; +import org.junit.experimental.categories.Category; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.util.Map; +import java.util.Properties; + +import static org.apache.hadoop.hbase.coprocessor.CoprocessorHost.REGIONSERVER_COPROCESSOR_CONF_KEY; +import static org.apache.phoenix.coprocessor.MetaDataEndpointImpl.PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS; +import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES; +import static org.junit.Assert.fail; + +@Category({NeedsOwnMiniClusterTest.class }) +public class InvalidateMetadataCacheIT extends BaseTest { +private static final Logger LOGGER = LoggerFactory.getLogger(InvalidateMetadataCacheIT.class); + +@BeforeClass +public static synchronized void doSetup() throws Exception { +NUM_SLAVES_BASE = 2; +Map props = Maps.newHashMapWithExpectedSize(1); +// to fail fast in case of exception. +props.put("hbase.client.retries.number", String.valueOf(0)); +props.put(REGIONSERVER_COPROCESSOR_CONF_KEY, +FailingPhoenixRegionServerEndpoint.class.getName()); +// Setting phoenix metadata cache invalidation timeout to a small number to fail fast. +props.put(PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS, String.valueOf(2000)); +setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator())); +} + +/** + * Add FailingPhoenixRegionServerEndpoint as regionserver co-processor. + * Make one of the regionserver sleep in invalidateServerMetadataCache method. This will trigger + * TimeoutException in MetadataEndpointImpl#invalidateServerMetadataCacheWithRetries method. 
+ * Make sure that the ALTER TABLE ADD COLUMN statement fails. + */ +@Test +public void testAddColumnWithTimeout() { +Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES); +String dataTableFullName = generateUniqueName(); +String ddl = getCreateTableStmt(dataTableFullName); +HRegionServer regionServerZero = utility.getMiniHBaseCluster().getRegionServer(0); +FailingPhoenixRegionServerEndpoint coprocForRS0 = +getFailingPhoenixRegionServerEndpoint(regionServerZero); +coprocForRS0.sleep(); +try (Connection conn = DriverManager.getConnection(getUrl(), props)) { +conn.createStatement().execute(ddl); +conn.createStatement().execute("ALTER TABLE " + dataTableFullName + +" ADD CF.col2 integer"); +fail("Shouldn't reach here"); +} catch (Exception e) { +LOGGER.error("Exception while adding column", e); +// This is expected +} +} + +/** + * Add FailingPhoenixRegionServerEndpoint as a regionserver co-processor. + * Make one of the regionservers throw an Exception in the invalidateServerMetadataCache method. + * Make sure that the ALTER TABLE ADD COLUMN statement fails. + */ +@Test +public void testAddColumnWithOneRSFailing() { +Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES); +String dataTableFullName = generateUniqueName(); +String ddl = getCreateTableStmt(dataTableFullName); +HRegionServer regionServerZero =
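The timeout path this test exercises reduces to a get-with-deadline on the invalidation future: PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS plays the role of `timeoutMs` in this simplified sketch, and `invalidateWithTimeout` is a made-up name, not a Phoenix method:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical sketch of the timeout the test provokes: if a server's
// invalidation RPC does not finish within the configured budget, the DDL
// is failed rather than left running against a possibly stale server cache.
public class InvalidationTimeout {
    static boolean invalidateWithTimeout(Runnable rpc, long timeoutMs) {
        CompletableFuture<Void> f = CompletableFuture.runAsync(rpc);
        try {
            f.get(timeoutMs, TimeUnit.MILLISECONDS);
            return true;                 // cache invalidated in time
        } catch (TimeoutException e) {
            // Note: the RPC task itself keeps running in the background in
            // this sketch; only the waiter gives up and fails the DDL.
            return false;
        } catch (Exception e) {
            return false;                // interrupted or RPC threw
        }
    }
}
```

A sleeping coprocessor (as in `coprocForRS0.sleep()` above) maps onto a `Runnable` that outlives the budget, so the waiter times out and the ALTER TABLE fails.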
[jira] [Commented] (PHOENIX-6968) Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache.
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17770615#comment-17770615 ] ASF GitHub Bot commented on PHOENIX-6968: - tkhurana commented on code in PR #1691: URL: https://github.com/apache/phoenix/pull/1691#discussion_r1341854164 ## phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java: ## @@ -3416,6 +3437,167 @@ private MetaDataMutationResult mutateColumn( } } +/** + * Invalidate metadata cache from all region servers for the given tenant and table name. + * @param tenantId + * @param schemaName + * @param tableOrViewName + * @throws Throwable + */ +private void invalidateServerMetadataCache(byte[] tenantId, byte[]schemaName, +byte[] tableOrViewName) throws Throwable { +Configuration conf = env.getConfiguration(); +String value = conf.get(REGIONSERVER_COPROCESSOR_CONF_KEY); +if (value == null +|| !value.contains(PhoenixRegionServerEndpoint.class.getName())) { +// PhoenixRegionServerEndpoint is not loaded. We don't have to invalidate the cache. +LOGGER.info("Skip invalidating server metadata cache for tenantID: {}," ++ " schema name: {}, table Name: {} since PhoenixRegionServerEndpoint" ++ " is not loaded", Bytes.toString(tenantId), +Bytes.toString(schemaName), Bytes.toString(tableOrViewName)); +return; +} +Properties properties = new Properties(); +// Skip checking of system table existence since the system tables should have created +// by now. +properties.setProperty(SKIP_SYSTEM_TABLES_EXISTENCE_CHECK, "true"); +try (PhoenixConnection connection = QueryUtil.getConnectionOnServer(properties, +env.getConfiguration()).unwrap(PhoenixConnection.class); + Admin admin = connection.getQueryServices().getAdmin()) { +// This will incur an extra RPC to the master. This RPC is required since we want to +// get current list of regionservers. 
+Collection serverNames = admin.getRegionServers(true); +invalidateServerMetadataCacheWithRetries(admin, serverNames, tenantId, schemaName, +tableOrViewName, false); +} +} + +/** + * Invalidate metadata cache on all regionservers with retries for the given tenantID + * and tableName with retries. We retry once before failing the operation. + * + * @param admin + * @param serverNames + * @param tenantId + * @param schemaName + * @param tableOrViewName + * @param retryAttempt + * @throws Throwable + */ +private void invalidateServerMetadataCacheWithRetries(Admin admin, +Collection serverNames, byte[] tenantId, byte[] schemaName, +byte[] tableOrViewName, boolean retryAttempt) throws Throwable { +String fullTableName = SchemaUtil.getTableName(schemaName, tableOrViewName); +String tenantIDStr = Bytes.toString(tenantId); +LOGGER.info("Invalidating metadata cache for tenantID: {}, tableName: {} for" ++ " region servers: {}, isRetryAttempt: {}", tenantIDStr, fullTableName, +serverNames, retryAttempt); +RegionServerEndpointProtos.InvalidateServerMetadataCacheRequest request = +getRequest(tenantId, schemaName, tableOrViewName); +// TODO Do I need my own executor or can I re-use QueryServices#Executor Review Comment: What is the threadpool size of the default executor ? Maybe it is better to create a new executor so that you can control its size > Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache. > -- > > Key: PHOENIX-6968 > URL: https://issues.apache.org/jira/browse/PHOENIX-6968 > Project: Phoenix > Issue Type: Sub-task >Reporter: Rushabh Shah >Assignee: Rushabh Shah >Priority: Major > > Whenever we update metadata (like alter table add column, drop table), we > need to invalidate metadata cache entry (introduced by PHOENIX-6943) on all > the regionservers which has that cache entry. First step would be to issue an > invalidate command on all the regionservers irrespective of whether that > regionserver has the cache entry. 
We can further optimize by invalidating > only on RS that has that cache entry. > In PHOENIX-6988 we created PhoenixRegionServerEndpoint implementing > RegionServerCoprocessor. We can create a new method in this co-proc something > like invalidateCache(CacheEntry) to invalidate cache for a given > table/view/index. -- This message was
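The dedicated-executor alternative raised in the review comment above can be sketched as follows. This is a minimal illustration only: the pool size, the class name, and the `perServerRpcs` parameter are assumptions for the sketch, not code from the PR. A small bounded pool keeps DDL-path invalidation RPCs from competing with the scan-oriented threads in the shared QueryServices executor.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class InvalidationExecutorSketch {
    // A small, dedicated pool whose size the caller controls, instead of
    // reusing the shared scan executor. Daemon threads so the pool never
    // blocks JVM shutdown.
    private static final ExecutorService INVALIDATE_POOL =
            Executors.newFixedThreadPool(4, r -> {
                Thread t = new Thread(r, "phoenix-invalidate-cache");
                t.setDaemon(true);
                return t;
            });

    // Fan the per-server invalidation tasks out and wait with one overall
    // timeout, mirroring the CompletableFuture.runAsync fan-out in the diff.
    static void invalidateAll(List<Runnable> perServerRpcs, long timeoutMs)
            throws Exception {
        List<CompletableFuture<Void>> futures = new ArrayList<>();
        for (Runnable rpc : perServerRpcs) {
            futures.add(CompletableFuture.runAsync(rpc, INVALIDATE_POOL));
        }
        // allOf completes only when every per-server future completes;
        // get(...) enforces the overall invalidation timeout.
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
                .get(timeoutMs, TimeUnit.MILLISECONDS);
    }
}
```

A fixed pool like this also bounds how many concurrent RPCs a single DDL can fire at the cluster, which the shared executor would not guarantee.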
[jira] [Commented] (PHOENIX-6968) Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache.
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17770613#comment-17770613 ] ASF GitHub Bot commented on PHOENIX-6968: - tkhurana commented on code in PR #1691: URL: https://github.com/apache/phoenix/pull/1691#discussion_r1341852356 ## phoenix-core/src/it/java/org/apache/phoenix/end2end/FailingPhoenixRegionServerEndpoint.java: ## @@ -0,0 +1,72 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.phoenix.end2end; + +import com.google.protobuf.RpcCallback; +import com.google.protobuf.RpcController; +import org.apache.phoenix.coprocessor.PhoenixRegionServerEndpoint; +import org.apache.phoenix.coprocessor.generated.RegionServerEndpointProtos; +import org.apache.phoenix.protobuf.ProtobufUtil; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.IOException; + +import static org.apache.phoenix.coprocessor.MetaDataEndpointImpl.PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS; +import static org.apache.phoenix.coprocessor.MetaDataEndpointImpl.PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS_DEFAULT; + +public class FailingPhoenixRegionServerEndpoint extends PhoenixRegionServerEndpoint { +private static final Logger LOGGER = LoggerFactory.getLogger(FailingPhoenixRegionServerEndpoint.class); + +private boolean throwException; +private boolean shouldSleep; + +@Override +public void invalidateServerMetadataCache(RpcController controller, +RegionServerEndpointProtos.InvalidateServerMetadataCacheRequest request, + RpcCallback done) { +long metadataCacheInvalidationTimeoutMs = conf.getLong( +PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS, +PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS_DEFAULT); + +if (throwException == true) { +IOException ioe = new IOException("On purpose"); +ProtobufUtil.setControllerException(controller, ioe); +return; +} else if (shouldSleep) { +try { +// Sleeping for 2 seconds more than metadataCacheInvalidationTimeoutMs. 
+Thread.sleep(metadataCacheInvalidationTimeoutMs + 2000); +} catch (InterruptedException e) { +LOGGER.warn("Exception while sleeping in FailingPhoenixRegionServerEndpoint", e); +} +} else { +LOGGER.info("Invalidating server metadata cache"); +} +} + +public void throwException() { +this.throwException = true; +this.shouldSleep = false; +} + +public void sleep() { Review Comment: maybe add another API to reset these states > Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache. > -- > > Key: PHOENIX-6968 > URL: https://issues.apache.org/jira/browse/PHOENIX-6968 > Project: Phoenix > Issue Type: Sub-task >Reporter: Rushabh Shah >Assignee: Rushabh Shah >Priority: Major > > Whenever we update metadata (like alter table add column, drop table), we > need to invalidate metadata cache entry (introduced by PHOENIX-6943) on all > the regionservers which has that cache entry. First step would be to issue an > invalidate command on all the regionservers irrespective of whether that > regionserver has the cache entry. We can further optimize by invalidating > only on RS that has that cache entry. > In PHOENIX-6988 we created PhoenixRegionServerEndpoint implementing > RegionServerCoprocessor. We can create a new method in this co-proc something > like invalidateCache(CacheEntry) to invalidate cache for a given > table/view/index. -- This message was sent by Atlassian Jira (v8.20.10#820010)
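The "add another API to reset these states" suggestion above can be sketched like this. The field names mirror the quoted coprocessor, but the class itself is a stand-in, not the actual `FailingPhoenixRegionServerEndpoint`; the point is that each mutator clears the other flag and `reset()` returns the endpoint to its default, non-failing state so one test's failure mode cannot leak into the next.

```java
// Stand-in for the failure-mode flags in FailingPhoenixRegionServerEndpoint.
public class FailureModeSketch {
    private boolean throwException;
    private boolean shouldSleep;

    // Fail the next invalidation with an IOException.
    public void throwException() {
        this.throwException = true;
        this.shouldSleep = false;
    }

    // Sleep past the invalidation timeout on the next invalidation.
    public void sleep() {
        this.shouldSleep = true;
        this.throwException = false;
    }

    // The suggested addition: clear both flags between tests.
    public void reset() {
        this.throwException = false;
        this.shouldSleep = false;
    }

    public boolean isFailing() {
        return throwException || shouldSleep;
    }
}
```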
[jira] [Commented] (PHOENIX-6968) Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache.
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17770612#comment-17770612 ] ASF GitHub Bot commented on PHOENIX-6968: - tkhurana commented on code in PR #1691: URL: https://github.com/apache/phoenix/pull/1691#discussion_r1341852100 ## phoenix-core/src/it/java/org/apache/phoenix/end2end/InvalidateMetadataCacheIT.java: ## @@ -0,0 +1,141 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.phoenix.end2end; + +import org.apache.hadoop.hbase.regionserver.HRegionServer; +import org.apache.phoenix.query.BaseTest; +import org.apache.phoenix.thirdparty.com.google.common.collect.Maps; +import org.apache.phoenix.util.PropertiesUtil; +import org.apache.phoenix.util.ReadOnlyProps; +import org.junit.BeforeClass; +import org.junit.Test; +import org.junit.experimental.categories.Category; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.util.Map; +import java.util.Properties; + +import static org.apache.hadoop.hbase.coprocessor.CoprocessorHost.REGIONSERVER_COPROCESSOR_CONF_KEY; +import static org.apache.phoenix.coprocessor.MetaDataEndpointImpl.PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS; +import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES; +import static org.junit.Assert.fail; + +@Category({NeedsOwnMiniClusterTest.class }) +public class InvalidateMetadataCacheIT extends BaseTest { +private static final Logger LOGGER = LoggerFactory.getLogger(InvalidateMetadataCacheIT.class); + +@BeforeClass +public static synchronized void doSetup() throws Exception { +NUM_SLAVES_BASE = 2; +Map props = Maps.newHashMapWithExpectedSize(1); +// to fail fast in case of exception. +props.put("hbase.client.retries.number", String.valueOf(0)); +props.put(REGIONSERVER_COPROCESSOR_CONF_KEY, +FailingPhoenixRegionServerEndpoint.class.getName()); +// Setting phoenix metadata cache invalidation timeout to a small number to fail fast. +props.put(PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS, String.valueOf(2000)); +setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator())); +} + +/** + * Add FailingPhoenixRegionServerEndpoint as regionserver co-processor. + * Make one of the regionserver sleep in invalidateServerMetadataCache method. This will trigger + * TimeoutException in MetadataEndpointImpl#invalidateServerMetadataCacheWithRetries method. 
+ * Make sure that ALTER TABLE ADD COLUMN statement fails. + */ +@Test +public void testAddColumnWithTimeout() { +Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES); +String dataTableFullName = generateUniqueName(); +String ddl = getCreateTableStmt(dataTableFullName); +HRegionServer regionServerZero = utility.getMiniHBaseCluster().getRegionServer(0); +FailingPhoenixRegionServerEndpoint coprocForRS0 = +getFailingPhoenixRegionServerEndpoint(regionServerZero); +coprocForRS0.sleep(); +try(Connection conn = DriverManager.getConnection(getUrl(), props)) { +conn.createStatement().execute(ddl); +conn.createStatement().execute("ALTER TABLE " + dataTableFullName + +" ADD CF.col2 integer"); +fail("Shouldn't reach heere"); Review Comment: typo heere > Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache. > -- > > Key: PHOENIX-6968 > URL: https://issues.apache.org/jira/browse/PHOENIX-6968 > Project: Phoenix > Issue Type: Sub-task >Reporter: Rushabh Shah >Assignee: Rushabh Shah >Priority: Major > > Whenever we update metadata (like alter table add column, drop table), we > need to invalidate metadata cache entry (introduced by PHOENIX-6943) on all > the regionservers which has that cache entry. First step would be to issue an > invalidate
[jira] [Commented] (PHOENIX-6968) Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache.
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17770611#comment-17770611 ] ASF GitHub Bot commented on PHOENIX-6968: - tkhurana commented on code in PR #1691: URL: https://github.com/apache/phoenix/pull/1691#discussion_r1341851464 ## phoenix-core/src/it/java/org/apache/phoenix/end2end/InvalidateMetadataCacheIT.java: ## @@ -0,0 +1,141 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.phoenix.end2end; + +import org.apache.hadoop.hbase.regionserver.HRegionServer; +import org.apache.phoenix.query.BaseTest; +import org.apache.phoenix.thirdparty.com.google.common.collect.Maps; +import org.apache.phoenix.util.PropertiesUtil; +import org.apache.phoenix.util.ReadOnlyProps; +import org.junit.BeforeClass; +import org.junit.Test; +import org.junit.experimental.categories.Category; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.util.Map; +import java.util.Properties; + +import static org.apache.hadoop.hbase.coprocessor.CoprocessorHost.REGIONSERVER_COPROCESSOR_CONF_KEY; +import static org.apache.phoenix.coprocessor.MetaDataEndpointImpl.PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS; +import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES; +import static org.junit.Assert.fail; + +@Category({NeedsOwnMiniClusterTest.class }) +public class InvalidateMetadataCacheIT extends BaseTest { +private static final Logger LOGGER = LoggerFactory.getLogger(InvalidateMetadataCacheIT.class); + +@BeforeClass +public static synchronized void doSetup() throws Exception { +NUM_SLAVES_BASE = 2; +Map props = Maps.newHashMapWithExpectedSize(1); +// to fail fast in case of exception. +props.put("hbase.client.retries.number", String.valueOf(0)); +props.put(REGIONSERVER_COPROCESSOR_CONF_KEY, +FailingPhoenixRegionServerEndpoint.class.getName()); +// Setting phoenix metadata cache invalidation timeout to a small number to fail fast. +props.put(PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS, String.valueOf(2000)); +setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator())); +} + +/** + * Add FailingPhoenixRegionServerEndpoint as regionserver co-processor. + * Make one of the regionserver sleep in invalidateServerMetadataCache method. This will trigger + * TimeoutException in MetadataEndpointImpl#invalidateServerMetadataCacheWithRetries method. 
+ * Make sure that ALTER TABLE ADD COLUMN statement fails. + */ +@Test +public void testAddColumnWithTimeout() { +Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES); +String dataTableFullName = generateUniqueName(); +String ddl = getCreateTableStmt(dataTableFullName); +HRegionServer regionServerZero = utility.getMiniHBaseCluster().getRegionServer(0); +FailingPhoenixRegionServerEndpoint coprocForRS0 = +getFailingPhoenixRegionServerEndpoint(regionServerZero); +coprocForRS0.sleep(); +try(Connection conn = DriverManager.getConnection(getUrl(), props)) { +conn.createStatement().execute(ddl); Review Comment: So the CREATE TABLE ddl will not trigger cache invalidation ? > Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache. > -- > > Key: PHOENIX-6968 > URL: https://issues.apache.org/jira/browse/PHOENIX-6968 > Project: Phoenix > Issue Type: Sub-task >Reporter: Rushabh Shah >Assignee: Rushabh Shah >Priority: Major > > Whenever we update metadata (like alter table add column, drop table), we > need to invalidate metadata cache entry (introduced by PHOENIX-6943) on all > the regionservers which has that cache entry. First step would be to issue an > invalidate command on all the regionservers irrespective of whether that > regionserver has the cache entry. We can further
[jira] [Commented] (PHOENIX-6968) Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache.
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17770607#comment-17770607 ] ASF GitHub Bot commented on PHOENIX-6968: - tkhurana commented on code in PR #1691: URL: https://github.com/apache/phoenix/pull/1691#discussion_r1341847213 ## phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java: ## @@ -3416,6 +3437,167 @@ private MetaDataMutationResult mutateColumn( } } +/** + * Invalidate metadata cache from all region servers for the given tenant and table name. + * @param tenantId + * @param schemaName + * @param tableOrViewName + * @throws Throwable + */ +private void invalidateServerMetadataCache(byte[] tenantId, byte[]schemaName, +byte[] tableOrViewName) throws Throwable { +Configuration conf = env.getConfiguration(); +String value = conf.get(REGIONSERVER_COPROCESSOR_CONF_KEY); +if (value == null +|| !value.contains(PhoenixRegionServerEndpoint.class.getName())) { +// PhoenixRegionServerEndpoint is not loaded. We don't have to invalidate the cache. +LOGGER.info("Skip invalidating server metadata cache for tenantID: {}," ++ " schema name: {}, table Name: {} since PhoenixRegionServerEndpoint" ++ " is not loaded", Bytes.toString(tenantId), +Bytes.toString(schemaName), Bytes.toString(tableOrViewName)); +return; +} +Properties properties = new Properties(); +// Skip checking of system table existence since the system tables should have created +// by now. +properties.setProperty(SKIP_SYSTEM_TABLES_EXISTENCE_CHECK, "true"); +try (PhoenixConnection connection = QueryUtil.getConnectionOnServer(properties, +env.getConfiguration()).unwrap(PhoenixConnection.class); + Admin admin = connection.getQueryServices().getAdmin()) { +// This will incur an extra RPC to the master. This RPC is required since we want to +// get current list of regionservers. 
+Collection serverNames = admin.getRegionServers(true); +invalidateServerMetadataCacheWithRetries(admin, serverNames, tenantId, schemaName, +tableOrViewName, false); +} +} + +/** + * Invalidate metadata cache on all regionservers with retries for the given tenantID + * and tableName with retries. We retry once before failing the operation. + * + * @param admin + * @param serverNames + * @param tenantId + * @param schemaName + * @param tableOrViewName + * @param retryAttempt + * @throws Throwable + */ +private void invalidateServerMetadataCacheWithRetries(Admin admin, +Collection serverNames, byte[] tenantId, byte[] schemaName, +byte[] tableOrViewName, boolean retryAttempt) throws Throwable { Review Comment: better to rename the variable to `isRetry` which signifies its a boolean rather than retryAttempt which denotes a counter > Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache. > -- > > Key: PHOENIX-6968 > URL: https://issues.apache.org/jira/browse/PHOENIX-6968 > Project: Phoenix > Issue Type: Sub-task >Reporter: Rushabh Shah >Assignee: Rushabh Shah >Priority: Major > > Whenever we update metadata (like alter table add column, drop table), we > need to invalidate metadata cache entry (introduced by PHOENIX-6943) on all > the regionservers which has that cache entry. First step would be to issue an > invalidate command on all the regionservers irrespective of whether that > regionserver has the cache entry. We can further optimize by invalidating > only on RS that has that cache entry. > In PHOENIX-6988 we created PhoenixRegionServerEndpoint implementing > RegionServerCoprocessor. We can create a new method in this co-proc something > like invalidateCache(CacheEntry) to invalidate cache for a given > table/view/index. -- This message was sent by Atlassian Jira (v8.20.10#820010)
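The control flow implied by the javadoc under review ("we retry once before failing the operation") together with the suggested `isRetry` boolean can be sketched as below. This is a hedged illustration: `invalidateOnce` stands in for the real per-server RPC fan-out, and the exception type is generic; neither name comes from the PR.

```java
import java.util.function.BooleanSupplier;

public class RetryOnceSketch {
    // isRetry is a boolean flag, as the review suggests, not an attempt counter:
    // the method calls itself at most once with isRetry == true.
    static void invalidateWithRetries(BooleanSupplier invalidateOnce, boolean isRetry)
            throws Exception {
        if (invalidateOnce.getAsBoolean()) {
            return; // every regionserver acknowledged the invalidation
        }
        if (!isRetry) {
            // First failure: try exactly once more (the real code also
            // refreshes the regionserver list before this second attempt).
            invalidateWithRetries(invalidateOnce, true);
        } else {
            // Second failure: surface the error so the DDL operation aborts.
            throw new Exception("Metadata cache invalidation failed after one retry");
        }
    }
}
```

With a boolean rather than a counter, the termination argument is trivial to read: the recursive call only ever happens from the `isRetry == false` branch.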
[jira] [Commented] (PHOENIX-6968) Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache.
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17770538#comment-17770538 ] ASF GitHub Bot commented on PHOENIX-6968: - shahrs87 commented on code in PR #1691: URL: https://github.com/apache/phoenix/pull/1691#discussion_r1341640559 ## phoenix-core/src/it/java/org/apache/phoenix/end2end/InvalidateMetadataCacheIT.java: ## @@ -0,0 +1,141 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.phoenix.end2end; + +import org.apache.hadoop.hbase.regionserver.HRegionServer; +import org.apache.phoenix.query.BaseTest; +import org.apache.phoenix.thirdparty.com.google.common.collect.Maps; +import org.apache.phoenix.util.PropertiesUtil; +import org.apache.phoenix.util.ReadOnlyProps; +import org.junit.BeforeClass; +import org.junit.Test; +import org.junit.experimental.categories.Category; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.util.Map; +import java.util.Properties; + +import static org.apache.hadoop.hbase.coprocessor.CoprocessorHost.REGIONSERVER_COPROCESSOR_CONF_KEY; +import static org.apache.phoenix.coprocessor.MetaDataEndpointImpl.PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS; +import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES; +import static org.junit.Assert.fail; + +@Category({NeedsOwnMiniClusterTest.class }) +public class InvalidateMetadataCacheIT extends BaseTest { +private static final Logger LOGGER = LoggerFactory.getLogger(InvalidateMetadataCacheIT.class); + +@BeforeClass +public static synchronized void doSetup() throws Exception { +NUM_SLAVES_BASE = 2; +Map props = Maps.newHashMapWithExpectedSize(1); +// to fail fast in case of exception. +props.put("hbase.client.retries.number", String.valueOf(0)); +props.put(REGIONSERVER_COPROCESSOR_CONF_KEY, +FailingPhoenixRegionServerEndpoint.class.getName()); +// Setting phoenix metadata cache invalidation timeout to a small number to fail fast. +props.put(PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS, String.valueOf(2000)); +setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator())); +} + +/** + * Add FailingPhoenixRegionServerEndpoint as regionserver co-processor. + * Make one of the regionserver sleep in invalidateServerMetadataCache method. This will trigger + * TimeoutException in MetadataEndpointImpl#invalidateServerMetadataCacheWithRetries method. 
+ * Make sure that ALTER TABLE ADD COLUMN statement fails. + */ +@Test +public void testAddColumnWithTimeout() { +Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES); +String dataTableFullName = generateUniqueName(); +String ddl = getCreateTableStmt(dataTableFullName); +HRegionServer regionServerZero = utility.getMiniHBaseCluster().getRegionServer(0); +FailingPhoenixRegionServerEndpoint coprocForRS0 = +getFailingPhoenixRegionServerEndpoint(regionServerZero); +coprocForRS0.sleep(); +try(Connection conn = DriverManager.getConnection(getUrl(), props)) { +conn.createStatement().execute(ddl); +conn.createStatement().execute("ALTER TABLE " + dataTableFullName + +" ADD CF.col2 integer"); +fail("Shouldn't reach heere"); +} catch (Exception e) { +LOGGER.error("Exception while adding column", e); +// This is expected Review Comment: I thought that is kind of implicit. There is an `assert fail` statement in the try block and if it throws anything other than Exception, the test will fail. So it has to go in `catch (Exception)` block for the test to succeed. ## phoenix-core/src/it/java/org/apache/phoenix/end2end/FailingPhoenixRegionServerEndpoint.java: ## @@ -0,0 +1,72 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright
[jira] [Commented] (PHOENIX-6968) Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache.
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17770524#comment-17770524 ] ASF GitHub Bot commented on PHOENIX-6968: - ranganathg commented on code in PR #1691: URL: https://github.com/apache/phoenix/pull/1691#discussion_r1341607601 ## phoenix-core/src/it/java/org/apache/phoenix/end2end/InvalidateMetadataCacheIT.java: ## @@ -0,0 +1,141 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.phoenix.end2end; + +import org.apache.hadoop.hbase.regionserver.HRegionServer; +import org.apache.phoenix.query.BaseTest; +import org.apache.phoenix.thirdparty.com.google.common.collect.Maps; +import org.apache.phoenix.util.PropertiesUtil; +import org.apache.phoenix.util.ReadOnlyProps; +import org.junit.BeforeClass; +import org.junit.Test; +import org.junit.experimental.categories.Category; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.sql.Connection; +import java.sql.DriverManager; +import java.util.Map; +import java.util.Properties; + +import static org.apache.hadoop.hbase.coprocessor.CoprocessorHost.REGIONSERVER_COPROCESSOR_CONF_KEY; +import static org.apache.phoenix.coprocessor.MetaDataEndpointImpl.PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS; +import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES; +import static org.junit.Assert.fail; + +@Category({NeedsOwnMiniClusterTest.class }) +public class InvalidateMetadataCacheIT extends BaseTest { +private static final Logger LOGGER = LoggerFactory.getLogger(InvalidateMetadataCacheIT.class); + +@BeforeClass +public static synchronized void doSetup() throws Exception { +NUM_SLAVES_BASE = 2; +Map props = Maps.newHashMapWithExpectedSize(1); +// to fail fast in case of exception. +props.put("hbase.client.retries.number", String.valueOf(0)); +props.put(REGIONSERVER_COPROCESSOR_CONF_KEY, +FailingPhoenixRegionServerEndpoint.class.getName()); +// Setting phoenix metadata cache invalidation timeout to a small number to fail fast. +props.put(PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS, String.valueOf(2000)); +setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator())); +} + +/** + * Add FailingPhoenixRegionServerEndpoint as regionserver co-processor. + * Make one of the regionserver sleep in invalidateServerMetadataCache method. This will trigger + * TimeoutException in MetadataEndpointImpl#invalidateServerMetadataCacheWithRetries method. 
+ * Make sure that the ALTER TABLE ADD COLUMN statement fails. + */ +@Test +public void testAddColumnWithTimeout() { +Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES); +String dataTableFullName = generateUniqueName(); +String ddl = getCreateTableStmt(dataTableFullName); +HRegionServer regionServerZero = utility.getMiniHBaseCluster().getRegionServer(0); +FailingPhoenixRegionServerEndpoint coprocForRS0 = +getFailingPhoenixRegionServerEndpoint(regionServerZero); +coprocForRS0.sleep(); +try (Connection conn = DriverManager.getConnection(getUrl(), props)) { +conn.createStatement().execute(ddl); +conn.createStatement().execute("ALTER TABLE " + dataTableFullName + +" ADD CF.col2 integer"); +fail("Shouldn't reach here"); +} catch (Exception e) { +LOGGER.error("Exception while adding column", e); +// This is expected Review Comment: should we add assertTrue in this exception handler, as a completion of the assertion (Arrange - Act - Assert)? similarly below? > Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache. > -- > > Key: PHOENIX-6968 > URL: https://issues.apache.org/jira/browse/PHOENIX-6968 > Project: Phoenix > Issue Type: Sub-task >Reporter: Rushabh Shah >Assignee: Rushabh Shah >Priority: Major > >
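The reviewer's suggestion above (completing the Arrange-Act-Assert cycle by asserting on the caught exception instead of only logging it) could look roughly like the sketch below. It is deliberately framework-free; the `hasCause` helper and the use of `TimeoutException` as the expected cause are assumptions, since the concrete exception type depends on how MetaDataEndpointImpl wraps the invalidation failure.

```java
import java.sql.SQLException;
import java.util.concurrent.TimeoutException;

public class ExceptionAssertSketch {
    // Hypothetical helper: walk the cause chain for an expected type so the
    // catch block can end with an assertion rather than silently passing.
    static boolean hasCause(Throwable t, Class<? extends Throwable> expected) {
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            if (expected.isInstance(cur)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Simulate the wrapped exception an ALTER TABLE statement might
        // surface when server metadata cache invalidation times out.
        SQLException e = new SQLException("ALTER TABLE failed",
                new TimeoutException("cache invalidation timed out"));
        System.out.println(hasCause(e, TimeoutException.class));      // true
        System.out.println(hasCause(e, IllegalStateException.class)); // false
    }
}
```

In the test above, the catch block would then end with something like `assertTrue(hasCause(e, TimeoutException.class))` so an unexpected exception type fails the test instead of passing it.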
[jira] [Commented] (PHOENIX-6968) Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache.
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17770523#comment-17770523 ] ASF GitHub Bot commented on PHOENIX-6968: - ranganathg commented on code in PR #1691: URL: https://github.com/apache/phoenix/pull/1691#discussion_r1341622874 ## phoenix-core/src/it/java/org/apache/phoenix/end2end/FailingPhoenixRegionServerEndpoint.java: ## @@ -0,0 +1,72 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.phoenix.end2end; + +import com.google.protobuf.RpcCallback; +import com.google.protobuf.RpcController; +import org.apache.phoenix.coprocessor.PhoenixRegionServerEndpoint; +import org.apache.phoenix.coprocessor.generated.RegionServerEndpointProtos; +import org.apache.phoenix.protobuf.ProtobufUtil; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.IOException; + +import static org.apache.phoenix.coprocessor.MetaDataEndpointImpl.PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS; +import static org.apache.phoenix.coprocessor.MetaDataEndpointImpl.PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS_DEFAULT; + +public class FailingPhoenixRegionServerEndpoint extends PhoenixRegionServerEndpoint { +private static final Logger LOGGER = LoggerFactory.getLogger(FailingPhoenixRegionServerEndpoint.class); + +private boolean throwException; +private boolean shouldSleep; + +@Override +public void invalidateServerMetadataCache(RpcController controller, +RegionServerEndpointProtos.InvalidateServerMetadataCacheRequest request, + RpcCallback done) { +long metadataCacheInvalidationTimeoutMs = conf.getLong( +PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS, +PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS_DEFAULT); + +if (throwException) { +IOException ioe = new IOException("On purpose"); +ProtobufUtil.setControllerException(controller, ioe); +return; +} else if (shouldSleep) { +try { +// Sleeping for 2 seconds more than metadataCacheInvalidationTimeoutMs. +Thread.sleep(metadataCacheInvalidationTimeoutMs + 2000); +} catch (InterruptedException e) { +LOGGER.warn("Exception while sleeping in FailingPhoenixRegionServerEndpoint", e); +} +} else { +LOGGER.info("Invalidating server metadata cache"); Review Comment: should there be a call to invalidateServerMetadataCache() in the base class? how does the cache get invalidated without any call? > Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache. 
> -- > > Key: PHOENIX-6968 > URL: https://issues.apache.org/jira/browse/PHOENIX-6968 > Project: Phoenix > Issue Type: Sub-task > Reporter: Rushabh Shah > Assignee: Rushabh Shah > Priority: Major > > Whenever we update metadata (like alter table add column, drop table), we > need to invalidate the metadata cache entry (introduced by PHOENIX-6943) on all > the regionservers which have that cache entry. The first step is to issue an > invalidate command on all the regionservers, irrespective of whether a given > regionserver has the cache entry. We can further optimize by invalidating > only on the RS that has that cache entry. > In PHOENIX-6988 we created PhoenixRegionServerEndpoint implementing > RegionServerCoprocessor. We can create a new method in this co-proc, something > like invalidateCache(CacheEntry), to invalidate the cache for a given > table/view/index. -- This message was sent by Atlassian Jira (v8.20.10#820010)
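On the review question above (how the cache gets invalidated in the happy path), the usual shape is a failure-injecting subclass that throws or sleeps when told to, and otherwise delegates to the real implementation via `super`. Below is a minimal, HBase-free sketch of that pattern; `CacheEndpoint`, `FailingCacheEndpoint`, and the `invalidations` counter are hypothetical stand-ins, not the Phoenix classes.

```java
import java.io.IOException;

// Stand-in for the real endpoint; the counter just makes delegation observable.
class CacheEndpoint {
    int invalidations = 0;

    public void invalidateServerMetadataCache(String tableName) throws IOException {
        invalidations++; // the "real" invalidation work would happen here
    }
}

// Failure-injecting subclass, mirroring the FailingPhoenixRegionServerEndpoint
// idea: throw on demand, sleep past the timeout on demand, otherwise delegate.
class FailingCacheEndpoint extends CacheEndpoint {
    boolean throwException;
    long sleepMs; // 0 disables sleeping

    @Override
    public void invalidateServerMetadataCache(String tableName) throws IOException {
        if (throwException) {
            throw new IOException("On purpose");
        } else if (sleepMs > 0) {
            try {
                Thread.sleep(sleepMs); // simulate a slow server; the caller times out
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        } else {
            // Delegate so the cache really is invalidated in the happy path,
            // which is the point the reviewer raises above.
            super.invalidateServerMetadataCache(tableName);
        }
    }
}
```

With no flags set, a call flows through to the base class and the invalidation actually happens; with `throwException` or a sleep configured, the test provokes the error and timeout paths respectively.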
[jira] [Commented] (PHOENIX-6968) Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache.
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17770507#comment-17770507 ] ASF GitHub Bot commented on PHOENIX-6968: - shahrs87 commented on PR #1691: URL: https://github.com/apache/phoenix/pull/1691#issuecomment-1741177954 @stoty @tkhurana @virajjasani This PR is ready for review now. Can you please help review? Thank you.
[jira] [Commented] (PHOENIX-6968) Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache.
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17769334#comment-17769334 ] ASF GitHub Bot commented on PHOENIX-6968: - shahrs87 commented on code in PR #1691: URL: https://github.com/apache/phoenix/pull/1691#discussion_r1337725512 ## phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java: ## @@ -418,7 +420,7 @@ public abstract class BaseTest { } private static final String ORG_ID = "00D3XHP"; -protected static int NUM_SLAVES_BASE = 1; +protected static int NUM_SLAVES_BASE = 3; Review Comment: @stoty This PR is not ready for review yet. I was just trying to run all the tests to see which ones fails in the first pass. I should have labelled the PR as Draft PR. Sorry for the confusion.
[jira] [Commented] (PHOENIX-6968) Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache.
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17769318#comment-17769318 ] ASF GitHub Bot commented on PHOENIX-6968: - stoty commented on code in PR #1691: URL: https://github.com/apache/phoenix/pull/1691#discussion_r1337659564 ## phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java: ## @@ -418,7 +420,7 @@ public abstract class BaseTest { } private static final String ORG_ID = "00D3XHP"; -protected static int NUM_SLAVES_BASE = 1; +protected static int NUM_SLAVES_BASE = 3; Review Comment: This doesn't look like a good idea. It would be better to override this on the tests where it is needed.
[jira] [Commented] (PHOENIX-6968) Create PhoenixRegionServerEndpoint#invalidateCache method to invalidate cache.
[ https://issues.apache.org/jira/browse/PHOENIX-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17768871#comment-17768871 ] ASF GitHub Bot commented on PHOENIX-6968: - shahrs87 opened a new pull request, #1691: URL: https://github.com/apache/phoenix/pull/1691 (no comment)
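The invalidation flow the issue describes (broadcast an invalidate call to every regionserver, bound the wait with the PHOENIX_METADATA_CACHE_INVALIDATION_TIMEOUT_MS budget, and retry once before failing the DDL) can be sketched with plain java.util.concurrent. All names below are hypothetical stand-ins, not the Phoenix API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

public class InvalidateBroadcastSketch {
    // Fan out one call per server, wait for all of them under one overall
    // timeout, and retry the whole broadcast once before surfacing the failure.
    // The retry re-invokes servers that already succeeded, which is acceptable
    // because cache invalidation is idempotent.
    public static void invalidateAll(List<String> servers,
                                     Consumer<String> invalidateOnServer,
                                     long timeoutMs) throws Exception {
        try {
            broadcast(servers, invalidateOnServer, timeoutMs);
        } catch (Exception firstAttempt) {
            broadcast(servers, invalidateOnServer, timeoutMs); // single retry
        }
    }

    private static void broadcast(List<String> servers,
                                  Consumer<String> invalidateOnServer,
                                  long timeoutMs) throws Exception {
        List<CompletableFuture<Void>> futures = new ArrayList<>();
        for (String server : servers) {
            futures.add(CompletableFuture.runAsync(
                    () -> invalidateOnServer.accept(server)));
        }
        // A sleeping or hung server turns into a TimeoutException here, which
        // is exactly what the integration test provokes via the sleeping
        // coprocessor.
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
                .get(timeoutMs, TimeUnit.MILLISECONDS);
    }
}
```

A caller would pass the per-server RPC as the `invalidateOnServer` lambda; whether to use a dedicated executor instead of the common ForkJoinPool is the same open question raised in the review thread.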