[ 
https://issues.apache.org/jira/browse/PHOENIX-7281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17828118#comment-17828118
 ] 

Rushabh Shah commented on PHOENIX-7281:
---------------------------------------

[Jenkins 
build|https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1778/59/testReport/org.apache.phoenix.end2end/LogicalTableNameExtendedIT/testUpdatePhysicalTableName_tenantViews/]
Steps to reproduce:
 # Create a Phoenix (logical) table named {{TEST_ENTITY.T_000001}} with HBase 
(physical) table name {{TEST_ENTITY:T_000001}} (namespace mapping is enabled in 
the test)
 # Create a global view {{TEST_ENTITY.GV_000001}} on the base table
 # Create an index on the global view: {{TEST_ENTITY.IDX_GV_000001}}
 # Create a tenant view on the global view: {{TEST_ENTITY.ECZ}} (tenant ID: 
{{00D0t0000000001}})
 # Create snapshot {{T_000001-Snapshot}} of table {{TEST_ENTITY:T_000001}}
 # Create a new table {{TEST_ENTITY:NEW_TBL_T_000001}} by cloning the contents 
of snapshot {{T_000001-Snapshot}}
 # Change the physical table name from {{T_000001}} to {{NEW_TBL_T_000001}} for 
the logical table {{T_000001}} using the following SQL statement:
{{UPSERT INTO SYSTEM.CATALOG (TENANT_ID, TABLE_SCHEM, TABLE_NAME, COLUMN_NAME, 
COLUMN_FAMILY, PHYSICAL_TABLE_NAME) VALUES (null, 'TEST_ENTITY', 'T_000001', 
NULL, NULL, 'NEW_TBL_T_000001')}}
This is a plain SQL statement: it neither updates the LAST_DDL_TIMESTAMP for 
the logical table {{TEST_ENTITY.T_000001}} nor invalidates the regionserver 
cache entry for that table.
 # Drop the old physical table: {{TEST_ENTITY:T_000001}}
 # Upsert 1 row into the tenant view {{TEST_ENTITY.ECZ}}:
{{UPSERT INTO TEST_ENTITY.ECZ (ID,ZID,COL4,COL5,COL6,COL7,COL8,COL9) 
VALUES(?,?,?,?,?,?,?,?)}}
 # For the above upsert query, the last DDL timestamps were validated for 
TEST_ENTITY.T_000001, TEST_ENTITY.GV_000001, 00D0t0000000001/TEST_ENTITY.ECZ, 
and TEST_ENTITY.IDX_GV_000001, and all of the checks passed.
 # The upsert query failed with the following exception:

{noformat}
2024-03-18T14:51:27,907 DEBUG [Listener at localhost/59151] 
cache.ServerCacheClient(285): {TenantId=00D0t0000000001} Adding cache entry to 
be sent for 
region=TEST_ENTITY:T_000001,,1710798677490.b9746c12cd5bd5005b3c4a7404e40493., 
hostname=localhost,59159,1710798639870, seqNum=2
2024-03-18T14:51:27,921 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=59159] ipc.CallRunner(138): 
callId: 423 service: ClientService methodName: ExecService size: 490 
connection: 127.0.0.1:59183 deadline: 1710799887920, 
exception=org.apache.hadoop.hbase.NotServingRegionException: 
TEST_ENTITY:T_000001,,1710798677490.b9746c12cd5bd5005b3c4a7404e40493. is not 
online on localhost,59159,1710798639870
2024-03-18T14:51:28,029 WARN [hconnection-0x605c7a9e-shared-pool-4] 
client.SyncCoprocessorRpcChannel(49): Call failed on IOException
org.apache.hadoop.hbase.TableNotFoundException: TEST_ENTITY:T_000001
at 
org.apache.hadoop.hbase.client.ConnectionImplementation.getTableState(ConnectionImplementation.java:2237)
 ~[hbase-client-2.5.7-hadoop3.jar:2.5.7-hadoop3]
at 
org.apache.hadoop.hbase.client.ConnectionImplementation.isTableDisabled(ConnectionImplementation.java:742)
 ~[hbase-client-2.5.7-hadoop3.jar:2.5.7-hadoop3]
at 
org.apache.hadoop.hbase.client.RegionServerCallable.prepare(RegionServerCallable.java:214)
 ~[hbase-client-2.5.7-hadoop3.jar:2.5.7-hadoop3]
at 
org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:101)
 ~[hbase-client-2.5.7-hadoop3.jar:2.5.7-hadoop3]
at 
org.apache.hadoop.hbase.client.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:93)
 ~[hbase-client-2.5.7-hadoop3.jar:2.5.7-hadoop3]
at 
org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callMethod(SyncCoprocessorRpcChannel.java:47)
 ~[hbase-client-2.5.7-hadoop3.jar:2.5.7-hadoop3]
at 
org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService$Stub.addServerCache(ServerCachingProtos.java:13685)
 ~[classes/:?]
at 
org.apache.phoenix.cache.ServerCacheClient$3.call(ServerCacheClient.java:541) 
~[classes/:?]
at 
org.apache.phoenix.cache.ServerCacheClient$3.call(ServerCacheClient.java:536) 
~[classes/:?]
at org.apache.hadoop.hbase.client.HTable.lambda$null$22(HTable.java:1140) 
~[hbase-client-2.5.7-hadoop3.jar:2.5.7-hadoop3]
at io.opentelemetry.context.Context.lambda$wrap$2(Context.java:224) 
~[opentelemetry-context-1.15.0.jar:1.15.0]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_292]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
~[?:1.8.0_292]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
~[?:1.8.0_292]
at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_292]
{noformat}
This is another case of a test changing table metadata with a plain upsert 
query, without updating the LAST_DDL_TIMESTAMP or invalidating the cache on the 
regionservers.
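The failure mode can be sketched with a toy model (the class and field names 
below are illustrative, not Phoenix internals): the regionserver's cached entry 
still carries the old physical name, and because the plain upsert never bumped 
LAST_DDL_TIMESTAMP, the timestamp validation in step 9 passes and the write in 
step 8 is routed to the dropped physical table.

```java
import java.util.HashMap;
import java.util.Map;

public class StaleCacheDemo {
    // Toy stand-in for a server-side metadata cache entry of a logical table.
    static class CachedTable {
        String physicalName;
        long lastDdlTimestamp;
        CachedTable(String physicalName, long lastDdlTimestamp) {
            this.physicalName = physicalName;
            this.lastDdlTimestamp = lastDdlTimestamp;
        }
    }

    public static void main(String[] args) {
        Map<String, CachedTable> serverCache = new HashMap<>();
        serverCache.put("TEST_ENTITY.T_000001",
                new CachedTable("TEST_ENTITY:T_000001", 100L));

        // A plain UPSERT to SYSTEM.CATALOG changes the physical name, but
        // LAST_DDL_TIMESTAMP stays at 100 and the cache is not invalidated.
        long catalogTimestamp = 100L;
        String catalogPhysicalName = "TEST_ENTITY:NEW_TBL_T_000001";

        CachedTable cached = serverCache.get("TEST_ENTITY.T_000001");
        // Timestamp validation passes even though the cached physical name
        // is stale, so the write goes to the old, dropped physical table.
        boolean validationPasses = cached.lastDdlTimestamp == catalogTimestamp;
        System.out.println(validationPasses + " " + cached.physicalName);
    }
}
```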

Probable solution:
 # Change the upsert query from step 7 to a metadata RPC which will do the 
following:
 ## Grab the lock on the table.
 ## Invalidate the cache on all regionservers.
 ## Update the physical table name to the new table.
 ## Update the LAST_DDL_TIMESTAMP of the table.
 ## Release the lock.
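Assuming such an RPC follows the usual lock/invalidate/update pattern, the 
intended ordering can be sketched as below (method and step names are 
hypothetical, not actual Phoenix APIs; the real work would live in 
coprocessor-side metadata code):

```java
import java.util.ArrayList;
import java.util.List;

public class RenamePhysicalTableSketch {
    static List<String> steps = new ArrayList<>();

    // Hypothetical server-side handler illustrating the proposed ordering.
    static void renamePhysicalTable(String logicalTable, String newPhysicalName) {
        steps.add("lock:" + logicalTable);
        try {
            // Invalidate before the catalog write so no regionserver keeps
            // serving the old physical name with a still-valid timestamp.
            steps.add("invalidate-cache:" + logicalTable);
            steps.add("set-physical-name:" + newPhysicalName);
            steps.add("bump-last-ddl-timestamp:" + logicalTable);
        } finally {
            steps.add("unlock:" + logicalTable);
        }
    }

    public static void main(String[] args) {
        renamePhysicalTable("TEST_ENTITY.T_000001", "NEW_TBL_T_000001");
        System.out.println(String.join("\n", steps));
    }
}
```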

> Test failure 
> LogicalTableNameExtendedIT#testUpdatePhysicalTableName_tenantViews
> -------------------------------------------------------------------------------
>
>                 Key: PHOENIX-7281
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-7281
>             Project: Phoenix
>          Issue Type: Sub-task
>            Reporter: Rushabh Shah
>            Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)
