[
https://issues.apache.org/jira/browse/PHOENIX-6758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Viraj Jasani resolved PHOENIX-6758.
-----------------------------------
Resolution: Fixed
Thanks for the reviews [~gjacoby] [~kozdemir] [~apurtell]
> During HBase 2 upgrade Phoenix Self healing task fails to create server side connection before reading SYSTEM.TASK
> ------------------------------------------------------------------------------------------------------------------
>
> Key: PHOENIX-6758
> URL: https://issues.apache.org/jira/browse/PHOENIX-6758
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 5.1.2
> Reporter: Viraj Jasani
> Assignee: Viraj Jasani
> Priority: Major
> Fix For: 5.2.0, 5.1.3
>
>
> When the HBase master is running on a 1.x version and the regionservers are
> on a 2.x version (specifically the system rsgroup regionservers),
> TaskRegionObserver fails to initiate a connection to read SYSTEM.TASK
> records. While this task itself is not a customer-facing use case, we should
> fix its inability to initiate the connection the first time. Once the
> connection is created, it is cached in CQSI anyway.
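>
> For context, here is a minimal sketch (not the actual TaskRegionObserver
> code; the class name and the query are illustrative) of how the self-healing
> task obtains a server-side connection through the
> QueryUtil#getConnectionOnServer entry point seen in the stack trace below and
> then reads SYSTEM.TASK:
> {code:java}
> // Minimal sketch: a server-side task obtaining a Phoenix connection.
> // The very first call goes through CQSI#init, which is where the
> // upgrade-time failure below is triggered; later calls reuse the cached CQSI.
> import java.sql.Connection;
> import java.sql.ResultSet;
> import java.sql.Statement;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.phoenix.util.QueryUtil;
>
> public class SelfHealingTaskSketch {
>     public static void readTasks(Configuration conf) throws Exception {
>         // getConnectionOnServer creates (or reuses) the server-side CQSI
>         try (Connection conn = QueryUtil.getConnectionOnServer(conf);
>              Statement stmt = conn.createStatement();
>              ResultSet rs = stmt.executeQuery("SELECT * FROM SYSTEM.TASK")) {
>             while (rs.next()) {
>                 // process pending task rows here
>             }
>         }
>     }
> }
> {code}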
>
> Detailed stacktrace:
> {code:java}
> ERROR [pool-54-thread-2] coprocessor.TaskRegionObserver: SelfHealingTask failed!
> org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family table does not exist in region hbase:meta,,1.1588230740 in table 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '8192', METADATA => {'CACHE_DATA_IN_L1' => 'true'}}
> at org.apache.hadoop.hbase.regionserver.HRegion.checkFamily(HRegion.java:8476)
> at org.apache.hadoop.hbase.regionserver.HRegion.prepareGet(HRegion.java:8008)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2586)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2530)
> at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45815)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:385)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
>
> at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:138)
> at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1542)
> at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1936)
> at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:3084)
> at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1111)
> at org.apache.phoenix.compile.CreateTableCompiler$CreateTableMutationPlan.execute(CreateTableCompiler.java:420)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:415)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:397)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:396)
> at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:384)
> at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1906)
> at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:3290)
> at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:3253)
> at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
> at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:3253)
> at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
> at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:144)
> at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
> at java.sql.DriverManager.getConnection(DriverManager.java:664)
> at java.sql.DriverManager.getConnection(DriverManager.java:208)
> at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:422)
> at org.apache.phoenix.util.QueryUtil.getConnectionOnServer(QueryUtil.java:400)
> at org.apache.phoenix.util.QueryUtil.getConnectionOnServer(QueryUtil.java:381)
> at org.apache.phoenix.coprocessor.TaskRegionObserver$SelfHealingTask.run(TaskRegionObserver.java:162)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}
>
> We should support a fallback to Admin#tableExists in the Phoenix server-side
> handling of CQSI#init.
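>
> Illustrative sketch of such a fallback (the helper methods below are
> hypothetical, not the committed patch; the only HBase API assumed is
> Admin#tableExists):
> {code:java}
> import java.io.IOException;
>
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.Admin;
>
> public final class TableExistsFallback {
>
>     // Fallback existence check for the server-side CQSI#init path when the
>     // lookup against hbase:meta fails during a mixed-version rolling upgrade.
>     static boolean tableAlreadyExists(Admin admin, String fullTableName)
>             throws IOException {
>         return admin.tableExists(TableName.valueOf(fullTableName));
>     }
>
>     // If the table is already present, skip the CREATE TABLE round trip
>     // entirely; otherwise fall back to the existing creation logic
>     // (represented here by a plain Runnable).
>     static void ensureTableCreatedWithFallback(Admin admin, String fullTableName,
>             Runnable existingCreateLogic) throws IOException {
>         if (tableAlreadyExists(admin, fullTableName)) {
>             return;
>         }
>         existingCreateLogic.run();
>     }
> }
> {code}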
--
This message was sent by Atlassian Jira
(v8.20.10#820010)