[ https://issues.apache.org/jira/browse/PHOENIX-5041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16699990#comment-16699990 ]

Ievgen Nekrashevych edited comment on PHOENIX-5041 at 11/27/18 6:50 AM:
------------------------------------------------------------------------

[~ckulkarni] autoUpgrade is enabled (the default) and the doNotUpgrade property 
is only ever set internally (I don't set any properties for it myself) - there 
is no additional configuration beyond mapping system tables to namespaces. The 
TS namespace is created properly, and the table is also created properly. This 
happens only with the index.
I've also noticed that when ConnectionQueryServices throws 
UpgradeRequiredException with the latest timestamp to migrate system tables, 
SYSTEM:MUTEX is ALWAYS locked before checking whether an upgrade is needed. 
This leads to UpgradeInProgressException on concurrent access to tables, which 
is super bad!
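To illustrate the ordering problem: here is a minimal, purely illustrative sketch (class and method names are hypothetical, not Phoenix's actual internals). Taking the mutex before the upgrade check opens a window in which concurrent clients fail even though no upgrade will happen; checking first never touches the mutex in the common case.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch only -- names are hypothetical, not Phoenix code.
public class MutexOrderSketch {
    private final AtomicBoolean systemMutex = new AtomicBoolean(false);

    // Behaviour described above: SYSTEM:MUTEX is taken BEFORE the upgrade
    // check, so a concurrent caller fails even when no upgrade is needed.
    public boolean lockThenCheck(boolean upgradeNeeded) {
        if (!systemMutex.compareAndSet(false, true)) {
            throw new IllegalStateException("UpgradeInProgressException");
        }
        try {
            return upgradeNeeded;
        } finally {
            // Released only after the (possibly unnecessary) check --
            // this is the window in which concurrent clients fail.
            systemMutex.set(false);
        }
    }

    // Ordering this comment argues for: check first, take the mutex
    // only when an upgrade is really required.
    public boolean checkThenLock(boolean upgradeNeeded) {
        if (!upgradeNeeded) {
            return false; // mutex never touched; concurrent clients unaffected
        }
        if (!systemMutex.compareAndSet(false, true)) {
            throw new IllegalStateException("UpgradeInProgressException");
        }
        return true;
    }
}
```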

NOTE: I'm doing a FRESH installation; no upgrade scenario is tested. All 
system tables are created correctly. Only when index creation is triggered (I 
guess when the region server establishes a connection to the master for the 
first time) does it try to migrate the system tables twice.

If needed, I can provide a docker/docker-compose script to reproduce the issue.



> can't create local index due to UpgradeRequiredException
> --------------------------------------------------------
>
>                 Key: PHOENIX-5041
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-5041
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.14.0, 4.14.1
>            Reporter: Ievgen Nekrashevych
>            Priority: Major
>
> With the isNamespaceMappingEnabled property set on both client and server, 
> creating a local index on a fresh installation throws 
> UpgradeRequiredException; however, running EXECUTE UPGRADE says no upgrade 
> is required:
> {code}
> sql> create local index if not exists "BLATEST_INDEX" on BLA2."test" 
> (STR,STARTTIME)
> [2018-11-22 09:39:47] [00000][-1] Error -1 (00000) : Error while executing 
> SQL "create local index if not exists "BLATEST_INDEX" on BLA2."test" 
> (STR,STARTTIME)": Remote driver error: RuntimeException: 
> java.sql.SQLException: ERROR 2011 (INT13): Operation not allowed since 
> cluster hasn't been upgraded. Call EXECUTE UPGRADE.  BLA2:BLATEST_INDEX -> 
> SQLException: ERROR 2011 (INT13): Operation not allowed since cluster hasn't 
> been upgraded. Call EXECUTE UPGRADE.  BLA2:BLATEST_INDEX -> 
> RemoteWithExtrasException: org.apache.hadoop.hbase.DoNotRetryIOException: 
> ERROR 2011 (INT13): Operation not allowed since cluster hasn't been upgraded. 
> Call EXECUTE UPGRADE.  BLA2:BLATEST_INDEX
> [2018-11-22 09:39:47]         at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:112)
> [2018-11-22 09:39:47]         at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1812)
> [2018-11-22 09:39:47]         at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16372)
> [2018-11-22 09:39:47]         at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7996)
> [2018-11-22 09:39:47]         at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1986)
> [2018-11-22 09:39:47]         at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1968)
> [2018-11-22 09:39:47]         at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652)
> [2018-11-22 09:39:47]         at 
> org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
> [2018-11-22 09:39:47]         at 
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> [2018-11-22 09:39:47]         at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
> [2018-11-22 09:39:47]         at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
> [2018-11-22 09:39:47] Caused by: 
> org.apache.phoenix.exception.UpgradeRequiredException: Operation not allowed 
> since cluster hasn't been upgraded. Call EXECUTE UPGRADE.
> [2018-11-22 09:39:47]         at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2596)
> [2018-11-22 09:39:47]         at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2499)
> [2018-11-22 09:39:47]         at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
> [2018-11-22 09:39:47]         at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2499)
> [2018-11-22 09:39:47]         at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
> [2018-11-22 09:39:47]         at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
> [2018-11-22 09:39:47]         at 
> org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
> [2018-11-22 09:39:47]         at 
> java.sql.DriverManager.getConnection(DriverManager.java:664)
> [2018-11-22 09:39:47]         at 
> java.sql.DriverManager.getConnection(DriverManager.java:208)
> [2018-11-22 09:39:47]         at 
> org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:400)
> [2018-11-22 09:39:47]         at 
> org.apache.phoenix.util.QueryUtil.getConnectionOnServer(QueryUtil.java:379)
> [2018-11-22 09:39:47]         at 
> org.apache.phoenix.util.QueryUtil.getConnectionOnServer(QueryUtil.java:360)
> [2018-11-22 09:39:47]         at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1739)
> [2018-11-22 09:39:47]         ... 9 more
> {code}
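> For reference, the namespace mapping in play is enabled with the following 
> hbase-site.xml fragment (set on both client and server; property names per 
> the Phoenix namespace-mapping documentation):
> {code}
> <property>
>     <name>phoenix.schema.isNamespaceMappingEnabled</name>
>     <value>true</value>
> </property>
> <property>
>     <name>phoenix.schema.mapSystemTablesToNamespace</name>
>     <value>true</value>
> </property>
> {code}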
> Reproducible with the following script (launched through the Query Server):
> {code}
> create schema if not exists TS
> create table if not exists TS.TEST (STR varchar not null,INTCOL bigint not 
> null, STARTTIME integer, DUMMY integer default 0 CONSTRAINT PK PRIMARY KEY 
> (STR, INTCOL))
> create local index if not exists "TEST_INDEX" on TS.TEST (STR,STARTTIME)
> {code}
> Note: this is a fresh installation of 4.14.1 on top of cdh5.14.2.
> I didn't test it on Phoenix 5.0. I'm pretty sure this is a result of PHOENIX-4579.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
