[jira] [Resolved] (PHOENIX-6855) Upgrade from 4.7 to 5+ fails if any of the local indexes exist.

2023-01-18 Thread Sergey Soldatov (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov resolved PHOENIX-6855.
--
Fix Version/s: 5.1.4
   Resolution: Fixed

[~stoty] Thank you for the review!

> Upgrade from 4.7 to 5+ fails if any of the local indexes exist. 
> 
>
> Key: PHOENIX-6855
> URL: https://issues.apache.org/jira/browse/PHOENIX-6855
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.2.0, 5.1.3
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
> Fix For: 5.1.4
>
>
> During the upgrade from 4.7 we recreate local indexes, and we do it before
> the SYSTEM.CATALOG table is completely updated and before the other system
> tables are created. As a result the upgrade fails with different types of
> exceptions (column not found, table not found). To work correctly we need to
> execute the local index upgrade later, when all system tables have been
> created and SYSTEM.CATALOG is up to date.





[jira] [Created] (PHOENIX-6855) Upgrade from 4.7 to 5+ fails if any of the local indexes exist.

2023-01-16 Thread Sergey Soldatov (Jira)
Sergey Soldatov created PHOENIX-6855:


 Summary: Upgrade from 4.7 to 5+ fails if any of the local indexes 
exist. 
 Key: PHOENIX-6855
 URL: https://issues.apache.org/jira/browse/PHOENIX-6855
 Project: Phoenix
  Issue Type: Bug
  Components: core
Affects Versions: 5.1.3
Reporter: Sergey Soldatov
Assignee: Sergey Soldatov


During the upgrade from 4.7 we recreate local indexes, and we do it before
the SYSTEM.CATALOG table is completely updated and before the other system
tables are created. As a result the upgrade fails with different types of
exceptions (column not found, table not found). To work correctly we need to
execute the local index upgrade later, when all system tables have been
created and SYSTEM.CATALOG is up to date.
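
A minimal sketch of the corrected ordering (hypothetical helper names, not the
actual Phoenix upgrade code); the point is only that local index recreation
must run last:

{code:java}
// Sketch only: these helpers are hypothetical stand-ins for the real
// upgrade steps, shown to make the required ordering explicit.
abstract class UpgradeFrom47Sketch {
    abstract void finishSystemCatalogUpgrade(); // complete SYSTEM.CATALOG changes
    abstract void createOtherSystemTables();    // SYSTEM.STATS, SYSTEM.FUNCTION, ...
    abstract void recreateLocalIndexes();       // previously ran too early

    final void upgrade() {
        finishSystemCatalogUpgrade();
        createOtherSystemTables();
        // Running this step first is what produced the "column not found" /
        // "table not found" exceptions described above.
        recreateLocalIndexes();
    }
}
{code}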





[jira] [Created] (PHOENIX-6721) CSV bulkload tool fails with FileNotFoundException if --output points to an S3 location

2022-05-31 Thread Sergey Soldatov (Jira)
Sergey Soldatov created PHOENIX-6721:


 Summary: CSV bulkload tool fails with FileNotFoundException if 
--output points to an S3 location
 Key: PHOENIX-6721
 URL: https://issues.apache.org/jira/browse/PHOENIX-6721
 Project: Phoenix
  Issue Type: Bug
  Components: core
Reporter: Sergey Soldatov
Assignee: Sergey Soldatov


We were trying to use the CSV bulkload tool with HBase/Phoenix running on top
of AWS S3 and found that once the --output parameter points to an S3 location,
the job fails with a FileNotFoundException (FNFE).
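
A likely failure mode for jobs like this (an assumption on our side, not a
confirmed root cause) is resolving output paths against the default HDFS
filesystem instead of the filesystem of the --output path. A minimal sketch of
the difference, using standard Hadoop APIs and a hypothetical bucket name:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OutputFsSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical S3 output location, like the --output value above.
        Path output = new Path("s3a://my-bucket/phoenix-bulkload/out");

        // Problematic pattern: returns the *default* filesystem (usually HDFS),
        // so files written under an s3a:// path are later not found there.
        FileSystem defaultFs = FileSystem.get(conf);

        // Correct pattern: resolve the filesystem from the output path itself.
        FileSystem outputFs = output.getFileSystem(conf);

        System.out.println(defaultFs.getUri() + " vs " + outputFs.getUri());
    }
}
{code}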





[jira] [Resolved] (PHOENIX-6579) ACL check doesn't honor the namespace mapping for mapped views.

2022-01-11 Thread Sergey Soldatov (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov resolved PHOENIX-6579.
--
Fix Version/s: 5.1.3
   Resolution: Fixed

> ACL check doesn't honor the namespace mapping for mapped views.
> ---
>
> Key: PHOENIX-6579
> URL: https://issues.apache.org/jira/browse/PHOENIX-6579
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.2
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
> Fix For: 5.1.3
>
>
> When namespace mapping and ACLs are enabled and a user tries to create a
> view on top of an existing HBase table, the query fails if they don't have
> permissions for the default namespace.
> {noformat}
> *Error: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions (user=admin/ad...@example.com, scope=default:my_ns.my_table, 
> action=[READ])
>  at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.requireAccess(PhoenixAccessController.java:606)
>  at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.preCreateTable(PhoenixAccessController.java:201)
>  at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$2.call(PhoenixMetaDataCoprocessorHost.java:171)
>  at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$2.call(PhoenixMetaDataCoprocessorHost.java:168)
>  at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$PhoenixObserverOperation.callObserver(PhoenixMetaDataCoprocessorHost.java:86)
>  at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.execOperation(PhoenixMetaDataCoprocessorHost.java:106)
>  at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.preCreateTable(PhoenixMetaDataCoprocessorHost.java:168)
>  at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1900)
>  at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17317)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8313)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2499)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2481)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42286)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:418)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318) 
> (state=08000,code=101)
>  {noformat}
> That happens because the MetaData endpoint implementation still uses
> _SchemaUtil.getTableNameAsBytes(schemaName, tableName)_ for the mapped view,
> which knows nothing about namespace mapping, so the ACL check goes against
> 'default:schema.table'. It can easily be fixed by replacing the call with
> _SchemaUtil.getPhysicalHBaseTableName(schemaName, tableName,
> isNamespaceMapped).getBytes();_
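
The proposed one-line fix, shown in isolation (schemaName, tableName and
isNamespaceMapped stand in for the values already available at the call site in
the MetaData endpoint):

{code:java}
// Before: ignores namespace mapping, so the ACL check targets
// "default:my_ns.my_table".
byte[] physicalNameBefore = SchemaUtil.getTableNameAsBytes(schemaName, tableName);

// After: uses the physical HBase name ("my_ns:my_table"), so the ACL check
// targets the namespace the user actually has permissions on.
byte[] physicalNameAfter =
    SchemaUtil.getPhysicalHBaseTableName(schemaName, tableName, isNamespaceMapped)
              .getBytes();
{code}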





[jira] [Resolved] (PHOENIX-6619) Secondary indexes on columns with a default value work incorrectly

2022-01-05 Thread Sergey Soldatov (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov resolved PHOENIX-6619.
--
Resolution: Duplicate

> Secondary indexes on columns with a default value work incorrectly
> ---
>
> Key: PHOENIX-6619
> URL: https://issues.apache.org/jira/browse/PHOENIX-6619
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.2
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
>
> When we create an index on a column that has a default value, the default
> value is always used in the index table.
> {noformat}
> 0: jdbc:phoenix:> CREATE TABLE T5(I1 INTEGER NOT NULL, S1 VARCHAR NOT NULL, 
> D1 DATE  DEFAULT DATE'1900-01-01', CONSTRAINT PK PRIMARY KEY(I1,S1));
> No rows affected (1.165 seconds)
> 0: jdbc:phoenix:> upsert into T5 values (1, 'test', '1900-02-02');
> 1 row affected (0.086 seconds)
> 0: jdbc:phoenix:> create index I5 on T5 (d1);
> select * from t5; 
> 1 row affected (6.115 seconds)
> 0: jdbc:phoenix:> select * from t5;
> +----+------+------------+
> | I1 |  S1  |     D1     |
> +----+------+------------+
> | 1  | test | 1900-01-01 |
> +----+------+------------+
> 1 row selected (0.261 seconds)
> 0: jdbc:phoenix:> drop table t5;
> No rows affected (1.552 seconds)
> 0: jdbc:phoenix:> CREATE TABLE T5(I1 INTEGER NOT NULL, S1 VARCHAR NOT NULL, 
> D1 DATE  DEFAULT DATE'1900-01-01', CONSTRAINT PK PRIMARY KEY(I1,S1));
> No rows affected (1.162 seconds)
> 0: jdbc:phoenix:> upsert into T5 values (1, 'test', '1900-02-02');
> 1 row affected (0.082 seconds)
> 0: jdbc:phoenix:> select * from t5;
> +----+------+------------+
> | I1 |  S1  |     D1     |
> +----+------+------------+
> | 1  | test | 1900-02-02 |
> +----+------+------------+
> 1 row selected (0.141 seconds)
> 0: jdbc:phoenix:> create index I5 on T5 (d1);
> 1 row affected (6.065 seconds)
> 0: jdbc:phoenix:> select * from t5;
> +----+------+------------+
> | I1 |  S1  |     D1     |
> +----+------+------------+
> | 1  | test | 1900-01-01 |
> +----+------+------------+
> 1 row selected (0.268 seconds)
> 0: jdbc:phoenix:> upsert into T5 values (2, 'test2', '1900-03-03');
> 1 row affected (0.088 seconds)
> 0: jdbc:phoenix:> select * from t5;
> +----+-------+------------+
> | I1 |  S1   |     D1     |
> +----+-------+------------+
> | 1  | test  | 1900-01-01 |
> | 2  | test2 | 1900-01-01 |
> +----+-------+------------+
> 2 rows selected (0.278 seconds)
> {noformat}
> This may also lead to an exception during index creation:
> {noformat}
> 0: jdbc:phoenix:> CREATE TABLE T6 (C1 CHAR(10) NOT NULL default ' '  , I1 
> CHAR(14) DEFAULT ' ',CONSTRAINT PK PRIMARY KEY(C1));
> No rows affected (1.163 seconds)
> 0: jdbc:phoenix:> create index i6 on t6 (I1, C1);
> java.lang.ArrayIndexOutOfBoundsException: 127
>   at 
> org.apache.phoenix.index.IndexMaintainer.initCachedState(IndexMaintainer.java:1763)
>   at 
> org.apache.phoenix.index.IndexMaintainer.<init>(IndexMaintainer.java:629)
>   at 
> org.apache.phoenix.index.IndexMaintainer.create(IndexMaintainer.java:146)
>   at 
> org.apache.phoenix.schema.PTableImpl.getIndexMaintainer(PTableImpl.java:1660)
>   at 
> org.apache.phoenix.compile.ServerBuildIndexCompiler.compile(ServerBuildIndexCompiler.java:103)
>   at 
> org.apache.phoenix.schema.MetaDataClient.getMutationPlanForBuildingIndex(MetaDataClient.java:1391)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1400)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1811)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:547)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:513)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:512)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:500)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:2162)
>   at sqlline.Commands.executeSingleQuery(Commands.java:1054)
>   at sqlline.Commands.execute(Commands.java:1003)
>   at sqlline.Commands.sql(Commands.java:967)
>   at sqlline.SqlLine.dispatch(SqlLine.java:734)
>   at sqlline.SqlLine.begin(SqlLine.java:541)
>   at sqlline.SqlLine.start(SqlLine.java:267)
>   at sqlline.SqlLine.main(SqlLine.java:206)
> 0: jdbc:phoenix:> 
> {noformat}





[jira] [Assigned] (PHOENIX-6619) Secondary indexes on columns with a default value work incorrectly

2022-01-04 Thread Sergey Soldatov (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov reassigned PHOENIX-6619:


Assignee: Sergey Soldatov

> Secondary indexes on columns with a default value work incorrectly
> ---
>
> Key: PHOENIX-6619
> URL: https://issues.apache.org/jira/browse/PHOENIX-6619
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
>
> When we create an index on a column that has a default value, the default
> value is always used in the index table.
> {noformat}
> 0: jdbc:phoenix:> CREATE TABLE T5(I1 INTEGER NOT NULL, S1 VARCHAR NOT NULL, 
> D1 DATE  DEFAULT DATE'1900-01-01', CONSTRAINT PK PRIMARY KEY(I1,S1));
> No rows affected (1.165 seconds)
> 0: jdbc:phoenix:> upsert into T5 values (1, 'test', '1900-02-02');
> 1 row affected (0.086 seconds)
> 0: jdbc:phoenix:> create index I5 on T5 (d1);
> select * from t5; 
> 1 row affected (6.115 seconds)
> 0: jdbc:phoenix:> select * from t5;
> +----+------+------------+
> | I1 |  S1  |     D1     |
> +----+------+------------+
> | 1  | test | 1900-01-01 |
> +----+------+------------+
> 1 row selected (0.261 seconds)
> 0: jdbc:phoenix:> drop table t5;
> No rows affected (1.552 seconds)
> 0: jdbc:phoenix:> CREATE TABLE T5(I1 INTEGER NOT NULL, S1 VARCHAR NOT NULL, 
> D1 DATE  DEFAULT DATE'1900-01-01', CONSTRAINT PK PRIMARY KEY(I1,S1));
> No rows affected (1.162 seconds)
> 0: jdbc:phoenix:> upsert into T5 values (1, 'test', '1900-02-02');
> 1 row affected (0.082 seconds)
> 0: jdbc:phoenix:> select * from t5;
> +----+------+------------+
> | I1 |  S1  |     D1     |
> +----+------+------------+
> | 1  | test | 1900-02-02 |
> +----+------+------------+
> 1 row selected (0.141 seconds)
> 0: jdbc:phoenix:> create index I5 on T5 (d1);
> 1 row affected (6.065 seconds)
> 0: jdbc:phoenix:> select * from t5;
> +----+------+------------+
> | I1 |  S1  |     D1     |
> +----+------+------------+
> | 1  | test | 1900-01-01 |
> +----+------+------------+
> 1 row selected (0.268 seconds)
> 0: jdbc:phoenix:> upsert into T5 values (2, 'test2', '1900-03-03');
> 1 row affected (0.088 seconds)
> 0: jdbc:phoenix:> select * from t5;
> +----+-------+------------+
> | I1 |  S1   |     D1     |
> +----+-------+------------+
> | 1  | test  | 1900-01-01 |
> | 2  | test2 | 1900-01-01 |
> +----+-------+------------+
> 2 rows selected (0.278 seconds)
> {noformat}
> This may also lead to an exception during index creation:
> {noformat}
> 0: jdbc:phoenix:> CREATE TABLE T6 (C1 CHAR(10) NOT NULL default ' '  , I1 
> CHAR(14) DEFAULT ' ',CONSTRAINT PK PRIMARY KEY(C1));
> No rows affected (1.163 seconds)
> 0: jdbc:phoenix:> create index i6 on t6 (I1, C1);
> java.lang.ArrayIndexOutOfBoundsException: 127
>   at 
> org.apache.phoenix.index.IndexMaintainer.initCachedState(IndexMaintainer.java:1763)
>   at 
> org.apache.phoenix.index.IndexMaintainer.<init>(IndexMaintainer.java:629)
>   at 
> org.apache.phoenix.index.IndexMaintainer.create(IndexMaintainer.java:146)
>   at 
> org.apache.phoenix.schema.PTableImpl.getIndexMaintainer(PTableImpl.java:1660)
>   at 
> org.apache.phoenix.compile.ServerBuildIndexCompiler.compile(ServerBuildIndexCompiler.java:103)
>   at 
> org.apache.phoenix.schema.MetaDataClient.getMutationPlanForBuildingIndex(MetaDataClient.java:1391)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1400)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1811)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:547)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:513)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:512)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:500)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:2162)
>   at sqlline.Commands.executeSingleQuery(Commands.java:1054)
>   at sqlline.Commands.execute(Commands.java:1003)
>   at sqlline.Commands.sql(Commands.java:967)
>   at sqlline.SqlLine.dispatch(SqlLine.java:734)
>   at sqlline.SqlLine.begin(SqlLine.java:541)
>   at sqlline.SqlLine.start(SqlLine.java:267)
>   at sqlline.SqlLine.main(SqlLine.java:206)
> 0: jdbc:phoenix:> 
> {noformat}





[jira] [Updated] (PHOENIX-6619) Secondary indexes on columns with a default value work incorrectly

2022-01-04 Thread Sergey Soldatov (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-6619:
-
Component/s: core

> Secondary indexes on columns with a default value work incorrectly
> ---
>
> Key: PHOENIX-6619
> URL: https://issues.apache.org/jira/browse/PHOENIX-6619
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
>
> When we create an index on a column that has a default value, the default
> value is always used in the index table.
> {noformat}
> 0: jdbc:phoenix:> CREATE TABLE T5(I1 INTEGER NOT NULL, S1 VARCHAR NOT NULL, 
> D1 DATE  DEFAULT DATE'1900-01-01', CONSTRAINT PK PRIMARY KEY(I1,S1));
> No rows affected (1.165 seconds)
> 0: jdbc:phoenix:> upsert into T5 values (1, 'test', '1900-02-02');
> 1 row affected (0.086 seconds)
> 0: jdbc:phoenix:> create index I5 on T5 (d1);
> select * from t5; 
> 1 row affected (6.115 seconds)
> 0: jdbc:phoenix:> select * from t5;
> +----+------+------------+
> | I1 |  S1  |     D1     |
> +----+------+------------+
> | 1  | test | 1900-01-01 |
> +----+------+------------+
> 1 row selected (0.261 seconds)
> 0: jdbc:phoenix:> drop table t5;
> No rows affected (1.552 seconds)
> 0: jdbc:phoenix:> CREATE TABLE T5(I1 INTEGER NOT NULL, S1 VARCHAR NOT NULL, 
> D1 DATE  DEFAULT DATE'1900-01-01', CONSTRAINT PK PRIMARY KEY(I1,S1));
> No rows affected (1.162 seconds)
> 0: jdbc:phoenix:> upsert into T5 values (1, 'test', '1900-02-02');
> 1 row affected (0.082 seconds)
> 0: jdbc:phoenix:> select * from t5;
> +----+------+------------+
> | I1 |  S1  |     D1     |
> +----+------+------------+
> | 1  | test | 1900-02-02 |
> +----+------+------------+
> 1 row selected (0.141 seconds)
> 0: jdbc:phoenix:> create index I5 on T5 (d1);
> 1 row affected (6.065 seconds)
> 0: jdbc:phoenix:> select * from t5;
> +----+------+------------+
> | I1 |  S1  |     D1     |
> +----+------+------------+
> | 1  | test | 1900-01-01 |
> +----+------+------------+
> 1 row selected (0.268 seconds)
> 0: jdbc:phoenix:> upsert into T5 values (2, 'test2', '1900-03-03');
> 1 row affected (0.088 seconds)
> 0: jdbc:phoenix:> select * from t5;
> +----+-------+------------+
> | I1 |  S1   |     D1     |
> +----+-------+------------+
> | 1  | test  | 1900-01-01 |
> | 2  | test2 | 1900-01-01 |
> +----+-------+------------+
> 2 rows selected (0.278 seconds)
> {noformat}
> This may also lead to an exception during index creation:
> {noformat}
> 0: jdbc:phoenix:> CREATE TABLE T6 (C1 CHAR(10) NOT NULL default ' '  , I1 
> CHAR(14) DEFAULT ' ',CONSTRAINT PK PRIMARY KEY(C1));
> No rows affected (1.163 seconds)
> 0: jdbc:phoenix:> create index i6 on t6 (I1, C1);
> java.lang.ArrayIndexOutOfBoundsException: 127
>   at 
> org.apache.phoenix.index.IndexMaintainer.initCachedState(IndexMaintainer.java:1763)
>   at 
> org.apache.phoenix.index.IndexMaintainer.<init>(IndexMaintainer.java:629)
>   at 
> org.apache.phoenix.index.IndexMaintainer.create(IndexMaintainer.java:146)
>   at 
> org.apache.phoenix.schema.PTableImpl.getIndexMaintainer(PTableImpl.java:1660)
>   at 
> org.apache.phoenix.compile.ServerBuildIndexCompiler.compile(ServerBuildIndexCompiler.java:103)
>   at 
> org.apache.phoenix.schema.MetaDataClient.getMutationPlanForBuildingIndex(MetaDataClient.java:1391)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1400)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1811)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:547)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:513)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:512)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:500)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:2162)
>   at sqlline.Commands.executeSingleQuery(Commands.java:1054)
>   at sqlline.Commands.execute(Commands.java:1003)
>   at sqlline.Commands.sql(Commands.java:967)
>   at sqlline.SqlLine.dispatch(SqlLine.java:734)
>   at sqlline.SqlLine.begin(SqlLine.java:541)
>   at sqlline.SqlLine.start(SqlLine.java:267)
>   at sqlline.SqlLine.main(SqlLine.java:206)
> 0: jdbc:phoenix:> 
> {noformat}





[jira] [Updated] (PHOENIX-6619) Secondary indexes on columns with a default value work incorrectly

2022-01-04 Thread Sergey Soldatov (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-6619:
-
Affects Version/s: 5.1.2

> Secondary indexes on columns with a default value work incorrectly
> ---
>
> Key: PHOENIX-6619
> URL: https://issues.apache.org/jira/browse/PHOENIX-6619
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.2
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
>
> When we create an index on a column that has a default value, the default
> value is always used in the index table.
> {noformat}
> 0: jdbc:phoenix:> CREATE TABLE T5(I1 INTEGER NOT NULL, S1 VARCHAR NOT NULL, 
> D1 DATE  DEFAULT DATE'1900-01-01', CONSTRAINT PK PRIMARY KEY(I1,S1));
> No rows affected (1.165 seconds)
> 0: jdbc:phoenix:> upsert into T5 values (1, 'test', '1900-02-02');
> 1 row affected (0.086 seconds)
> 0: jdbc:phoenix:> create index I5 on T5 (d1);
> select * from t5; 
> 1 row affected (6.115 seconds)
> 0: jdbc:phoenix:> select * from t5;
> +----+------+------------+
> | I1 |  S1  |     D1     |
> +----+------+------------+
> | 1  | test | 1900-01-01 |
> +----+------+------------+
> 1 row selected (0.261 seconds)
> 0: jdbc:phoenix:> drop table t5;
> No rows affected (1.552 seconds)
> 0: jdbc:phoenix:> CREATE TABLE T5(I1 INTEGER NOT NULL, S1 VARCHAR NOT NULL, 
> D1 DATE  DEFAULT DATE'1900-01-01', CONSTRAINT PK PRIMARY KEY(I1,S1));
> No rows affected (1.162 seconds)
> 0: jdbc:phoenix:> upsert into T5 values (1, 'test', '1900-02-02');
> 1 row affected (0.082 seconds)
> 0: jdbc:phoenix:> select * from t5;
> +----+------+------------+
> | I1 |  S1  |     D1     |
> +----+------+------------+
> | 1  | test | 1900-02-02 |
> +----+------+------------+
> 1 row selected (0.141 seconds)
> 0: jdbc:phoenix:> create index I5 on T5 (d1);
> 1 row affected (6.065 seconds)
> 0: jdbc:phoenix:> select * from t5;
> +----+------+------------+
> | I1 |  S1  |     D1     |
> +----+------+------------+
> | 1  | test | 1900-01-01 |
> +----+------+------------+
> 1 row selected (0.268 seconds)
> 0: jdbc:phoenix:> upsert into T5 values (2, 'test2', '1900-03-03');
> 1 row affected (0.088 seconds)
> 0: jdbc:phoenix:> select * from t5;
> +----+-------+------------+
> | I1 |  S1   |     D1     |
> +----+-------+------------+
> | 1  | test  | 1900-01-01 |
> | 2  | test2 | 1900-01-01 |
> +----+-------+------------+
> 2 rows selected (0.278 seconds)
> {noformat}
> This may also lead to an exception during index creation:
> {noformat}
> 0: jdbc:phoenix:> CREATE TABLE T6 (C1 CHAR(10) NOT NULL default ' '  , I1 
> CHAR(14) DEFAULT ' ',CONSTRAINT PK PRIMARY KEY(C1));
> No rows affected (1.163 seconds)
> 0: jdbc:phoenix:> create index i6 on t6 (I1, C1);
> java.lang.ArrayIndexOutOfBoundsException: 127
>   at 
> org.apache.phoenix.index.IndexMaintainer.initCachedState(IndexMaintainer.java:1763)
>   at 
> org.apache.phoenix.index.IndexMaintainer.<init>(IndexMaintainer.java:629)
>   at 
> org.apache.phoenix.index.IndexMaintainer.create(IndexMaintainer.java:146)
>   at 
> org.apache.phoenix.schema.PTableImpl.getIndexMaintainer(PTableImpl.java:1660)
>   at 
> org.apache.phoenix.compile.ServerBuildIndexCompiler.compile(ServerBuildIndexCompiler.java:103)
>   at 
> org.apache.phoenix.schema.MetaDataClient.getMutationPlanForBuildingIndex(MetaDataClient.java:1391)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1400)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1811)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:547)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:513)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:512)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:500)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:2162)
>   at sqlline.Commands.executeSingleQuery(Commands.java:1054)
>   at sqlline.Commands.execute(Commands.java:1003)
>   at sqlline.Commands.sql(Commands.java:967)
>   at sqlline.SqlLine.dispatch(SqlLine.java:734)
>   at sqlline.SqlLine.begin(SqlLine.java:541)
>   at sqlline.SqlLine.start(SqlLine.java:267)
>   at sqlline.SqlLine.main(SqlLine.java:206)
> 0: jdbc:phoenix:> 
> {noformat}





[jira] [Created] (PHOENIX-6619) Secondary indexes on columns with a default value work incorrectly

2022-01-04 Thread Sergey Soldatov (Jira)
Sergey Soldatov created PHOENIX-6619:


 Summary: Secondary indexes on columns with a default value work
incorrectly
 Key: PHOENIX-6619
 URL: https://issues.apache.org/jira/browse/PHOENIX-6619
 Project: Phoenix
  Issue Type: Bug
Reporter: Sergey Soldatov


When we create an index on a column that has a default value, the default
value is always used in the index table.
{noformat}
0: jdbc:phoenix:> CREATE TABLE T5(I1 INTEGER NOT NULL, S1 VARCHAR NOT NULL, D1 
DATE  DEFAULT DATE'1900-01-01', CONSTRAINT PK PRIMARY KEY(I1,S1));
No rows affected (1.165 seconds)
0: jdbc:phoenix:> upsert into T5 values (1, 'test', '1900-02-02');
1 row affected (0.086 seconds)
0: jdbc:phoenix:> create index I5 on T5 (d1);
select * from t5; 
1 row affected (6.115 seconds)
0: jdbc:phoenix:> select * from t5;
+----+------+------------+
| I1 |  S1  |     D1     |
+----+------+------------+
| 1  | test | 1900-01-01 |
+----+------+------------+
1 row selected (0.261 seconds)
0: jdbc:phoenix:> drop table t5;
No rows affected (1.552 seconds)
0: jdbc:phoenix:> CREATE TABLE T5(I1 INTEGER NOT NULL, S1 VARCHAR NOT NULL, D1 
DATE  DEFAULT DATE'1900-01-01', CONSTRAINT PK PRIMARY KEY(I1,S1));
No rows affected (1.162 seconds)
0: jdbc:phoenix:> upsert into T5 values (1, 'test', '1900-02-02');
1 row affected (0.082 seconds)
0: jdbc:phoenix:> select * from t5;
+----+------+------------+
| I1 |  S1  |     D1     |
+----+------+------------+
| 1  | test | 1900-02-02 |
+----+------+------------+
1 row selected (0.141 seconds)
0: jdbc:phoenix:> create index I5 on T5 (d1);
1 row affected (6.065 seconds)
0: jdbc:phoenix:> select * from t5;
+----+------+------------+
| I1 |  S1  |     D1     |
+----+------+------------+
| 1  | test | 1900-01-01 |
+----+------+------------+
1 row selected (0.268 seconds)
0: jdbc:phoenix:> upsert into T5 values (2, 'test2', '1900-03-03');
1 row affected (0.088 seconds)
0: jdbc:phoenix:> select * from t5;
+----+-------+------------+
| I1 |  S1   |     D1     |
+----+-------+------------+
| 1  | test  | 1900-01-01 |
| 2  | test2 | 1900-01-01 |
+----+-------+------------+
2 rows selected (0.278 seconds)
{noformat}
This may also lead to an exception during index creation:
{noformat}
0: jdbc:phoenix:> CREATE TABLE T6 (C1 CHAR(10) NOT NULL default ' '  , I1 
CHAR(14) DEFAULT ' ',CONSTRAINT PK PRIMARY KEY(C1));
No rows affected (1.163 seconds)
0: jdbc:phoenix:> create index i6 on t6 (I1, C1);
java.lang.ArrayIndexOutOfBoundsException: 127
at 
org.apache.phoenix.index.IndexMaintainer.initCachedState(IndexMaintainer.java:1763)
at 
org.apache.phoenix.index.IndexMaintainer.<init>(IndexMaintainer.java:629)
at 
org.apache.phoenix.index.IndexMaintainer.create(IndexMaintainer.java:146)
at 
org.apache.phoenix.schema.PTableImpl.getIndexMaintainer(PTableImpl.java:1660)
at 
org.apache.phoenix.compile.ServerBuildIndexCompiler.compile(ServerBuildIndexCompiler.java:103)
at 
org.apache.phoenix.schema.MetaDataClient.getMutationPlanForBuildingIndex(MetaDataClient.java:1391)
at 
org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1400)
at 
org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1811)
at 
org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:547)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:513)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:512)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:500)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:2162)
at sqlline.Commands.executeSingleQuery(Commands.java:1054)
at sqlline.Commands.execute(Commands.java:1003)
at sqlline.Commands.sql(Commands.java:967)
at sqlline.SqlLine.dispatch(SqlLine.java:734)
at sqlline.SqlLine.begin(SqlLine.java:541)
at sqlline.SqlLine.start(SqlLine.java:267)
at sqlline.SqlLine.main(SqlLine.java:206)
0: jdbc:phoenix:> 
{noformat}
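
The first T5 transcript above can also be reproduced through plain JDBC; a
minimal sketch (the connection URL is a placeholder for a real quorum):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DefaultValueIndexRepro {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE T5(I1 INTEGER NOT NULL, S1 VARCHAR NOT NULL, "
                + "D1 DATE DEFAULT DATE'1900-01-01', CONSTRAINT PK PRIMARY KEY(I1,S1))");
            st.execute("UPSERT INTO T5 VALUES (1, 'test', '1900-02-02')");
            conn.commit();
            st.execute("CREATE INDEX I5 ON T5 (D1)");
            try (ResultSet rs = st.executeQuery("SELECT * FROM T5")) {
                while (rs.next()) {
                    // Expected 1900-02-02; with the bug this prints 1900-01-01.
                    System.out.println(rs.getInt("I1") + " " + rs.getDate("D1"));
                }
            }
        }
    }
}
{code}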






[jira] [Updated] (PHOENIX-6579) ACL check doesn't honor the namespace mapping for mapped views.

2021-10-22 Thread Sergey Soldatov (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-6579:
-
Description: 
When namespace mapping and ACLs are enabled and a user tries to create a view
on top of an existing HBase table, the query fails if they don't have
permissions for the default namespace.
{noformat}
*Error: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
permissions (user=admin/ad...@example.com, scope=default:my_ns.my_table, 
action=[READ])
 at 
org.apache.phoenix.coprocessor.PhoenixAccessController.requireAccess(PhoenixAccessController.java:606)
 at 
org.apache.phoenix.coprocessor.PhoenixAccessController.preCreateTable(PhoenixAccessController.java:201)
 at 
org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$2.call(PhoenixMetaDataCoprocessorHost.java:171)
 at 
org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$2.call(PhoenixMetaDataCoprocessorHost.java:168)
 at 
org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$PhoenixObserverOperation.callObserver(PhoenixMetaDataCoprocessorHost.java:86)
 at 
org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.execOperation(PhoenixMetaDataCoprocessorHost.java:106)
 at 
org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.preCreateTable(PhoenixMetaDataCoprocessorHost.java:168)
 at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1900)
 at 
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17317)
 at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8313)
 at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2499)
 at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2481)
 at 
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42286)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:418)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318) 
(state=08000,code=101)
 {noformat}
That happens because the MetaData endpoint implementation still uses
_SchemaUtil.getTableNameAsBytes(schemaName, tableName)_ for the mapped view,
which knows nothing about namespace mapping, so the ACL check goes against
'default:schema.table'. It can easily be fixed by replacing the call with
_SchemaUtil.getPhysicalHBaseTableName(schemaName, tableName,
isNamespaceMapped).getBytes();_

  was:
When namespace mapping and ACLs are enabled and a user tries to create a view
on top of an existing HBase table, the query fails if they don't have
permissions for the default namespace.
{noformat}
*Error: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
permissions (user=admin/ad...@coelab.cloudera.com, 
scope=default:my_ns.my_table, action=[READ])
 at 
org.apache.phoenix.coprocessor.PhoenixAccessController.requireAccess(PhoenixAccessController.java:606)
 at 
org.apache.phoenix.coprocessor.PhoenixAccessController.preCreateTable(PhoenixAccessController.java:201)
 at 
org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$2.call(PhoenixMetaDataCoprocessorHost.java:171)
 at 
org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$2.call(PhoenixMetaDataCoprocessorHost.java:168)
 at 
org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$PhoenixObserverOperation.callObserver(PhoenixMetaDataCoprocessorHost.java:86)
 at 
org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.execOperation(PhoenixMetaDataCoprocessorHost.java:106)
 at 
org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.preCreateTable(PhoenixMetaDataCoprocessorHost.java:168)
 at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1900)
 at 
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17317)
 at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8313)
 at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2499)
 at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2481)
 at 
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42286)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:418)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318) 
(state=08000,code=101)
 {noformat}
That happens because the MetaData endpoint implementation still uses
_SchemaUtil.getTableNameAsBytes(schemaName, tableName)_ for the mapped view,
which knows nothing about namespace mapping, so the ACL check goes against
'default:schema.table'. It can easily be fixed by replacing the call with
_SchemaUtil.getPhysicalHBaseTableName(schemaName, tableName,
isNamespaceMapped).getBytes();_

[jira] [Assigned] (PHOENIX-6579) ACL check doesn't honor the namespace mapping for mapped views.

2021-10-22 Thread Sergey Soldatov (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov reassigned PHOENIX-6579:


Assignee: Sergey Soldatov

> ACL check doesn't honor the namespace mapping for mapped views.
> ---
>
> Key: PHOENIX-6579
> URL: https://issues.apache.org/jira/browse/PHOENIX-6579
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.2
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
>
> When namespace mapping and ACLs are enabled and a user tries to create a
> view on top of an existing HBase table, the query fails if they don't have
> permissions for the default namespace.
> {noformat}
> *Error: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions (user=admin/ad...@coelab.cloudera.com, 
> scope=default:my_ns.my_table, action=[READ])
>  at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.requireAccess(PhoenixAccessController.java:606)
>  at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.preCreateTable(PhoenixAccessController.java:201)
>  at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$2.call(PhoenixMetaDataCoprocessorHost.java:171)
>  at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$2.call(PhoenixMetaDataCoprocessorHost.java:168)
>  at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$PhoenixObserverOperation.callObserver(PhoenixMetaDataCoprocessorHost.java:86)
>  at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.execOperation(PhoenixMetaDataCoprocessorHost.java:106)
>  at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.preCreateTable(PhoenixMetaDataCoprocessorHost.java:168)
>  at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1900)
>  at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17317)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8313)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2499)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2481)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42286)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:418)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318) 
> (state=08000,code=101)
>  {noformat}
> That happens because the MetaData endpoint implementation still uses
> _SchemaUtil.getTableNameAsBytes(schemaName, tableName)_ for the mapped view,
> which knows nothing about namespace mapping, so the ACL check goes against
> 'default:schema.table'. It can easily be fixed by replacing the call with
> _SchemaUtil.getPhysicalHBaseTableName(schemaName, tableName,
> isNamespaceMapped).getBytes();_





[jira] [Created] (PHOENIX-6579) ACL check doesn't honor the namespace mapping for mapped views.

2021-10-22 Thread Sergey Soldatov (Jira)
Sergey Soldatov created PHOENIX-6579:


 Summary: ACL check doesn't honor the namespace mapping for mapped 
views.
 Key: PHOENIX-6579
 URL: https://issues.apache.org/jira/browse/PHOENIX-6579
 Project: Phoenix
  Issue Type: Bug
  Components: core
Affects Versions: 5.1.2
Reporter: Sergey Soldatov


When namespace mapping and ACLs are enabled and a user tries to create a view
on top of an existing HBase table, the query fails if they don't have
permissions for the default namespace.
{noformat}
*Error: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
permissions (user=admin/ad...@coelab.cloudera.com, 
scope=default:my_ns.my_table, action=[READ])
 at 
org.apache.phoenix.coprocessor.PhoenixAccessController.requireAccess(PhoenixAccessController.java:606)
 at 
org.apache.phoenix.coprocessor.PhoenixAccessController.preCreateTable(PhoenixAccessController.java:201)
 at 
org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$2.call(PhoenixMetaDataCoprocessorHost.java:171)
 at 
org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$2.call(PhoenixMetaDataCoprocessorHost.java:168)
 at 
org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$PhoenixObserverOperation.callObserver(PhoenixMetaDataCoprocessorHost.java:86)
 at 
org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.execOperation(PhoenixMetaDataCoprocessorHost.java:106)
 at 
org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.preCreateTable(PhoenixMetaDataCoprocessorHost.java:168)
 at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1900)
 at 
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17317)
 at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8313)
 at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2499)
 at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2481)
 at 
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42286)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:418)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318) 
(state=08000,code=101)
 {noformat}
That happens because the MetaData endpoint implementation still uses
_SchemaUtil.getTableNameAsBytes(schemaName, tableName)_ for the mapped view,
which knows nothing about namespace mapping, so the ACL check goes against
'default:schema.table'. It can easily be fixed by replacing the call with
_SchemaUtil.getPhysicalHBaseTableName(schemaName, tableName,
isNamespaceMapped).getBytes();_





[jira] [Created] (PHOENIX-5009) Views don't handle subqueries correctly

2018-11-09 Thread Sergey Soldatov (JIRA)
Sergey Soldatov created PHOENIX-5009:


 Summary: Views don't handle subqueries correctly
 Key: PHOENIX-5009
 URL: https://issues.apache.org/jira/browse/PHOENIX-5009
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.15.0
Reporter: Sergey Soldatov


We allow subqueries in views, but we don't handle them as we usually do in
regular queries. Some examples:
1. Type compatibility:
{code}
0: jdbc:phoenix:> create table a (id integer primary key, v varchar);
No rows affected (2.357 seconds)
0: jdbc:phoenix:> create view view1 as select * from A where v > (select 
current_time());
No rows affected (0.046 seconds)
0: jdbc:phoenix:> select * from view1;
Error: ERROR 203 (22005): Type mismatch. VARCHAR and TIME ARRAY for V > 
ARRAY['2018-11-09 19:33:17.977'] (state=22005,code=203)
org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type 
mismatch. VARCHAR and TIME ARRAY for V > ARRAY['2018-11-09 19:33:17.977']
at 
org.apache.phoenix.schema.TypeMismatchException.newException(TypeMismatchException.java:53)
at 
org.apache.phoenix.expression.ComparisonExpression.create(ComparisonExpression.java:133)
at 
org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:234)
at 
org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:146)
at 
org.apache.phoenix.parse.ComparisonParseNode.accept(ComparisonParseNode.java:47)
at 
org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:147)
at 
org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:108)
at 
org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:237)
at 
org.apache.phoenix.execute.DelegateQueryPlan.iterator(DelegateQueryPlan.java:144)
at 
org.apache.phoenix.execute.DelegateQueryPlan.iterator(DelegateQueryPlan.java:139)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:312)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:291)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:290)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:283)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1830)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:813)
at sqlline.SqlLine.begin(SqlLine.java:686)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)
{code}
2. Single-row subquery:
{code}
0: jdbc:phoenix:> create view view2 as select * from A where v > (select 
to_char(current_time()));
No rows affected (0.036 seconds)
0: jdbc:phoenix:> select * from view2;
Error: ERROR 203 (22005): Type mismatch. VARCHAR and VARCHAR ARRAY for V > 
ARRAY['2018-11-09 19:34:37.766'] (state=22005,code=203)
org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type 
mismatch. VARCHAR and VARCHAR ARRAY for V > ARRAY['2018-11-09 19:34:37.766']
at 
org.apache.phoenix.schema.TypeMismatchException.newException(TypeMismatchException.java:53)
at 
org.apache.phoenix.expression.ComparisonExpression.create(ComparisonExpression.java:133)
at 
org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:234)
at 
org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:146)
at 
org.apache.phoenix.parse.ComparisonParseNode.accept(ComparisonParseNode.java:47)
at 
org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:147)
at 
org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:108)
at 
org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:237)
at 
org.apache.phoenix.execute.DelegateQueryPlan.iterator(DelegateQueryPlan.java:144)
at 
org.apache.phoenix.execute.DelegateQueryPlan.iterator(DelegateQueryPlan.java:139)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:312)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:291)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:290)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:283)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1830)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:813)
at sqlline.SqlLine.begin(SqlLine.java:686)
at 

[jira] [Updated] (PHOENIX-3236) Problem with shading Apache Commons on Azure.

2018-08-24 Thread Sergey Soldatov (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-3236:
-
Fix Version/s: 5.1.0
   4.15.0

> Problem with shading Apache Commons on Azure.
> -
>
> Key: PHOENIX-3236
> URL: https://issues.apache.org/jira/browse/PHOENIX-3236
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-3236-1.patch
>
>
> When Phoenix is used on top of Azure FS, the following exception may happen 
> during execution of sqlline:
> {noformat}
> Caused by: java.lang.AbstractMethodError: 
> org.apache.hadoop.metrics2.sink.WasbAzureIaasSink.init(Lorg/apache/phoenix/shaded/org/apache/commons/configuration/SubsetConfiguration;)V
> at 
> org.apache.hadoop.metrics2.impl.MetricsConfig.getPlugin(MetricsConfig.java:199)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.newSink(MetricsSystemImpl.java:529)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configureSinks(MetricsSystemImpl.java:501)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configure(MetricsSystemImpl.java:480)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:189)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:164)
> at 
> org.apache.hadoop.fs.azure.metrics.AzureFileSystemMetricsSystem.fileSystemStarted(AzureFileSystemMetricsSystem.java:41)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.initialize(NativeAzureFileSystem.java:1155)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2736)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2770)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2752)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:386)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:179)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
> at 
> org.apache.hadoop.hbase.util.DynamicClassLoader.initTempDir(DynamicClassLoader.java:118)
> at 
> org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:98)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:249)
> at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
> at 
> org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:75)
> at 
> org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:105)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:886)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:642)
> ... 32 more
> {noformat}
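
The AbstractMethodError is the classic symptom of a shaded type leaking into an
unshaded plugin contract. A simplified sketch of the mismatch (hypothetical
interface declarations mirroring the signatures in the stack trace, not the
real Hadoop sources; compiling it requires commons-configuration and the shaded
phoenix-client jar on the classpath):

{code:java}
// What the Azure sink was compiled against: the unshaded
// commons-configuration type in Hadoop's MetricsSink contract.
interface UnshadedMetricsSink {
    void init(org.apache.commons.configuration.SubsetConfiguration conf);
}

// What Phoenix's shaded classpath dispatches through after relocation: the
// org.apache.phoenix.shaded... type seen in the stack trace above.
interface ShadedMetricsSink {
    void init(org.apache.phoenix.shaded.org.apache.commons.configuration
                  .SubsetConfiguration conf);
}

// WasbAzureIaasSink implements only the first signature, so a call through
// the relocated one finds no implementation and throws AbstractMethodError.
{code}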





[jira] [Updated] (PHOENIX-3991) ROW_TIMESTAMP on TIMESTAMP column type throws ArrayIndexOutOfBoundsException when upserting without providing a value.

2018-07-25 Thread Sergey Soldatov (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-3991:
-
Attachment: PHOENIX-3991-1.patch

> ROW_TIMESTAMP on TIMESTAMP column type throws ArrayIndexOutOfBoundsException
> when upserting without providing a value.
> ---
>
> Key: PHOENIX-3991
> URL: https://issues.apache.org/jira/browse/PHOENIX-3991
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Eric Belanger
>Assignee: Sergey Soldatov
>Priority: Major
> Attachments: PHOENIX-3991-1.patch
>
>
> {code:sql}
> CREATE TABLE TEST (
>   CREATED TIMESTAMP NOT NULL,
>   ID CHAR(36) NOT NULL,
>   DEFINITION VARCHAR,
>   CONSTRAINT TEST_PK PRIMARY KEY (CREATED ROW_TIMESTAMP, ID)
> )
> -- WORKS
> UPSERT INTO TEST (CREATED, ID, DEFINITION) VALUES (NOW(), 'A', 'DEFINITION 
> A');
> -- ArrayIndexOutOfBoundsException
> UPSERT INTO TEST (ID, DEFINITION) VALUES ('A', 'DEFINITION A');
> {code}
> Stack Trace:
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 8
>   at 
> org.apache.phoenix.execute.MutationState.getNewRowKeyWithRowTimestamp(MutationState.java:554)
>   at 
> org.apache.phoenix.execute.MutationState.generateMutations(MutationState.java:640)
>   at 
> org.apache.phoenix.execute.MutationState.addRowMutations(MutationState.java:572)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1003)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1469)
>   at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1301)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:539)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:536)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:536)
> {noformat}





[jira] [Assigned] (PHOENIX-3991) ROW_TIMESTAMP on TIMESTAMP column type throws ArrayIndexOutOfBoundsException when upserting without providing a value.

2018-07-25 Thread Sergey Soldatov (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov reassigned PHOENIX-3991:


Assignee: Sergey Soldatov

> ROW_TIMESTAMP on TIMESTAMP column type throws ArrayIndexOutOfBoundsException
> when upserting without providing a value.
> ---
>
> Key: PHOENIX-3991
> URL: https://issues.apache.org/jira/browse/PHOENIX-3991
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Eric Belanger
>Assignee: Sergey Soldatov
>Priority: Major
>
> {code:sql}
> CREATE TABLE TEST (
>   CREATED TIMESTAMP NOT NULL,
>   ID CHAR(36) NOT NULL,
>   DEFINITION VARCHAR,
>   CONSTRAINT TEST_PK PRIMARY KEY (CREATED ROW_TIMESTAMP, ID)
> )
> -- WORKS
> UPSERT INTO TEST (CREATED, ID, DEFINITION) VALUES (NOW(), 'A', 'DEFINITION 
> A');
> -- ArrayIndexOutOfBoundsException
> UPSERT INTO TEST (ID, DEFINITION) VALUES ('A', 'DEFINITION A');
> {code}
> Stack Trace:
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 8
>   at 
> org.apache.phoenix.execute.MutationState.getNewRowKeyWithRowTimestamp(MutationState.java:554)
>   at 
> org.apache.phoenix.execute.MutationState.generateMutations(MutationState.java:640)
>   at 
> org.apache.phoenix.execute.MutationState.addRowMutations(MutationState.java:572)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1003)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1469)
>   at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1301)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:539)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:536)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:536)
> {noformat}





[jira] [Commented] (PHOENIX-2341) Rename in ALTER statement

2018-07-13 Thread Sergey Soldatov (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-2341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543731#comment-16543731
 ] 

Sergey Soldatov commented on PHOENIX-2341:
--

Obviously, that should be easy to do for encoded column names. 

> Rename in ALTER statement
> -
>
> Key: PHOENIX-2341
> URL: https://issues.apache.org/jira/browse/PHOENIX-2341
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: alex kamil
>Priority: Minor
>  Labels: newbie
>
> Add RENAME functionality in ALTER statement (e.g. similar to PostgreSQL): 
> ALTER TABLE name 
> RENAME column TO new_column
> ALTER TABLE name
> RENAME TO new_name
> ALTER TABLE name
> SET SCHEMA new_schema
> Reference: http://www.postgresql.org/docs/9.1/static/sql-altertable.html
> Related: PHOENIX-152, PHOENIX-1598, PHOENIX-1940





[jira] [Commented] (PHOENIX-4807) Document query behavior when statistics unavailable for some table regions.

2018-07-11 Thread Sergey Soldatov (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16540663#comment-16540663
 ] 

Sergey Soldatov commented on PHOENIX-4807:
--

I've updated the page with a known issues section. Not sure whether a real 
example is required, because the end user usually doesn't care about how it 
works under the hood.

> Document query behavior when statistics unavailable for some table regions.
> ---
>
> Key: PHOENIX-4807
> URL: https://issues.apache.org/jira/browse/PHOENIX-4807
> Project: Phoenix
>  Issue Type: Task
>Reporter: Guru Prateek Pinnadhari
>Assignee: Sergey Soldatov
>Priority: Major
>
> When troubleshooting an issue where duplicate records were returned for a
> select query on the same PK, the behavior seen was that the query plan's
> collection of Scan objects led to the generation of overlapping scans on a
> salted table's regions.
> A contributing factor was statistics being unavailable for the terminal
> regions of the table. Filing this Jira to add a note about this on
> [https://phoenix.apache.org/update_statistics.html]





[jira] [Resolved] (PHOENIX-4807) Document query behavior when statistics unavailable for some table regions.

2018-07-11 Thread Sergey Soldatov (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov resolved PHOENIX-4807.
--
Resolution: Fixed

> Document query behavior when statistics unavailable for some table regions.
> ---
>
> Key: PHOENIX-4807
> URL: https://issues.apache.org/jira/browse/PHOENIX-4807
> Project: Phoenix
>  Issue Type: Task
>Reporter: Guru Prateek Pinnadhari
>Assignee: Sergey Soldatov
>Priority: Major
>
> When troubleshooting an issue where duplicate records were returned for a
> select query on the same PK, the behavior seen was that the query plan's
> collection of Scan objects led to the generation of overlapping scans on a
> salted table's regions.
> A contributing factor was statistics being unavailable for the terminal
> regions of the table. Filing this Jira to add a note about this on
> [https://phoenix.apache.org/update_statistics.html]





[jira] [Commented] (PHOENIX-4751) Support client-side hash aggregation with SORT_MERGE_JOIN

2018-07-09 Thread Sergey Soldatov (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16537754#comment-16537754
 ] 

Sergey Soldatov commented on PHOENIX-4751:
--

[~sangudi] you may squash your commits and open a new PR. At least that's the
way I do it :)

> Support client-side hash aggregation with SORT_MERGE_JOIN
> -
>
> Key: PHOENIX-4751
> URL: https://issues.apache.org/jira/browse/PHOENIX-4751
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0, 4.13.1
>Reporter: Gerald Sangudi
>Priority: Major
>
> A GROUP BY that follows a SORT_MERGE_JOIN should be able to use hash 
> aggregation in some cases, for improved performance.
> When a GROUP BY follows a SORT_MERGE_JOIN, the GROUP BY does not use hash 
> aggregation. It instead performs a CLIENT SORT followed by a CLIENT 
> AGGREGATE. The performance can be improved if (a) the GROUP BY output does 
> not need to be sorted, and (b) the GROUP BY input is large enough and has low 
> cardinality.
> The hash aggregation can initially be a hint. Here is an example from Phoenix 
> 4.13.1 that would benefit from hash aggregation if the GROUP BY input is 
> large with low cardinality.
> CREATE TABLE unsalted (
>  keyA BIGINT NOT NULL,
>  keyB BIGINT NOT NULL,
>  val SMALLINT,
>  CONSTRAINT pk PRIMARY KEY (keyA, keyB)
>  );
> EXPLAIN
>  SELECT /*+ USE_SORT_MERGE_JOIN */ 
>  t1.val v1, t2.val v2, COUNT(*) c 
>  FROM unsalted t1 JOIN unsalted t2 
>  ON (t1.keyA = t2.keyA) 
>  GROUP BY t1.val, t2.val;
>  
> +------------------------------------------------------------+----------------+---------------+
> | PLAN                                                       | EST_BYTES_READ | EST_ROWS_READ |
> +------------------------------------------------------------+----------------+---------------+
> | SORT-MERGE-JOIN (INNER) TABLES                             | null           | null          |
> |     CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER UNSALTED  | null           | null          |
> | AND                                                        | null           | null          |
> |     CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER UNSALTED  | null           | null          |
> | CLIENT SORTED BY [TO_DECIMAL(T1.VAL), T2.VAL]              | null           | null          |
> | CLIENT AGGREGATE INTO DISTINCT ROWS BY [T1.VAL, T2.VAL]    | null           | null          |
> +------------------------------------------------------------+----------------+---------------+
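For readers following along, here is a minimal, self-contained sketch in plain Java (Java 16+; this is not Phoenix internals, and the Key shape is an assumption) of what client-side hash aggregation buys over the CLIENT SORT + CLIENT AGGREGATE plan above: counts are accumulated in a hash map, so no ordering of the join output is needed and memory stays proportional to the number of distinct groups.

{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class HashAggregationSketch {
    // One (v1, v2) group key from the join output; the shape is illustrative.
    record Key(int v1, int v2) { }

    // Hash aggregation: no sort needed, one map entry per distinct key.
    static Map<Key, Long> hashAggregate(Iterable<Key> joinOutput) {
        Map<Key, Long> counts = new HashMap<>();
        for (Key k : joinOutput) {
            counts.merge(k, 1L, Long::sum);
        }
        return counts; // low cardinality => small map, even for large inputs
    }

    public static void main(String[] args) {
        List<Key> rows = List.of(new Key(1, 2), new Key(1, 2), new Key(3, 4));
        // e.g. {Key[v1=1, v2=2]=2, Key[v1=3, v2=4]=1} (map order unspecified)
        System.out.println(hashAggregate(rows));
    }
}
{code}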



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4751) Support client-side hash aggregation with SORT_MERGE_JOIN

2018-07-09 Thread Sergey Soldatov (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16537714#comment-16537714
 ] 

Sergey Soldatov commented on PHOENIX-4751:
--

[~sangudi] Could you please remove the unnecessary commits from the review (I 
mean PHOENIX-4789/4785)?

> Support client-side hash aggregation with SORT_MERGE_JOIN
> -
>
> Key: PHOENIX-4751
> URL: https://issues.apache.org/jira/browse/PHOENIX-4751
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0, 4.13.1
>Reporter: Gerald Sangudi
>Priority: Major
>
> A GROUP BY that follows a SORT_MERGE_JOIN should be able to use hash 
> aggregation in some cases, for improved performance.
> When a GROUP BY follows a SORT_MERGE_JOIN, the GROUP BY does not use hash 
> aggregation. It instead performs a CLIENT SORT followed by a CLIENT 
> AGGREGATE. The performance can be improved if (a) the GROUP BY output does 
> not need to be sorted, and (b) the GROUP BY input is large enough and has low 
> cardinality.
> The hash aggregation can initially be a hint. Here is an example from Phoenix 
> 4.13.1 that would benefit from hash aggregation if the GROUP BY input is 
> large with low cardinality.
> CREATE TABLE unsalted (
>  keyA BIGINT NOT NULL,
>  keyB BIGINT NOT NULL,
>  val SMALLINT,
>  CONSTRAINT pk PRIMARY KEY (keyA, keyB)
>  );
> EXPLAIN
>  SELECT /*+ USE_SORT_MERGE_JOIN */ 
>  t1.val v1, t2.val v2, COUNT(\*) c 
>  FROM unsalted t1 JOIN unsalted t2 
>  ON (t1.keyA = t2.keyA) 
>  GROUP BY t1.val, t2.val;
>  
> +------------------------------------------------------------+----------------+---------------+
> | PLAN                                                       | EST_BYTES_READ | EST_ROWS_READ |
> +------------------------------------------------------------+----------------+---------------+
> | SORT-MERGE-JOIN (INNER) TABLES                             | null           | null          |
> |     CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER UNSALTED  | null           | null          |
> | AND                                                        | null           | null          |
> |     CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER UNSALTED  | null           | null          |
> | CLIENT SORTED BY [TO_DECIMAL(T1.VAL), T2.VAL]              | null           | null          |
> | CLIENT AGGREGATE INTO DISTINCT ROWS BY [T1.VAL, T2.VAL]    | null           | null          |
> +------------------------------------------------------------+----------------+---------------+



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4806) make PhoenixStorageHandler use only unmanaged tables

2018-07-09 Thread Sergey Soldatov (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16537484#comment-16537484
 ] 

Sergey Soldatov commented on PHOENIX-4806:
--

Actually, the Hive version doesn't matter. This approach would work with 2.x 
versions as well. The idea is to have the same behavior for both the master and 
5.0 branches.

> make PhoenixStorageHandler use only unmanaged tables
> -
>
> Key: PHOENIX-4806
> URL: https://issues.apache.org/jira/browse/PHOENIX-4806
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
>
> For managed tables Hive relies heavily on internal statistics, which are not 
> updated properly when data is changed from outside. So all custom storage 
> handlers are moving to a scheme where only unmanaged (external) tables are 
> used.
> The suggested lifecycle:
> 1) All tables should be created with the EXTERNAL keyword
> 2) If the Phoenix table doesn't exist, we create a new one and add a flag 
> that it should be deleted on DROP TABLE from Hive
> 3) If the Phoenix table exists, no changes to the existing behavior.
> Some of the storage handlers that are part of the Hive distribution already 
> use this scheme. Others are in progress.
> FYI [~elserj]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4806) make PhoenixStorageHandler use only unmanaged tables

2018-07-09 Thread Sergey Soldatov (JIRA)
Sergey Soldatov created PHOENIX-4806:


 Summary: make PhoenixStorageHandler use only unmanaged tables
 Key: PHOENIX-4806
 URL: https://issues.apache.org/jira/browse/PHOENIX-4806
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.0, 5.0.0
Reporter: Sergey Soldatov
Assignee: Sergey Soldatov


For managed tables Hive relies heavily on internal statistics, which are not 
updated properly when data is changed from outside. So all custom storage 
handlers are moving to a scheme where only unmanaged (external) tables are 
used.
The suggested lifecycle:
1) All tables should be created with the EXTERNAL keyword
2) If the Phoenix table doesn't exist, we create a new one and add a flag that 
it should be deleted on DROP TABLE from Hive
3) If the Phoenix table exists, no changes to the existing behavior.

Some of the storage handlers that are part of the Hive distribution already 
use this scheme. Others are in progress.

FYI [~elserj]
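As a rough illustration of step 1 of the lifecycle above, here is a hedged JDBC sketch (the HiveServer2 URL, column list, and table names are assumptions for illustration, not taken from a patch) of creating an external Hive table backed by the Phoenix storage handler:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateExternalPhoenixTable {
    public static void main(String[] args) throws Exception {
        // HiveServer2 URL is an assumption; the Hive JDBC driver must be on the classpath.
        try (Connection conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default");
             Statement stmt = conn.createStatement()) {
            // EXTERNAL keyword: Hive does not own the data, so its internal
            // statistics are not trusted for this table.
            stmt.execute(
                "CREATE EXTERNAL TABLE demo (id INT, name STRING) "
              + "STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler' "
              + "TBLPROPERTIES ("
              + "  'phoenix.table.name' = 'DEMO',"
              + "  'phoenix.zookeeper.quorum' = 'localhost'"
              + ")");
        }
    }
}
{code}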



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4788) Shade Joda libraries in phoenix-server to avoid conflict with hbase shell

2018-06-22 Thread Sergey Soldatov (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16520846#comment-16520846
 ] 

Sergey Soldatov commented on PHOENIX-4788:
--

LGTM +1

> Shade Joda libraries in phoenix-server to avoid conflict with hbase shell
> -
>
> Key: PHOENIX-4788
> URL: https://issues.apache.org/jira/browse/PHOENIX-4788
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0-alpha
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4788.patch
>
>
> HBase-2.0 shell doesn't work if phoenix-server.jar is in classpath
> {code:java}
> RuntimeError: Can't load hbase shell command: list_snapshots. Error: Java 
> method not found: 
> org.joda.time.DateTime.compareTo(org.joda.time.ReadableInstant)
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/date.rb:1406:in 
> `'
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/date.rb:1405:in 
> `'
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/date.rb:229:in `'
> org/jruby/RubyKernel.java:956:in `require'
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/rubygems/core_ext/kernel_require.rb:55:in
>  `require'
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/time.rb:1:in `'
> org/jruby/RubyKernel.java:956:in `require'
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/rubygems/core_ext/kernel_require.rb:55:in
>  `require'
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/time.rb:3:in `'
> org/jruby/RubyKernel.java:956:in `require'
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/rubygems/core_ext/kernel_require.rb:55:in
>  `require'
> /Users/asinghal/git/hortonworks/hbase/hbase-shell/src/main/ruby/shell.rb:41:in
>  `load_command'
> /Users/asinghal/git/hortonworks/hbase/hbase-shell/src/main/ruby/shell.rb:66:in
>  `block in load_command_group'
> org/jruby/RubyArray.java:1735:in `each'
> /Users/asinghal/git/hortonworks/hbase/hbase-shell/src/main/ruby/shell/commands/list_snapshots.rb:1:in
>  `(root)'
> /Users/asinghal/git/hortonworks/hbase/hbase-shell/src/main/ruby/shell/commands/list_snapshots.rb:19:in
>  `'
> org/jruby/RubyKernel.java:956:in `require'
> /Users/asinghal/git/hortonworks/hbase/hbase-shell/src/main/ruby/shell.rb:1:in 
> `(root)'{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4781) Phoenix client project's jar naming convention causes maven-deploy-plugin to fail

2018-06-14 Thread Sergey Soldatov (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16513165#comment-16513165
 ] 

Sergey Soldatov commented on PHOENIX-4781:
--

We should change phoenix_utils.py as well because it looks for 
phoenix-*-client.jar to run sqlline. I'm not very familiar with 
maven-deploy-plugin, but is there a chance to force it to use our names using 
the deploy:deploy-file goal?

> Phoenix client project's jar naming convention causes maven-deploy-plugin to 
> fail
> -
>
> Key: PHOENIX-4781
> URL: https://issues.apache.org/jira/browse/PHOENIX-4781
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Karan Mehta
>Priority: Major
> Attachments: PHOENIX-4781.001.patch
>
>
> `maven-deploy-plugin` is used for deploying built artifacts to the repository 
> provided by the `distributionManagement` tag. The names of the files that 
> need to be uploaded are either derived from the pom file of the project, or 
> the plugin generates a temporary one on its own.
> For the `phoenix-client` project, we essentially create a shaded uber jar 
> that contains all dependencies and provide the project pom file for the 
> plugin to work. `maven-jar-plugin` is disabled for the project, hence the 
> shade plugin essentially packages the jar. The final name of the shaded jar 
> is defined as `phoenix-${project.version}-client`, which differs from the 
> standard maven convention based on the pom file (artifact and group id), 
> `phoenix-client-${project.version}`.
> This causes `maven-deploy-plugin` to fail since it is unable to find any 
> artifacts to be published.
> `maven-install-plugin` works correctly and hence installs the correct jar in 
> the local repo.
> The same holds for the `phoenix-pig` project as well. However, we do require 
> the jar for that project in the repo. I am not even sure why we create a 
> shaded jar for that project.
> I will put up a 3-line patch for the same.
> Any thoughts? [~sergey.soldatov] [~elserj]
> Files before change (first col is size):
> {code:java}
> 103487701 Jun 13 22:47 
> phoenix-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT-client.jar{code}
> Files after change (first col is size):
> {code:java}
> 3640 Jun 13 21:23 
> original-phoenix-client-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT.jar
> 103487702 Jun 13 21:24 
> phoenix-client-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT.jar{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4780) HTable.batch() doesn't handle TableNotFound correctly.

2018-06-12 Thread Sergey Soldatov (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov resolved PHOENIX-4780.
--
Resolution: Invalid

Oops. wrong project :)

> HTable.batch() doesn't handle TableNotFound correctly.
> --
>
> Key: PHOENIX-4780
> URL: https://issues.apache.org/jira/browse/PHOENIX-4780
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Minor
>
> batch() as well as delete() are processed using AsyncRequest. To report 
> problems we use RetriesExhaustedWithDetailsException, and there is no special 
> handling for the TableNotFound exception. So the final result of running 
> batch or delete operations against a non-existent table looks really weird 
> and misleading:
> {noformat}
> hbase(main):003:0> delete 't1', 'r1', 'c1'
> 2018-06-12 15:02:50,742 ERROR [main] client.AsyncRequestFutureImpl: Cannot 
> get replica 0 location for 
> {"totalColumns":1,"row":"r1","families":{"c1":[{"qualifier":"","vlen":0,"tag":[],"timestamp":9223372036854775807}]},"ts":9223372036854775807}
> ERROR: Failed 1 action: t1: 1 time, servers with issues: null
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4780) HTable.batch() doesn't handle TableNotFound correctly.

2018-06-12 Thread Sergey Soldatov (JIRA)
Sergey Soldatov created PHOENIX-4780:


 Summary: HTable.batch() doesn't handle TableNotFound correctly.
 Key: PHOENIX-4780
 URL: https://issues.apache.org/jira/browse/PHOENIX-4780
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Sergey Soldatov
Assignee: Sergey Soldatov


batch() as well as delete() are processed using AsyncRequest. To report 
problems we use RetriesExhaustedWithDetailsException, and there is no special 
handling for the TableNotFound exception. So the final result of running batch 
or delete operations against a non-existent table looks really weird and 
misleading:
{noformat}
hbase(main):003:0> delete 't1', 'r1', 'c1'
2018-06-12 15:02:50,742 ERROR [main] client.AsyncRequestFutureImpl: Cannot get 
replica 0 location for 
{"totalColumns":1,"row":"r1","families":{"c1":[{"qualifier":"","vlen":0,"tag":[],"timestamp":9223372036854775807}]},"ts":9223372036854775807}

ERROR: Failed 1 action: t1: 1 time, servers with issues: null
{noformat}
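For illustration, a minimal sketch of the kind of special handling the report asks for, against the HBase client API (the helper name and error message are assumptions, not an actual patch): walk the per-action causes of RetriesExhaustedWithDetailsException and surface TableNotFoundException explicitly.

{code:java}
import java.util.List;
import org.apache.hadoop.hbase.TableNotFoundException;
import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException;
import org.apache.hadoop.hbase.client.Row;
import org.apache.hadoop.hbase.client.Table;

public class BatchErrorHandlingSketch {
    static void batchWithClearErrors(Table table, List<Row> actions) throws Exception {
        Object[] results = new Object[actions.size()];
        try {
            table.batch(actions, results);
        } catch (RetriesExhaustedWithDetailsException e) {
            for (int i = 0; i < e.getNumExceptions(); i++) {
                // Surface the real cause instead of the generic
                // "Failed 1 action ... servers with issues: null" message.
                if (e.getCause(i) instanceof TableNotFoundException) {
                    throw new TableNotFoundException(
                        "Table " + table.getName() + " does not exist");
                }
            }
            throw e;
        }
    }
}
{code}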



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4772) phoenix.sequence.saltBuckets is not honoured.

2018-06-06 Thread Sergey Soldatov (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16503862#comment-16503862
 ] 

Sergey Soldatov commented on PHOENIX-4772:
--

+1

> phoenix.sequence.saltBuckets is not honoured.
> -
>
> Key: PHOENIX-4772
> URL: https://issues.apache.org/jira/browse/PHOENIX-4772
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4772.patch
>
>
> Expectation: the first connection should have created 'SYSTEM.SEQUENCE' with 
> salt buckets=10 
> {code:java}
> <property>
>   <name>phoenix.sequence.saltBuckets</name>
>   <value>10</value>
> </property>
> {code}
> but this property is not getting honoured and the table is created with 0 
> salt buckets.
> 0: jdbc:phoenix:> select SALT_BUCKETS from system.catalog where table_name 
> ='SEQUENCE' and column_name is null;
> +---------------+
> | SALT_BUCKETS  |
> +---------------+
> | null          |
> +---------------+
> And, honour "phoenix.system.default.keep.deleted.cells" for point-in-time 
> sequences.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4544) Update statistics inconsistent behavior

2018-06-06 Thread Sergey Soldatov (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16503709#comment-16503709
 ] 

Sergey Soldatov commented on PHOENIX-4544:
--

LGTM. 

> Update statistics inconsistent behavior 
> 
>
> Key: PHOENIX-4544
> URL: https://issues.apache.org/jira/browse/PHOENIX-4544
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
>Reporter: Romil Choksi
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: PHOENIX-4544.patch
>
>
> Update statistics may not generate the stats information for all dependent 
> indexes, and this behavior may depend on whether the command is executed 
> synchronously or asynchronously.
> I have a table GIGANTIC_TABLE with ~500k rows with global index I1 and local 
> index I2.
> If async is turned on (the default value):
> {noformat}
> 0: jdbc:phoenix:> update statistics GIGANTIC_TABLE ALL;
> No rows affected (0.081 seconds)
> 0: jdbc:phoenix:> select count(GUIDE_POSTS_ROW_COUNT) from SYSTEM.STATS WHERE 
> PHYSICAL_NAME='I1' AND COLUMN_FAMILY='0';
> +---+
> | COUNT(GUIDE_POSTS_ROW_COUNT)  |
> +---+
> | 5 |
> +---+
> 1 row selected (0.009 seconds)
> 0: jdbc:phoenix:> select count(GUIDE_POSTS_ROW_COUNT) from SYSTEM.STATS WHERE 
> PHYSICAL_NAME='GIGANTIC_TABLE' AND COLUMN_FAMILY='0';
> +---+
> | COUNT(GUIDE_POSTS_ROW_COUNT)  |
> +---+
> | 520   |
> +---+
> 1 row selected (0.014 seconds)
> 0: jdbc:phoenix:> select count(GUIDE_POSTS_ROW_COUNT) from SYSTEM.STATS WHERE 
> PHYSICAL_NAME='GIGANTIC_TABLE' AND COLUMN_FAMILY='L#0';
> +---+
> | COUNT(GUIDE_POSTS_ROW_COUNT)  |
> +---+
> | 0 |
> +---+
> 1 row selected (0.008 seconds)
> 0: jdbc:phoenix:>
> {noformat}
> As we can see there are no records for the local index I2. But if we run 
> statistics for indexes:
> {noformat}
> 0: jdbc:phoenix:> update statistics GIGANTIC_TABLE INDEX;
> No rows affected (0.036 seconds)
> 0: jdbc:phoenix:> select count(GUIDE_POSTS_ROW_COUNT) from SYSTEM.STATS WHERE 
> PHYSICAL_NAME='GIGANTIC_TABLE' AND COLUMN_FAMILY='L#0';
> +---+
> | COUNT(GUIDE_POSTS_ROW_COUNT)  |
> +---+
> | 20|
> +---+
> 1 row selected (0.007 seconds)
> {noformat}
> the statistic for local index is generated correctly.
> Now we turn async off:
> {noformat}
> 0: jdbc:phoenix:> delete from SYSTEM.STATS;
> 547 rows affected (0.079 seconds)
> 0: jdbc:phoenix:> update statistics GIGANTIC_TABLE ALL;
> 999,998 rows affected (4.671 seconds)
> 0: jdbc:phoenix:> select count(GUIDE_POSTS_ROW_COUNT) from SYSTEM.STATS WHERE 
> PHYSICAL_NAME='GIGANTIC_TABLE' AND COLUMN_FAMILY='0';
> +---+
> | COUNT(GUIDE_POSTS_ROW_COUNT)  |
> +---+
> | 520   |
> +---+
> 1 row selected (0.04 seconds)
> 0: jdbc:phoenix:> select count(GUIDE_POSTS_ROW_COUNT) from SYSTEM.STATS WHERE 
> PHYSICAL_NAME='GIGANTIC_TABLE' AND COLUMN_FAMILY='L#0';
> +---+
> | COUNT(GUIDE_POSTS_ROW_COUNT)  |
> +---+
> | 20|
> +---+
> 1 row selected (0.012 seconds)
> 0: jdbc:phoenix:> select count(GUIDE_POSTS_ROW_COUNT) from SYSTEM.STATS WHERE 
> PHYSICAL_NAME='I1' AND COLUMN_FAMILY='0';
> +---+
> | COUNT(GUIDE_POSTS_ROW_COUNT)  |
> +---+
> | 0 |
> +---+
> 1 row selected (0.011 seconds)
> {noformat}
> As we can see we got statistics for the table itself and the local index, 
> but not for the global index.
> Moreover, if we try to update statistics for indexes:
> {noformat}
> 0: jdbc:phoenix:> update statistics GIGANTIC_TABLE INDEX;
> 499,999 rows affected (0.332 seconds)
> 0: jdbc:phoenix:> select count(GUIDE_POSTS_ROW_COUNT) from SYSTEM.STATS WHERE 
> PHYSICAL_NAME='I1' AND COLUMN_FAMILY='0';
> +---+
> | COUNT(GUIDE_POSTS_ROW_COUNT)  |
> +---+
> | 0 |
> +---+
> 1 row selected (0.009 seconds)
> {noformat}
> So, there are still no records for the global index.
> But if we delete statistics first and run update for indexes:
> {noformat}
> 0: jdbc:phoenix:> delete from SYSTEM.STATS;
> 541 rows affected (0.024 seconds)
> 0: jdbc:phoenix:> update statistics GIGANTIC_TABLE INDEX;
> 999,998 

[jira] [Updated] (PHOENIX-4759) During restart RS that hosts SYSTEM.CATALOG table may get stuck

2018-05-31 Thread Sergey Soldatov (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4759:
-
Attachment: PHOENIX-4759-2.master.patch

> During restart RS that hosts SYSTEM.CATALOG table may get stuck
> ---
>
> Key: PHOENIX-4759
> URL: https://issues.apache.org/jira/browse/PHOENIX-4759
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Romil Choksi
>Assignee: Sergey Soldatov
>Priority: Blocker
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4759-1.patch, PHOENIX-4759-2.master.patch
>
>
> Sometimes when a cluster has restarted the regions that belong to 
> SYSTEM.CATALOG and other system tables on the same RS may be stuck in RiT. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4759) During restart RS that hosts SYSTEM.CATALOG table may get stuck

2018-05-31 Thread Sergey Soldatov (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16496823#comment-16496823
 ] 

Sergey Soldatov commented on PHOENIX-4759:
--

[~jamestaylor] sounds reasonable. Will do it shortly. 

> During restart RS that hosts SYSTEM.CATALOG table may get stuck
> ---
>
> Key: PHOENIX-4759
> URL: https://issues.apache.org/jira/browse/PHOENIX-4759
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Romil Choksi
>Assignee: Sergey Soldatov
>Priority: Blocker
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4759-1.patch
>
>
> Sometimes when a cluster has restarted the regions that belong to 
> SYSTEM.CATALOG and other system tables on the same RS may be stuck in RiT. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4759) During restart RS that hosts SYSTEM.CATALOG table may get stuck

2018-05-31 Thread Sergey Soldatov (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4759:
-
Attachment: PHOENIX-4759-1.patch

> During restart RS that hosts SYSTEM.CATALOG table may get stuck
> ---
>
> Key: PHOENIX-4759
> URL: https://issues.apache.org/jira/browse/PHOENIX-4759
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Romil Choksi
>Assignee: Sergey Soldatov
>Priority: Blocker
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4759-1.patch
>
>
> Sometimes when a cluster has restarted the regions that belong to 
> SYSTEM.CATALOG and other system tables on the same RS may be stuck in RiT. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4759) During restart RS that hosts SYSTEM.CATALOG table may get stuck

2018-05-30 Thread Sergey Soldatov (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16496183#comment-16496183
 ] 

Sergey Soldatov commented on PHOENIX-4759:
--

That's well reproduced on the 5.x branch, but may (should) affect the master 
branch under certain circumstances. The SYSTEM.CATALOG region gets stuck during 
the open operation because there are 2 concurrent open-region threads that are 
trying to load system tables. 
Loading sequence:
Thread 1:
MetaDataEndpointImpl -> PhoenixDatabaseMetaData -> (trying to load 
QueryConstants)
Thread 2:
MetaDataRegionObserver -> QueryConstants -> TableProperty -> SQLExceptionCode 
-> (trying to load PhoenixDatabaseMetaData)
Since only one thread can load a given class at a time, and the second thread 
is already loading QueryConstants while the first thread is loading 
PhoenixDatabaseMetaData, we have a deadlock. 
We can break this by removing the dependency between SQLExceptionCode and 
PhoenixDatabaseMetaData.
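The failure mode is easy to reproduce outside Phoenix. Below is a self-contained toy (class and field names are stand-ins for QueryConstants and PhoenixDatabaseMetaData, not the real Phoenix classes) showing two classes whose static initializers reference each other; when two threads trigger initialization concurrently, each holds its own class-init lock and waits on the other forever:

{code:java}
public class ClassInitDeadlockDemo {

    static void pause() { // give both threads time to enter their own <clinit>
        try { Thread.sleep(200); } catch (InterruptedException ignored) { }
    }

    // Stands in for QueryConstants: its static init reaches into "MetaData".
    static class QueryConsts {
        static { pause(); }
        static final long T = MetaData.STAMP; // blocks on MetaData's init lock
    }

    // Stands in for PhoenixDatabaseMetaData: its static init reaches back.
    static class MetaData {
        static { pause(); }
        static final long STAMP = System.nanoTime(); // not a compile-time constant
        static final long BACK = QueryConsts.T;      // blocks on QueryConsts' init lock
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> System.out.println(QueryConsts.T));
        Thread t2 = new Thread(() -> System.out.println(MetaData.STAMP));
        t1.start();
        t2.start();
        t1.join(2000);
        t2.join(2000);
        // Both threads stay parked inside <clinit> -- the same shape as the RS hang.
        System.out.println("t1 stuck=" + t1.isAlive() + "  t2 stuck=" + t2.isAlive());
    }
}
{code}

Breaking any one edge of the cycle, as the fix does for the SQLExceptionCode -> PhoenixDatabaseMetaData dependency, lets both initializations complete.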



> During restart RS that hosts SYSTEM.CATALOG table may get stuck
> ---
>
> Key: PHOENIX-4759
> URL: https://issues.apache.org/jira/browse/PHOENIX-4759
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Romil Choksi
>Assignee: Sergey Soldatov
>Priority: Blocker
> Fix For: 4.14.0, 5.0.0
>
>
> Sometimes when a cluster has restarted the regions that belong to 
> SYSTEM.CATALOG and other system tables on the same RS may be stuck in RiT. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4759) During restart RS that hosts SYSTEM.CATALOG table may get stuck

2018-05-30 Thread Sergey Soldatov (JIRA)
Sergey Soldatov created PHOENIX-4759:


 Summary: During restart RS that hosts SYSTEM.CATALOG table may get 
stuck
 Key: PHOENIX-4759
 URL: https://issues.apache.org/jira/browse/PHOENIX-4759
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.0, 5.0.0
Reporter: Romil Choksi
Assignee: Sergey Soldatov
 Fix For: 4.14.0, 5.0.0


Sometimes when a cluster has restarted the regions that belong to 
SYSTEM.CATALOG and other system tables on the same RS may be stuck in RiT. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4758) Guard against HADOOP_CONF_DIR for HiveMapReduceIT

2018-05-29 Thread Sergey Soldatov (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16494225#comment-16494225
 ] 

Sergey Soldatov commented on PHOENIX-4758:
--

Actually, we already log some messages about HADOOP_CONF_DIR, but failing fast 
looks reasonable. +1
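A minimal sketch of such a fast-fail guard, assuming it runs before the minicluster starts (the method name and message are illustrative, not the actual patch):

{code:java}
public class HadoopConfDirGuard {
    // Fail fast instead of letting the test silently query a local installation.
    static void failIfHadoopConfDirSet() {
        String dir = System.getenv("HADOOP_CONF_DIR");
        if (dir != null && !dir.isEmpty()) {
            throw new IllegalStateException("HADOOP_CONF_DIR is set (" + dir
                + "); unset it so HiveMapReduceIT uses the minicluster");
        }
    }

    public static void main(String[] args) {
        failIfHadoopConfDirSet();
        System.out.println("environment is clean; safe to start the minicluster");
    }
}
{code}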

> Guard against HADOOP_CONF_DIR for HiveMapReduceIT
> -
>
> Key: PHOENIX-4758
> URL: https://issues.apache.org/jira/browse/PHOENIX-4758
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 5.0.0, 4.15.0
>
> Attachments: PHOENIX-4758.001.patch
>
>
> I get bitten by this one every time:
> HiveMapReduceIT will likely fail if you have HADOOP_CONF_DIR set in your 
> environment (as something inside of the test starts querying your local 
> installation instead of the minicluster).
> We should add a check for this before the test runs and fail obviously.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4756) Integration tests for PhoenixStorageHandler don't work on 5.x branch

2018-05-27 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4756:
-
Attachment: PHOENIX-4756-1.patch

> Integration tests for PhoenixStorageHandler don't work on 5.x branch
> --
>
> Key: PHOENIX-4756
> URL: https://issues.apache.org/jira/browse/PHOENIX-4756
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Critical
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4756-1.patch
>
>
> Due to changes in Hive 3.0 and incompatibilities between third-party 
> versions, the tests are broken.
> The following changes are required:
> 1. The jetty and netty versions that Hive, HBase and Hadoop ship are 
> different.  
> 2. Some of the ObjectInspectors (Byte and Double) should use different 
> writables. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4756) Integration tests for PhoenixStorageHandler don't work on 5.x branch

2018-05-27 Thread Sergey Soldatov (JIRA)
Sergey Soldatov created PHOENIX-4756:


 Summary: Integration tests for PhoenixStorageHandler don't work 
on 5.x branch
 Key: PHOENIX-4756
 URL: https://issues.apache.org/jira/browse/PHOENIX-4756
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0
Reporter: Sergey Soldatov
Assignee: Sergey Soldatov
 Fix For: 5.0.0


Due to changes in Hive 3.0 and incompatibilities between third-party versions, 
the tests are broken.
The following changes are required:
1. The jetty and netty versions that Hive, HBase and Hadoop ship are 
different.  
2. Some of the ObjectInspectors (Byte and Double) should use different 
writables. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4747) UDF's integer parameter doesn't accept a negative constant.

2018-05-21 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483162#comment-16483162
 ] 

Sergey Soldatov commented on PHOENIX-4747:
--

[~jamestaylor] yeah, there is a workaround by using CAST(-1 AS INTEGER). 
Possibly we can do it automatically at compile time.
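For illustration, a hedged JDBC sketch of that workaround (the connection URL, table t, column ts, and the ADDTIME UDF are assumed to exist; only the CAST is the point):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class UdfNegativeConstantWorkaround {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // Fails: -1 is parsed as 1 * -1L, i.e. a BIGINT argument (ERROR 203):
            // stmt.executeQuery("SELECT ADDTIME(ts, -1) FROM t");

            // Works: the explicit cast keeps the argument an INTEGER.
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT ADDTIME(ts, CAST(-1 AS INTEGER)) FROM t")) {
                while (rs.next()) {
                    System.out.println(rs.getObject(1));
                }
            }
        }
    }
}
{code}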

> UDF's integer parameter doesn't accept a negative constant.
> -
>
> Key: PHOENIX-4747
> URL: https://issues.apache.org/jira/browse/PHOENIX-4747
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Priority: Major
> Attachments: PHOENIX-4747-IT.patch
>
>
> If a UDF has an integer parameter and we provide a negative constant, it 
> fails with 
> {noformat}
> org.apache.phoenix.schema.ArgumentTypeMismatchException: ERROR 203 (22005): 
> Type mismatch. expected: [INTEGER] but was: BIGINT at ADDTIME argument 2
>   at 
> org.apache.phoenix.parse.FunctionParseNode.validateFunctionArguement(FunctionParseNode.java:214)
>   at 
> org.apache.phoenix.parse.FunctionParseNode.validate(FunctionParseNode.java:193)
>   at 
> org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:331)
>   at 
> org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visitLeave(ProjectionCompiler.java:700)
>   at 
> org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visitLeave(ProjectionCompiler.java:585)
>   at 
> org.apache.phoenix.parse.FunctionParseNode.accept(FunctionParseNode.java:86)
>   at 
> org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:412)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:561)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:507)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:193)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:153)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:490)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:456)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:302)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:291)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:290)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:283)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1830)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:813)
>   at sqlline.SqlLine.begin(SqlLine.java:686)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:291)
> {noformat}
> That happens because negative constants are parsed as (integer value) * -1L, 
> so the result is a long. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4534) upsert/delete/upsert for the same row corrupts the indexes

2018-05-21 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4534:
-
Fix Version/s: (was: 4.15.0)
   4.14.0

> upsert/delete/upsert for the same row corrupts the indexes
> --
>
> Key: PHOENIX-4534
> URL: https://issues.apache.org/jira/browse/PHOENIX-4534
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Romil Choksi
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
>  Labels: HBase-2.0
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4534.patch, PHOENIX-4534_v2.patch, 
> PHOENIX-4534_v3.patch
>
>
> If we delete and upsert again the same row, the corresponding index has a 
> null value. 
> {noformat}
> 0: jdbc:phoenix:> create table a (id integer primary key, f float);
> No rows affected (2.272 seconds)
> 0: jdbc:phoenix:> create index i1 on a (f);
> No rows affected (5.769 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.021 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> | 0.5  | 1|
> +--+--+
> 1 row selected (0.016 seconds)
> 0: jdbc:phoenix:> delete from a where id = 1;
> 1 row affected (0.009 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> +--+--+
> No rows selected (0.015 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +---+--+
> |  0:F  | :ID  |
> +---+--+
> | null  | 1|
> +---+--+
> 1 row selected (0.013 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4747) UDF's integer parameter doesn't accept a negative constant.

2018-05-21 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483143#comment-16483143
 ] 

Sergey Soldatov commented on PHOENIX-4747:
--

A patch to reproduce the problem using an existing integration test.

> UDF's integer parameter doesn't accept a negative constant.
> -
>
> Key: PHOENIX-4747
> URL: https://issues.apache.org/jira/browse/PHOENIX-4747
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Priority: Major
> Attachments: PHOENIX-4747-IT.patch
>
>
> If a UDF has an integer parameter and we provide a negative constant, it 
> fails with 
> {noformat}
> org.apache.phoenix.schema.ArgumentTypeMismatchException: ERROR 203 (22005): 
> Type mismatch. expected: [INTEGER] but was: BIGINT at ADDTIME argument 2
>   at 
> org.apache.phoenix.parse.FunctionParseNode.validateFunctionArguement(FunctionParseNode.java:214)
>   at 
> org.apache.phoenix.parse.FunctionParseNode.validate(FunctionParseNode.java:193)
>   at 
> org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:331)
>   at 
> org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visitLeave(ProjectionCompiler.java:700)
>   at 
> org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visitLeave(ProjectionCompiler.java:585)
>   at 
> org.apache.phoenix.parse.FunctionParseNode.accept(FunctionParseNode.java:86)
>   at 
> org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:412)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:561)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:507)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:193)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:153)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:490)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:456)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:302)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:291)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:290)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:283)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1830)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:813)
>   at sqlline.SqlLine.begin(SqlLine.java:686)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:291)
> {noformat}
> That happens because negative constants are parsed as (integer value) * -1L, 
> so the result is a long. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4747) UDF's integer parameter doesn't accept a negative constant.

2018-05-21 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4747:
-
Attachment: PHOENIX-4747-IT.patch

> UDF's integer parameter doesn't accept a negative constant.
> -
>
> Key: PHOENIX-4747
> URL: https://issues.apache.org/jira/browse/PHOENIX-4747
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Priority: Major
> Attachments: PHOENIX-4747-IT.patch
>
>
> If a UDF has an integer parameter and we provide a negative constant, it 
> fails with 
> {noformat}
> org.apache.phoenix.schema.ArgumentTypeMismatchException: ERROR 203 (22005): 
> Type mismatch. expected: [INTEGER] but was: BIGINT at ADDTIME argument 2
>   at 
> org.apache.phoenix.parse.FunctionParseNode.validateFunctionArguement(FunctionParseNode.java:214)
>   at 
> org.apache.phoenix.parse.FunctionParseNode.validate(FunctionParseNode.java:193)
>   at 
> org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:331)
>   at 
> org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visitLeave(ProjectionCompiler.java:700)
>   at 
> org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visitLeave(ProjectionCompiler.java:585)
>   at 
> org.apache.phoenix.parse.FunctionParseNode.accept(FunctionParseNode.java:86)
>   at 
> org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:412)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:561)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:507)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:193)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:153)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:490)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:456)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:302)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:291)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:290)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:283)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1830)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:813)
>   at sqlline.SqlLine.begin(SqlLine.java:686)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:291)
> {noformat}
> That happens because negative constants are parsed as (integer value) * -1L, 
> so the result is a long. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4747) UDF's integer parameter doesn't accept a negative constant.

2018-05-21 Thread Sergey Soldatov (JIRA)
Sergey Soldatov created PHOENIX-4747:


 Summary: UDF's integer parameter doesn't accept a negative constant.
 Key: PHOENIX-4747
 URL: https://issues.apache.org/jira/browse/PHOENIX-4747
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.0
Reporter: Sergey Soldatov


If a UDF has an integer parameter and we provide a negative constant, it fails 
with 
{noformat}
org.apache.phoenix.schema.ArgumentTypeMismatchException: ERROR 203 (22005): 
Type mismatch. expected: [INTEGER] but was: BIGINT at ADDTIME argument 2
at 
org.apache.phoenix.parse.FunctionParseNode.validateFunctionArguement(FunctionParseNode.java:214)
at 
org.apache.phoenix.parse.FunctionParseNode.validate(FunctionParseNode.java:193)
at 
org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:331)
at 
org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visitLeave(ProjectionCompiler.java:700)
at 
org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visitLeave(ProjectionCompiler.java:585)
at 
org.apache.phoenix.parse.FunctionParseNode.accept(FunctionParseNode.java:86)
at 
org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:412)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:561)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:507)
at 
org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:193)
at 
org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:153)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:490)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:456)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:302)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:291)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:290)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:283)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1830)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:813)
at sqlline.SqlLine.begin(SqlLine.java:686)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)
{noformat}
That happens because negative constants are parsed as (integer value) * -1L, 
so the result is a long. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4692) ArrayIndexOutOfBoundsException in ScanRanges.intersectScan

2018-05-18 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16481038#comment-16481038
 ] 

Sergey Soldatov commented on PHOENIX-4692:
--

+1 as well. Thank you very much, [~maryannxue]

> ArrayIndexOutOfBoundsException in ScanRanges.intersectScan
> --
>
> Key: PHOENIX-4692
> URL: https://issues.apache.org/jira/browse/PHOENIX-4692
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4692-IT.patch, PHOENIX-4692_v1.patch, 
> PHOENIX-4692_v2.patch
>
>
> ScanRanges.intersectScan may fail with AIOOBE if a salted table is used.
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 1
>   at org.apache.phoenix.util.ScanUtil.getKey(ScanUtil.java:333)
>   at org.apache.phoenix.util.ScanUtil.getMinKey(ScanUtil.java:317)
>   at 
> org.apache.phoenix.compile.ScanRanges.intersectScan(ScanRanges.java:371)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:1074)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:631)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.(BaseResultIterators.java:501)
>   at 
> org.apache.phoenix.iterate.ParallelIterators.(ParallelIterators.java:62)
>   at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:274)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:364)
>   at 
> org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:234)
>   at 
> org.apache.phoenix.execute.DelegateQueryPlan.iterator(DelegateQueryPlan.java:144)
>   at 
> org.apache.phoenix.execute.DelegateQueryPlan.iterator(DelegateQueryPlan.java:139)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:314)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:293)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:292)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:285)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:1798)
> {noformat}
> Script to reproduce:
> {noformat}
> CREATE TABLE TEST (PK1 INTEGER NOT NULL, PK2 INTEGER NOT NULL,  ID1 INTEGER, 
> ID2 INTEGER CONSTRAINT PK PRIMARY KEY(PK1 , PK2))SALT_BUCKETS = 4;
> upsert into test values (1,1,1,1);
> upsert into test values (2,2,2,2);
> upsert into test values (2,3,1,2);
> create view TEST_VIEW as select * from TEST where PK1 in (1,2);
> CREATE INDEX IDX_VIEW ON TEST_VIEW (ID1);
>   select /*+ INDEX(TEST_VIEW IDX_VIEW) */ * from TEST_VIEW where ID1 = 1  
> ORDER BY ID2 LIMIT 500 OFFSET 0;
> {noformat}
> That happens because we have a point lookup optimization which reduces 
> RowKeySchema to a single field, while we have more than one slot due to 
> salting. 
> [~jamestaylor] can you please take a look? I'm not sure whether it should be 
> fixed on the ScanUtil level or we just should not use point lookup in such 
> cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4742) Prevent infinite loop with HBase 1.4 and DistinctPrefixFilter

2018-05-17 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479828#comment-16479828
 ] 

Sergey Soldatov commented on PHOENIX-4742:
--

That happens because of the changes in filters in HBase 1.4 and 2.0. Somehow 
the matcher keeps returning SEEK_NEXT_USING_HINT instead of SEEK_NEXT_ROW, so 
we are getting stuck at the last cell. Let me dig a bit.  
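As a toy model of the hang (plain Java, not HBase code; nextCellHint is an assumption standing in for DistinctPrefixFilter's hint on the reverse-scan path), the scan makes no progress whenever the hint never moves past the current position:

{code:java}
public class StuckSeekHintDemo {
    // Buggy hint: under a reverse scan it should point *before* the current
    // row, but instead it returns the current row itself.
    static int nextCellHint(int currentRow) {
        return currentRow;
    }

    public static void main(String[] args) {
        int row = 99; // scanner parked on the last cell of the region
        for (int step = 0; step < 3; step++) { // bounded here; the real bug loops forever
            int hint = nextCellHint(row);
            System.out.printf("step %d: current=%d, seek hint=%d -> %s%n",
                step, row, hint, hint == row ? "no progress" : "advance");
            row = hint;
        }
    }
}
{code}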

> Prevent infinite loop with HBase 1.4 and DistinctPrefixFilter
> -
>
> Key: PHOENIX-4742
> URL: https://issues.apache.org/jira/browse/PHOENIX-4742
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Sergey Soldatov
>Priority: Major
>
> OrderByIT.testOrderByReverseOptimizationWithNUllsLastBug3491 is the only test 
> failing on master (i.e. HBase 1.4). It's getting into an infinite loop when a 
> reverse scan is done for the DistinctPrefixFilter. It'd be nice to fix this 
> so we can do a release for HBase 1.4. At a minimum, we could disable 
> DistinctPrefixFilter when a reverse scan is being done (for HBase 1.4 only).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4742) Prevent infinite loop with HBase 1.4 and DistinctPrefixFilter

2018-05-17 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov reassigned PHOENIX-4742:


Assignee: Sergey Soldatov

> Prevent infinite loop with HBase 1.4 and DistinctPrefixFilter
> -
>
> Key: PHOENIX-4742
> URL: https://issues.apache.org/jira/browse/PHOENIX-4742
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Sergey Soldatov
>Priority: Major
>
> OrderByIT.testOrderByReverseOptimizationWithNUllsLastBug3491 is the only test 
> failing on master (i.e. HBase 1.4). It's getting into an infinite loop when a 
> reverse scan is done for the DistinctPrefixFilter. It'd be nice to fix this 
> so we can do a release for HBase 1.4. At a minimum, we could disable 
> DistinctPrefixFilter when a reverse scan is being done (for HBase 1.4 only).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4692) ArrayIndexOutOfBoundsException in ScanRanges.intersectScan

2018-05-17 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479617#comment-16479617
 ] 

Sergey Soldatov commented on PHOENIX-4692:
--

[~jamestaylor] +1 if there is no other way to avoid adding SkipScanFilter more 
than once. 

> ArrayIndexOutOfBoundsException in ScanRanges.intersectScan
> --
>
> Key: PHOENIX-4692
> URL: https://issues.apache.org/jira/browse/PHOENIX-4692
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4692-IT.patch, PHOENIX-4692_v1.patch
>
>
> ScanRanges.intersectScan may fail with AIOOBE if a salted table is used.
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 1
>   at org.apache.phoenix.util.ScanUtil.getKey(ScanUtil.java:333)
>   at org.apache.phoenix.util.ScanUtil.getMinKey(ScanUtil.java:317)
>   at 
> org.apache.phoenix.compile.ScanRanges.intersectScan(ScanRanges.java:371)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:1074)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:631)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.(BaseResultIterators.java:501)
>   at 
> org.apache.phoenix.iterate.ParallelIterators.(ParallelIterators.java:62)
>   at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:274)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:364)
>   at 
> org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:234)
>   at 
> org.apache.phoenix.execute.DelegateQueryPlan.iterator(DelegateQueryPlan.java:144)
>   at 
> org.apache.phoenix.execute.DelegateQueryPlan.iterator(DelegateQueryPlan.java:139)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:314)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:293)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:292)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:285)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:1798)
> {noformat}
> Script to reproduce:
> {noformat}
> CREATE TABLE TEST (PK1 INTEGER NOT NULL, PK2 INTEGER NOT NULL,  ID1 INTEGER, 
> ID2 INTEGER CONSTRAINT PK PRIMARY KEY(PK1 , PK2))SALT_BUCKETS = 4;
> upsert into test values (1,1,1,1);
> upsert into test values (2,2,2,2);
> upsert into test values (2,3,1,2);
> create view TEST_VIEW as select * from TEST where PK1 in (1,2);
> CREATE INDEX IDX_VIEW ON TEST_VIEW (ID1);
>   select /*+ INDEX(TEST_VIEW IDX_VIEW) */ * from TEST_VIEW where ID1 = 1  
> ORDER BY ID2 LIMIT 500 OFFSET 0;
> {noformat}
> That happens because we have a point lookup optimization which reduces 
> RowKeySchema to a single field, while we have more than one slot due to 
> salting. 
> [~jamestaylor] can you please take a look? I'm not sure whether it should be 
> fixed on the ScanUtil level or we just should not use point lookup in such 
> cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-3163) Split during global index creation may cause ERROR 201 error

2018-05-10 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16471003#comment-16471003
 ] 

Sergey Soldatov commented on PHOENIX-3163:
--

LGTM +1

> Split during global index creation may cause ERROR 201 error
> 
>
> Key: PHOENIX-3163
> URL: https://issues.apache.org/jira/browse/PHOENIX-3163
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-3163_v1.patch, PHOENIX-3163_v3.patch, 
> PHOENIX-3163_v4.patch, PHOENIX-3163_v5.patch, PHOENIX-3163_v6.patch
>
>
> When we create global index and split happen meanwhile there is a chance to 
> fail with ERROR 201:
> {noformat}
> 2016-08-08 15:55:17,248 INFO  [Thread-6] 
> org.apache.phoenix.iterate.BaseResultIterators(878): Failed to execute task 
> during cancel
> java.util.concurrent.ExecutionException: java.sql.SQLException: ERROR 201 
> (22000): Illegal data.
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.close(BaseResultIterators.java:872)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:809)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:713)
>   at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
>   at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$2.execute(UpsertCompiler.java:815)
>   at 
> org.apache.phoenix.compile.DelegateMutationPlan.execute(DelegateMutationPlan.java:31)
>   at 
> org.apache.phoenix.compile.PostIndexDDLCompiler$1.execute(PostIndexDDLCompiler.java:124)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:2823)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1079)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1382)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:343)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:331)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:330)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1440)
>   at 
> org.apache.phoenix.hbase.index.write.TestIndexWriter$1.run(TestIndexWriter.java:93)
> Caused by: java.sql.SQLException: ERROR 201 (22000): Illegal data.
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:441)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
>   at 
> org.apache.phoenix.schema.types.PDataType.newIllegalDataException(PDataType.java:287)
>   at 
> org.apache.phoenix.schema.types.PUnsignedSmallint$UnsignedShortCodec.decodeShort(PUnsignedSmallint.java:146)
>   at 
> org.apache.phoenix.schema.types.PSmallint.toObject(PSmallint.java:104)
>   at org.apache.phoenix.schema.types.PSmallint.toObject(PSmallint.java:28)
>   at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:980)
>   at 
> org.apache.phoenix.schema.types.PUnsignedSmallint.toObject(PUnsignedSmallint.java:102)
>   at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:980)
>   at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:992)
>   at 
> org.apache.phoenix.schema.types.PDataType.coerceBytes(PDataType.java:830)
>   at 
> org.apache.phoenix.schema.types.PDecimal.coerceBytes(PDecimal.java:342)
>   at 
> org.apache.phoenix.schema.types.PDataType.coerceBytes(PDataType.java:810)
>   at 
> org.apache.phoenix.expression.CoerceExpression.evaluate(CoerceExpression.java:149)
>   at 
> org.apache.phoenix.compile.ExpressionProjector.getValue(ExpressionProjector.java:69)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.getBytes(PhoenixResultSet.java:308)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.upsertSelect(UpsertCompiler.java:197)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.access$000(UpsertCompiler.java:115)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$UpsertingParallelIteratorFactory.mutate(UpsertCompi

[jira] [Commented] (PHOENIX-4534) upsert/delete/upsert for the same row corrupts the indexes

2018-05-10 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16470796#comment-16470796
 ] 

Sergey Soldatov commented on PHOENIX-4534:
--

[~jamestaylor] That's 1.4/2.0 HBase specific. Not required for other branches.

> upsert/delete/upsert for the same row corrupts the indexes
> --
>
> Key: PHOENIX-4534
> URL: https://issues.apache.org/jira/browse/PHOENIX-4534
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Romil Choksi
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
>  Labels: HBase-2.0
> Fix For: 4.15.0
>
> Attachments: PHOENIX-4534.patch, PHOENIX-4534_v2.patch, 
> PHOENIX-4534_v3.patch
>
>
> If we delete and upsert again the same row, the corresponding index has a 
> null value. 
> {noformat}
> 0: jdbc:phoenix:> create table a (id integer primary key, f float);
> No rows affected (2.272 seconds)
> 0: jdbc:phoenix:> create index i1 on a (f);
> No rows affected (5.769 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.021 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> | 0.5  | 1|
> +--+--+
> 1 row selected (0.016 seconds)
> 0: jdbc:phoenix:> delete from a where id = 1;
> 1 row affected (0.009 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> +--+--+
> No rows selected (0.015 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +---+--+
> |  0:F  | :ID  |
> +---+--+
> | null  | 1|
> +---+--+
> 1 row selected (0.013 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4534) upsert/delete/upsert for the same row corrupts the indexes

2018-05-10 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16470757#comment-16470757
 ] 

Sergey Soldatov commented on PHOENIX-4534:
--

[~jamestaylor] cherry-picked to master. Thanks for tracking it!

> upsert/delete/upsert for the same row corrupts the indexes
> --
>
> Key: PHOENIX-4534
> URL: https://issues.apache.org/jira/browse/PHOENIX-4534
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Romil Choksi
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
>  Labels: HBase-2.0
> Fix For: 4.15.0
>
> Attachments: PHOENIX-4534.patch, PHOENIX-4534_v2.patch, 
> PHOENIX-4534_v3.patch
>
>
> If we delete and upsert again the same row, the corresponding index has a 
> null value. 
> {noformat}
> 0: jdbc:phoenix:> create table a (id integer primary key, f float);
> No rows affected (2.272 seconds)
> 0: jdbc:phoenix:> create index i1 on a (f);
> No rows affected (5.769 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.021 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> | 0.5  | 1|
> +--+--+
> 1 row selected (0.016 seconds)
> 0: jdbc:phoenix:> delete from a where id = 1;
> 1 row affected (0.009 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> +--+--+
> No rows selected (0.015 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +---+--+
> |  0:F  | :ID  |
> +---+--+
> | null  | 1|
> +---+--+
> 1 row selected (0.013 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (PHOENIX-4646) The data exceeds the max capacity for the data type error for valid scenarios.

2018-05-09 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469518#comment-16469518
 ] 

Sergey Soldatov edited comment on PHOENIX-4646 at 5/9/18 9:15 PM:
--

[~jamestaylor] thanks. LGTM. Sorry for the late response, just got back from a 
vacation. 


was (Author: sergey.soldatov):
[~jamestaylor] thanks. LGTM. Sorry for the late response, just got back from a 
vacation. [~apurtell] if you don't mind I'd commit it to 4.14

> The data exceeds the max capacity for the data type error for valid scenarios.
> --
>
> Key: PHOENIX-4646
> URL: https://issues.apache.org/jira/browse/PHOENIX-4646
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4646.patch, PHOENIX-4646_v2.patch
>
>
> Here is an example:
> {noformat}
> create table test_trim_source(name varchar(160) primary key, id varchar(120), 
> address varchar(160)); 
> create table test_trim_target(name varchar(160) primary key, id varchar(10), 
> address 
>  varchar(10));
> upsert into test_trim_source values('test','test','test');
> upsert into test_trim_target select * from test_trim_source;
> {noformat}
> It fails with 
> {noformat}
> Error: ERROR 206 (22003): The data exceeds the max capacity for the data 
> type. value='test' columnName=ID (state=22003,code=206)
> java.sql.SQLException: ERROR 206 (22003): The data exceeds the max capacity 
> for the data type. value='test' columnName=ID
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:489)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:165)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:149)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:116)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1261)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1203)
>   at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
>   at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$ClientUpsertSelectMutationPlan.execute(UpsertCompiler.java:1300)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:398)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:381)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:380)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:368)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1794)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:813)
>   at sqlline.SqlLine.begin(SqlLine.java:686)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: java.sql.SQLException: ERROR 206 (22003): The data exceeds the max 
> capacity for the data type. value='test' columnName=ID
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:489)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.upsertSelect(UpsertCompiler.java:235)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$UpsertingParallelIteratorFactory.mutate(UpsertCompiler.java:284)
>   at 
> org.apache.phoenix.compile.MutatingParallelIteratorFactory.newIterator(MutatingParallelIteratorFactory.java:59)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:121)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:113)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:

[jira] [Commented] (PHOENIX-4646) The data exceeds the max capacity for the data type error for valid scenarios.

2018-05-09 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469518#comment-16469518
 ] 

Sergey Soldatov commented on PHOENIX-4646:
--

[~jamestaylor] thanks. LGTM. Sorry for the late response, just got back from a 
vacation. [~apurtell] if you don't mind I'd commit it to 4.14

> The data exceeds the max capacity for the data type error for valid scenarios.
> --
>
> Key: PHOENIX-4646
> URL: https://issues.apache.org/jira/browse/PHOENIX-4646
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4646.patch, PHOENIX-4646_v2.patch
>
>
> Here is an example:
> {noformat}
> create table test_trim_source(name varchar(160) primary key, id varchar(120), 
> address varchar(160)); 
> create table test_trim_target(name varchar(160) primary key, id varchar(10), 
> address 
>  varchar(10));
> upsert into test_trim_source values('test','test','test');
> upsert into test_trim_target select * from test_trim_source;
> {noformat}
> It fails with 
> {noformat}
> Error: ERROR 206 (22003): The data exceeds the max capacity for the data 
> type. value='test' columnName=ID (state=22003,code=206)
> java.sql.SQLException: ERROR 206 (22003): The data exceeds the max capacity 
> for the data type. value='test' columnName=ID
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:489)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:165)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:149)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:116)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1261)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1203)
>   at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
>   at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$ClientUpsertSelectMutationPlan.execute(UpsertCompiler.java:1300)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:398)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:381)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:380)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:368)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1794)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:813)
>   at sqlline.SqlLine.begin(SqlLine.java:686)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: java.sql.SQLException: ERROR 206 (22003): The data exceeds the max 
> capacity for the data type. value='test' columnName=ID
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:489)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.upsertSelect(UpsertCompiler.java:235)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$UpsertingParallelIteratorFactory.mutate(UpsertCompiler.java:284)
>   at 
> org.apache.phoenix.compile.MutatingParallelIteratorFactory.newIterator(MutatingParallelIteratorFactory.java:59)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:121)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:113)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat} 
> The problem is that in PVarchar.isSizeCompatible we ignore the actual length of 
> the value whenever the source column has a declared max size. 
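
A minimal sketch of the kind of check that would address this (an illustrative
method with assumed names; not the actual PVarchar.isSizeCompatible signature):
compatibility should depend on the actual value length, not on the source
column's declared maximum.

{code:java}
// Hedged sketch, illustrative only: 'test' (length 4) fits VARCHAR(10)
// even though the source column was declared VARCHAR(120), so the check
// must look at the value itself rather than the declared source size.
static boolean isSizeCompatible(String value, Integer targetMaxLength) {
    if (value == null || targetMaxLength == null) {
        return true; // null value or unconstrained target always fits
    }
    return value.length() <= targetMaxLength;
}
{code}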



--
This 

[jira] [Commented] (PHOENIX-4733) NPE while running sql through file using psql

2018-05-09 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469491#comment-16469491
 ] 

Sergey Soldatov commented on PHOENIX-4733:
--

LGTM +1

> NPE while running sql through file using psql
> -
>
> Key: PHOENIX-4733
> URL: https://issues.apache.org/jira/browse/PHOENIX-4733
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Srikanth Janardhan
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4733.patch
>
>
> {code:java}
> cat /tmp/test.sql
> CREATE TABLE IF NOT EXISTS QETEST (ID INTEGER NOT NULL PRIMARY KEY, A 
> VARCHAR, B INTEGER);
> upsert into QETEST VALUES(1,'A',10);
> upsert into QETEST VALUES(2,'B',1000);
> upsert into QETEST VALUES(3,'A',20);
> upsert into QETEST VALUES(4,'A',100);
> upsert into QETEST VALUES(5,'B',9000);
> SELECT A||'_GROUP' AS GRP,SUM(B)||'_RESULT' AS SUM FROM QETEST GROUP BY A;
> DROP TABLE QETEST;{code}
> bin/psql.py localhost /tmp/test.sql
> {code:java}
> no rows upserted
> Time: 0.858 sec(s)
> 1 row upserted
> Time: 0.04 sec(s)
> 1 row upserted
> Time: 0.004 sec(s)
> 1 row upserted
> Time: 0.006 sec(s)
> 1 row upserted
> Time: 0.004 sec(s)
> 1 row upserted
> Time: 0.004 sec(s)
> java.lang.NullPointerException: null value in entry: QUERY_I=null
> at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:235)
> at com.google.common.collect.ImmutableMap.entryOf(ImmutableMap.java:144)
> at com.google.common.collect.ImmutableMap$Builder.put(ImmutableMap.java:182)
> at 
> org.apache.phoenix.log.QueryLoggerUtil.getInitialDetails(QueryLoggerUtil.java:50)
> at 
> org.apache.phoenix.log.QueryLoggerUtil.logInitialDetails(QueryLoggerUtil.java:36)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.createQueryLogger(PhoenixStatement.java:1783)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:176)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:183)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection.executeStatements(PhoenixConnection.java:468)
> at 
> org.apache.phoenix.util.PhoenixRuntime.executeStatements(PhoenixRuntime.java:348)
> at org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:295){code}
> FYI [~jamestaylor] , if you see it a blocker for 4.14.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4645) PhoenixStorageHandler doesn't handle correctly data/timestamp in push down predicate when engine is tez.

2018-04-20 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16446410#comment-16446410
 ] 

Sergey Soldatov commented on PHOENIX-4645:
--

Added a test as well as fixed an issue where we were unable to handle timestamps 
with more than 3 digits in the fractional (nanoseconds) part; they are now 
adjusted to the Phoenix-compatible 9 digits. [~elserj] [~rajeshbabu] could you 
guys take a look? The changes are obvious. 
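
For illustration, a minimal sketch of the normalization just described
(a hypothetical helper; not the actual patch): pad or trim the fractional part
of a timestamp literal to the 9 digits Phoenix expects.

{code:java}
// Hedged sketch, not the actual patch: normalize the fractional part of
// a literal such as '2018-03-01 01:00:00.12345' to exactly 9 digits.
static String normalizeNanos(String ts) {
    int dot = ts.lastIndexOf('.');
    if (dot < 0) {
        return ts + ".000000000"; // no fractional part at all
    }
    StringBuilder frac = new StringBuilder(ts.substring(dot + 1));
    while (frac.length() < 9) {
        frac.append('0'); // pad short fractions with trailing zeros
    }
    return ts.substring(0, dot + 1) + frac.substring(0, 9); // trim long ones
}
{code}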

> PhoenixStorageHandler doesn't handle correctly data/timestamp in push down 
> predicate when engine is tez. 
> -
>
> Key: PHOENIX-4645
> URL: https://issues.apache.org/jira/browse/PHOENIX-4645
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
>  Labels: HivePhoenix
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4645-wip.patch, PHOENIX-4645.patch
>
>
> DDLs:
> {noformat}
> CREATE TABLE TEST_PHOENIX
> (
> PART_ID BIGINT NOT NULL,
> COMMIT_TIMESTAMP TIMESTAMP,
> CONSTRAINT pk PRIMARY KEY (PART_ID)
> )
> SALT_BUCKETS=9;
> CREATE EXTERNAL TABLE TEST_HIVE
> (
> PART_ID BIGINT,
> SOURCEDB_COMMIT_TIMESTAMP TIMESTAMP
> )
> STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
> TBLPROPERTIES
> (
> "phoenix.table.name" = "TEST_PHOENIX",
> "phoenix.zookeeper.quorum" = "localhost",
> "phoenix.zookeeper.znode.parent" = "/hbase",
> "phoenix.zookeeper.client.port" = "2181",
> "phoenix.rowkeys" = "PART_ID",
> "phoenix.column.mapping" = 
> "part_id:PART_ID,sourcedb_commit_timestamp:COMMIT_TIMESTAMP"
> );
> {noformat}
> Query :
> {noformat}
> hive> select * from TEST_HIVE2 where sourcedb_commit_timestamp between 
> '2018-03-01 01:00:00.000' and  '2018-03-20 01:00:00.000';
> OK
> Failed with exception java.io.IOException:java.lang.RuntimeException: 
> org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type 
> mismatch. TIMESTAMP and VARCHAR for "sourcedb_commit_timestamp" >= 
> '2018-03-01 01:00:00.000'
> {noformat}
> That happens because we don't use the mapped column name when we check whether 
> we need to apply the to_timestamp/to_date function. For the default mapping, 
> the regexp patterns don't take into account that the column name is double 
> quoted. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4645) PhoenixStorageHandler doesn't handle correctly data/timestamp in push down predicate when engine is tez.

2018-04-20 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4645:
-
Attachment: PHOENIX-4645.patch

> PhoenixStorageHandler doesn't handle correctly data/timestamp in push down 
> predicate when engine is tez. 
> -
>
> Key: PHOENIX-4645
> URL: https://issues.apache.org/jira/browse/PHOENIX-4645
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
>  Labels: HivePhoenix
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4645-wip.patch, PHOENIX-4645.patch
>
>
> DDLs:
> {noformat}
> CREATE TABLE TEST_PHOENIX
> (
> PART_ID BIGINT NOT NULL,
> COMMIT_TIMESTAMP TIMESTAMP,
> CONSTRAINT pk PRIMARY KEY (PART_ID)
> )
> SALT_BUCKETS=9;
> CREATE EXTERNAL TABLE TEST_HIVE
> (
> PART_ID BIGINT,
> SOURCEDB_COMMIT_TIMESTAMP TIMESTAMP
> )
> STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
> TBLPROPERTIES
> (
> "phoenix.table.name" = "TEST_PHOENIX",
> "phoenix.zookeeper.quorum" = "localhost",
> "phoenix.zookeeper.znode.parent" = "/hbase",
> "phoenix.zookeeper.client.port" = "2181",
> "phoenix.rowkeys" = "PART_ID",
> "phoenix.column.mapping" = 
> "part_id:PART_ID,sourcedb_commit_timestamp:COMMIT_TIMESTAMP"
> );
> {noformat}
> Query :
> {noformat}
> hive> select * from TEST_HIVE2 where sourcedb_commit_timestamp between 
> '2018-03-01 01:00:00.000' and  '2018-03-20 01:00:00.000';
> OK
> Failed with exception java.io.IOException:java.lang.RuntimeException: 
> org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type 
> mismatch. TIMESTAMP and VARCHAR for "sourcedb_commit_timestamp" >= 
> '2018-03-01 01:00:00.000'
> {noformat}
> That happens because we don't use the mapped column name when we check whether 
> we need to apply the to_timestamp/to_date function. For the default mapping, 
> the regexp patterns don't take into account that the column name is double 
> quoted. 
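
As a plain illustration of the second point (an assumed pattern, not the
actual Phoenix-Hive code), the column-name regexp has to tolerate an optional
surrounding double quote, otherwise the predicate is never recognized as a
timestamp comparison:

{code:java}
import java.util.regex.Pattern;

// Hedged sketch, illustrative only: without the optional \"? on both
// sides, "sourcedb_commit_timestamp" >= '2018-03-01 01:00:00.000' is
// not matched, so no to_timestamp() wrapping is applied to the literal.
Pattern p = Pattern.compile(
    "\"?sourcedb_commit_timestamp\"?\\s*(>=|<=|<|>|=)");
boolean matches = p.matcher(
    "\"sourcedb_commit_timestamp\" >= '2018-03-01 01:00:00.000'").find();
{code}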



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4692) ArrayIndexOutOfBoundsException in ScanRanges.intersectScan

2018-04-18 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16442015#comment-16442015
 ] 

Sergey Soldatov commented on PHOENIX-4692:
--

Attached a patch for IT test to reproduce the problem.

> ArrayIndexOutOfBoundsException in ScanRanges.intersectScan
> --
>
> Key: PHOENIX-4692
> URL: https://issues.apache.org/jira/browse/PHOENIX-4692
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4692-IT.patch
>
>
> ScanRanges.intersectScan may fail with AIOOBE if a salted table is used.
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 1
>   at org.apache.phoenix.util.ScanUtil.getKey(ScanUtil.java:333)
>   at org.apache.phoenix.util.ScanUtil.getMinKey(ScanUtil.java:317)
>   at 
> org.apache.phoenix.compile.ScanRanges.intersectScan(ScanRanges.java:371)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:1074)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:631)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.(BaseResultIterators.java:501)
>   at 
> org.apache.phoenix.iterate.ParallelIterators.(ParallelIterators.java:62)
>   at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:274)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:364)
>   at 
> org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:234)
>   at 
> org.apache.phoenix.execute.DelegateQueryPlan.iterator(DelegateQueryPlan.java:144)
>   at 
> org.apache.phoenix.execute.DelegateQueryPlan.iterator(DelegateQueryPlan.java:139)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:314)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:293)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:292)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:285)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:1798)
> {noformat}
> Script to reproduce:
> {noformat}
> CREATE TABLE TEST (PK1 INTEGER NOT NULL, PK2 INTEGER NOT NULL,  ID1 INTEGER, 
> ID2 INTEGER CONSTRAINT PK PRIMARY KEY(PK1 , PK2))SALT_BUCKETS = 4;
> upsert into test values (1,1,1,1);
> upsert into test values (2,2,2,2);
> upsert into test values (2,3,1,2);
> create view TEST_VIEW as select * from TEST where PK1 in (1,2);
> CREATE INDEX IDX_VIEW ON TEST_VIEW (ID1);
>   select /*+ INDEX(TEST_VIEW IDX_VIEW) */ * from TEST_VIEW where ID1 = 1  
> ORDER BY ID2 LIMIT 500 OFFSET 0;
> {noformat}
> That happens because we have a point lookup optimization which reduces the 
> RowKeySchema to a single field, while we have more than one slot due to 
> salting. [~jamestaylor] can you please take a look? I'm not sure whether it 
> should be fixed at the ScanUtil level or whether we just should not use point 
> lookup in such cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4692) ArrayIndexOutOfBoundsException in ScanRanges.intersectScan

2018-04-18 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4692:
-
Attachment: PHOENIX-4692-IT.patch

> ArrayIndexOutOfBoundsException in ScanRanges.intersectScan
> --
>
> Key: PHOENIX-4692
> URL: https://issues.apache.org/jira/browse/PHOENIX-4692
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4692-IT.patch
>
>
> ScanRanges.intersectScan may fail with AIOOBE if a salted table is used.
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 1
>   at org.apache.phoenix.util.ScanUtil.getKey(ScanUtil.java:333)
>   at org.apache.phoenix.util.ScanUtil.getMinKey(ScanUtil.java:317)
>   at 
> org.apache.phoenix.compile.ScanRanges.intersectScan(ScanRanges.java:371)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:1074)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:631)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.(BaseResultIterators.java:501)
>   at 
> org.apache.phoenix.iterate.ParallelIterators.(ParallelIterators.java:62)
>   at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:274)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:364)
>   at 
> org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:234)
>   at 
> org.apache.phoenix.execute.DelegateQueryPlan.iterator(DelegateQueryPlan.java:144)
>   at 
> org.apache.phoenix.execute.DelegateQueryPlan.iterator(DelegateQueryPlan.java:139)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:314)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:293)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:292)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:285)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:1798)
> {noformat}
> Script to reproduce:
> {noformat}
> CREATE TABLE TEST (PK1 INTEGER NOT NULL, PK2 INTEGER NOT NULL,  ID1 INTEGER, 
> ID2 INTEGER CONSTRAINT PK PRIMARY KEY(PK1 , PK2))SALT_BUCKETS = 4;
> upsert into test values (1,1,1,1);
> upsert into test values (2,2,2,2);
> upsert into test values (2,3,1,2);
> create view TEST_VIEW as select * from TEST where PK1 in (1,2);
> CREATE INDEX IDX_VIEW ON TEST_VIEW (ID1);
>   select /*+ INDEX(TEST_VIEW IDX_VIEW) */ * from TEST_VIEW where ID1 = 1  
> ORDER BY ID2 LIMIT 500 OFFSET 0;
> {noformat}
> That happens because we have a point lookup optimization which reduces the 
> RowKeySchema to a single field, while we have more than one slot due to 
> salting. [~jamestaylor] can you please take a look? I'm not sure whether it 
> should be fixed at the ScanUtil level or whether we just should not use point 
> lookup in such cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4692) ArrayIndexOutOfBoundsException in ScanRanges.intersectScan

2018-04-18 Thread Sergey Soldatov (JIRA)
Sergey Soldatov created PHOENIX-4692:


 Summary: ArrayIndexOutOfBoundsException in ScanRanges.intersectScan
 Key: PHOENIX-4692
 URL: https://issues.apache.org/jira/browse/PHOENIX-4692
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.0
Reporter: Sergey Soldatov
 Fix For: 4.14.0


ScanRanges.intersectScan may fail with AIOOBE if a salted table is used.
{noformat}
java.lang.ArrayIndexOutOfBoundsException: 1

at org.apache.phoenix.util.ScanUtil.getKey(ScanUtil.java:333)
at org.apache.phoenix.util.ScanUtil.getMinKey(ScanUtil.java:317)
at 
org.apache.phoenix.compile.ScanRanges.intersectScan(ScanRanges.java:371)
at 
org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:1074)
at 
org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:631)
at 
org.apache.phoenix.iterate.BaseResultIterators.(BaseResultIterators.java:501)
at 
org.apache.phoenix.iterate.ParallelIterators.(ParallelIterators.java:62)
at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:274)
at 
org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:364)
at 
org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:234)
at 
org.apache.phoenix.execute.DelegateQueryPlan.iterator(DelegateQueryPlan.java:144)
at 
org.apache.phoenix.execute.DelegateQueryPlan.iterator(DelegateQueryPlan.java:139)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:314)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:293)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:292)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:285)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:1798)
{noformat}
Script to reproduce:
{noformat}
CREATE TABLE TEST (PK1 INTEGER NOT NULL, PK2 INTEGER NOT NULL,  ID1 INTEGER, 
ID2 INTEGER CONSTRAINT PK PRIMARY KEY(PK1 , PK2))SALT_BUCKETS = 4;
upsert into test values (1,1,1,1);
upsert into test values (2,2,2,2);
upsert into test values (2,3,1,2);

create view TEST_VIEW as select * from TEST where PK1 in (1,2);
CREATE INDEX IDX_VIEW ON TEST_VIEW (ID1);


  select /*+ INDEX(TEST_VIEW IDX_VIEW) */ * from TEST_VIEW where ID1 = 1  ORDER 
BY ID2 LIMIT 500 OFFSET 0;
{noformat}

That happens because we have a point lookup optimization which reduces the 
RowKeySchema to a single field, while we have more than one slot due to 
salting. [~jamestaylor] can you please take a look? I'm not sure whether it 
should be fixed at the ScanUtil level or whether we just should not use point 
lookup in such cases.
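
For illustration only (a hypothetical guard, not a proposed patch), the
mismatch could be surfaced explicitly before slot positions are used to index
into the schema:

{code:java}
// Hedged sketch, illustrative names: fail fast when the scan carries
// more key slots (salt byte + PK columns) than the row-key schema has
// fields after the point-lookup optimization collapsed it.
static void checkSlotsAgainstSchema(int slotCount, int schemaFieldCount) {
    if (slotCount > schemaFieldCount) {
        throw new IllegalStateException("schema has " + schemaFieldCount
            + " field(s) but the scan carries " + slotCount + " slot(s)");
    }
}
{code}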



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4496) Fix RowValueConstructorIT and IndexMetadataIT

2018-04-11 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434120#comment-16434120
 ] 

Sergey Soldatov commented on PHOENIX-4496:
--

+1. I think it's safe to commit it to all branches; it would make the 
behavior of our filters the same for all HBase versions. 

> Fix RowValueConstructorIT and IndexMetadataIT
> -
>
> Key: PHOENIX-4496
> URL: https://issues.apache.org/jira/browse/PHOENIX-4496
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.14.0
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Ankit Singhal
>Priority: Major
>  Labels: HBase-2.0
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4496.patch
>
>
> {noformat}
> [ERROR] Tests run: 46, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 117.444 s <<< FAILURE! - in org.apache.phoenix.end2end.RowValueConstructorIT
> [ERROR] 
> testRVCLastPkIsTable1stPkIndex(org.apache.phoenix.end2end.RowValueConstructorIT)
>   Time elapsed: 4.516 s  <<< FAILURE!
> java.lang.AssertionError
> at 
> org.apache.phoenix.end2end.RowValueConstructorIT.testRVCLastPkIsTable1stPkIndex(RowValueConstructorIT.java:1584)
> {noformat}
> {noformat}
> ERROR] Tests run: 14, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 79.381 s <<< FAILURE! - in org.apache.phoenix.end2end.index.IndexMetadataIT
> [ERROR] 
> testMutableTableOnlyHasPrimaryKeyIndex(org.apache.phoenix.end2end.index.IndexMetadataIT)
>   Time elapsed: 4.504 s  <<< FAILURE!
> java.lang.AssertionError
> at 
> org.apache.phoenix.end2end.index.IndexMetadataIT.helpTestTableOnlyHasPrimaryKeyIndex(IndexMetadataIT.java:662)
> at 
> org.apache.phoenix.end2end.index.IndexMetadataIT.testMutableTableOnlyHasPrimaryKeyIndex(IndexMetadataIT.java:623)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4687) index ScannerBuilder builds filter list not compatible with HBase 1.4

2018-04-11 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov resolved PHOENIX-4687.
--
Resolution: Duplicate

> index ScannerBuilder builds filter list not compatible with HBase 1.4
> -
>
> Key: PHOENIX-4687
> URL: https://issues.apache.org/jira/browse/PHOENIX-4687
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sergey Soldatov
>Priority: Critical
>
> This is related to the IndexMetadataIT#testMutableTableOnlyHasPrimaryKeyIndex 
> failure on the master branch. 
> In ScannerBuilder we build the list of filters for indexes:
> 1. columns filter
> 2. timestamp filter
> 3. delete tracker
> Later, to produce the scanner, we run a kv scanner with this filter list and 
> return either EmptyScanner or CoveredDeleteScanner based on the result. 
> When the column list is empty, the behavior of the filter's 
> filterAllRemaining() has changed:
> in 1.3 it returns true, in 1.4 it returns false and lets the FilterList 
> proceed to the next filter. As a result, instead of getting an EmptyScanner 
> we build a CoveredDeleteScanner. 
> We may unify the behavior and, in the case of an empty column list, 
> explicitly return EmptyScanner. Easy patch, but I want some confirmation that 
> getting an empty column list in this method is by design and the root cause 
> is not somewhere else. 
> FYI [~rajeshbabu], [~elserj], [~jamestaylor]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4496) Fix RowValueConstructorIT and IndexMetadataIT

2018-04-11 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4496:
-
Fix Version/s: 4.14.0

> Fix RowValueConstructorIT and IndexMetadataIT
> -
>
> Key: PHOENIX-4496
> URL: https://issues.apache.org/jira/browse/PHOENIX-4496
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.14.0
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Ankit Singhal
>Priority: Major
>  Labels: HBase-2.0
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4496.patch
>
>
> {noformat}
> [ERROR] Tests run: 46, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 117.444 s <<< FAILURE! - in org.apache.phoenix.end2end.RowValueConstructorIT
> [ERROR] 
> testRVCLastPkIsTable1stPkIndex(org.apache.phoenix.end2end.RowValueConstructorIT)
>   Time elapsed: 4.516 s  <<< FAILURE!
> java.lang.AssertionError
> at 
> org.apache.phoenix.end2end.RowValueConstructorIT.testRVCLastPkIsTable1stPkIndex(RowValueConstructorIT.java:1584)
> {noformat}
> {noformat}
> ERROR] Tests run: 14, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 79.381 s <<< FAILURE! - in org.apache.phoenix.end2end.index.IndexMetadataIT
> [ERROR] 
> testMutableTableOnlyHasPrimaryKeyIndex(org.apache.phoenix.end2end.index.IndexMetadataIT)
>   Time elapsed: 4.504 s  <<< FAILURE!
> java.lang.AssertionError
> at 
> org.apache.phoenix.end2end.index.IndexMetadataIT.helpTestTableOnlyHasPrimaryKeyIndex(IndexMetadataIT.java:662)
> at 
> org.apache.phoenix.end2end.index.IndexMetadataIT.testMutableTableOnlyHasPrimaryKeyIndex(IndexMetadataIT.java:623)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4496) Fix RowValueConstructorIT and IndexMetadataIT

2018-04-11 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4496:
-
Affects Version/s: 4.14.0

> Fix RowValueConstructorIT and IndexMetadataIT
> -
>
> Key: PHOENIX-4496
> URL: https://issues.apache.org/jira/browse/PHOENIX-4496
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.14.0
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Ankit Singhal
>Priority: Major
>  Labels: HBase-2.0
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4496.patch
>
>
> {noformat}
> [ERROR] Tests run: 46, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 117.444 s <<< FAILURE! - in org.apache.phoenix.end2end.RowValueConstructorIT
> [ERROR] 
> testRVCLastPkIsTable1stPkIndex(org.apache.phoenix.end2end.RowValueConstructorIT)
>   Time elapsed: 4.516 s  <<< FAILURE!
> java.lang.AssertionError
> at 
> org.apache.phoenix.end2end.RowValueConstructorIT.testRVCLastPkIsTable1stPkIndex(RowValueConstructorIT.java:1584)
> {noformat}
> {noformat}
> ERROR] Tests run: 14, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 79.381 s <<< FAILURE! - in org.apache.phoenix.end2end.index.IndexMetadataIT
> [ERROR] 
> testMutableTableOnlyHasPrimaryKeyIndex(org.apache.phoenix.end2end.index.IndexMetadataIT)
>   Time elapsed: 4.504 s  <<< FAILURE!
> java.lang.AssertionError
> at 
> org.apache.phoenix.end2end.index.IndexMetadataIT.helpTestTableOnlyHasPrimaryKeyIndex(IndexMetadataIT.java:662)
> at 
> org.apache.phoenix.end2end.index.IndexMetadataIT.testMutableTableOnlyHasPrimaryKeyIndex(IndexMetadataIT.java:623)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4687) index ScannerBuilder builds filter list not compatible with HBase 1.4

2018-04-10 Thread Sergey Soldatov (JIRA)
Sergey Soldatov created PHOENIX-4687:


 Summary: index ScannerBuilder builds filter list not compatible 
with HBase 1.4
 Key: PHOENIX-4687
 URL: https://issues.apache.org/jira/browse/PHOENIX-4687
 Project: Phoenix
  Issue Type: Bug
Reporter: Sergey Soldatov


This is related to the IndexMetadataIT#testMutableTableOnlyHasPrimaryKeyIndex 
failure on the master branch. 
In ScannerBuilder we build the list of filters for indexes:
1. columns filter
2. timestamp filter
3. delete tracker

Later, to produce the scanner, we run a kv scanner with this filter list and 
return either EmptyScanner or CoveredDeleteScanner based on the result. 
When the column list is empty, the behavior of the filter's 
filterAllRemaining() has changed:
in 1.3 it returns true, in 1.4 it returns false and lets the FilterList 
proceed to the next filter. As a result, instead of getting an EmptyScanner 
we build a CoveredDeleteScanner. 

We may unify the behavior and, in the case of an empty column list, explicitly 
return EmptyScanner. Easy patch, but I want some confirmation that getting an 
empty column list in this method is by design and the root cause is not 
somewhere else. 

FYI [~rajeshbabu], [~elserj], [~jamestaylor]
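
A minimal sketch of the suggested unification (illustrative names; not the
actual ScannerBuilder code):

{code:java}
// Hedged sketch, illustrative only: with no columns to track there is
// nothing to scan, so short-circuit to EmptyScanner instead of relying
// on FilterList#filterAllRemaining(), whose result for this case
// differs between HBase 1.3 (true) and 1.4 (false).
if (columns.isEmpty()) {           // 'columns' is the index column set
    return new EmptyScanner();     // identical behavior on 1.3 and 1.4
}
// ... otherwise fall through to the existing filter-list construction
{code}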



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4366) Rebuilding a local index fails sometimes

2018-04-10 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432784#comment-16432784
 ] 

Sergey Soldatov commented on PHOENIX-4366:
--

[~samarthjain] there may be 2 different scanners at the same moment. One 
has the encoding scheme and the other one doesn't, so the second scan may 
override it and the first scanner will fail with the exception mentioned 
earlier. [~jamestaylor] the changes look good. The concern I have is not 
related to this particular JIRA, but the whole idea that we rely on the client 
to decide whether to use encoded columns makes me worry. Recently I had a case 
where an app with an old version of the client was used to ingest the data, 
and the result was a dataset with null values for non-PK columns. Definitely a 
topic for a separate JIRA, but looking at the current code I can hardly 
imagine how we can prevent that. 

> Rebuilding a local index fails sometimes
> 
>
> Key: PHOENIX-4366
> URL: https://issues.apache.org/jira/browse/PHOENIX-4366
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Marcin Januszkiewicz
>Assignee: James Taylor
>Priority: Blocker
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4366_v1.patch
>
>
> We have a table created in 4.12 with the new column encoding scheme and with 
> several local indexes. Sometimes when we issue an ALTER INDEX ... REBUILD 
> command, it fails with the following exception:
> {noformat}
> Error: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TRACES,\x01BY01O90A6-$599a349e,1509979836322.3f
> 30c9d449ed6c60a1cda6898f766bd0.: null 
>   
>   
> at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)  
>   
>
> at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)   
>   
>
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:255)
>   
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:284)
>
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2541)
>   
>  
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
> 
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2183)
>   
>   
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>   
>   
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183) 
>   
>
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163) 
>   
>
> Caused by: java.lang.UnsupportedOperationException
>   
>   
> at 
> org.apache.phoenix.schema.PTable$QualifierEncodingScheme$1.decode(PTable.java:247)
>   
>
> at 
> org.apache.phoenix.schema.tuple.EncodedColumnQualiferCellsList.add(EncodedColumnQualiferCellsList.java:141)
> 
> at 
> org.apache.phoenix.schema.tuple.EncodedColumnQualiferCellsList.add(EncodedColumnQualiferCellsList.java:56)
>  
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:560) 
> 

[jira] [Commented] (PHOENIX-4672) Fix naming of QUERY_SERVER_KERBEROS_HTTP_PRINCIPAL_ATTRIB

2018-04-10 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432717#comment-16432717
 ] 

Sergey Soldatov commented on PHOENIX-4672:
--

LGTM +1.

> Fix naming of QUERY_SERVER_KERBEROS_HTTP_PRINCIPAL_ATTRIB
> -
>
> Key: PHOENIX-4672
> URL: https://issues.apache.org/jira/browse/PHOENIX-4672
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Trivial
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4672.001.patch, PHOENIX-4672.diff
>
>
> The HTTP-specific kerberos credentials implemented in PHOENIX-4533 introduce 
> some ambiguity: It is presently 
> {{phoenix.queryserver.kerberos.http.principal}}, but it should be 
> {{phoenix.queryserver.http.kerberos.principal}} to match the rest of Hadoop, 
> HBase, and Phoenix configuration kerberos principal properties.
> Need to update docs too.
> FYI [~lbronshtein]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4534) upsert/delete/upsert for the same row corrupts the indexes

2018-04-09 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431377#comment-16431377
 ] 

Sergey Soldatov commented on PHOENIX-4534:
--

Just confirmed that with the v3 patch applied, 
PartialIndexRebuilderIT#testWriteWhileRebuilding passes.

> upsert/delete/upsert for the same row corrupts the indexes
> --
>
> Key: PHOENIX-4534
> URL: https://issues.apache.org/jira/browse/PHOENIX-4534
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Romil Choksi
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
>  Labels: HBase-2.0
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4534.patch, PHOENIX-4534_v2.patch, 
> PHOENIX-4534_v3.patch
>
>
> If we delete and upsert again the same row, the corresponding index has a 
> null value. 
> {noformat}
> 0: jdbc:phoenix:> create table a (id integer primary key, f float);
> No rows affected (2.272 seconds)
> 0: jdbc:phoenix:> create index i1 on a (f);
> No rows affected (5.769 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.021 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> | 0.5  | 1|
> +--+--+
> 1 row selected (0.016 seconds)
> 0: jdbc:phoenix:> delete from a where id = 1;
> 1 row affected (0.009 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> +--+--+
> No rows selected (0.015 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +---+--+
> |  0:F  | :ID  |
> +---+--+
> | null  | 1|
> +---+--+
> 1 row selected (0.013 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4534) upsert/delete/upsert for the same row corrupts the indexes

2018-04-09 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4534:
-
Affects Version/s: 4.14.0

> upsert/delete/upsert for the same row corrupts the indexes
> --
>
> Key: PHOENIX-4534
> URL: https://issues.apache.org/jira/browse/PHOENIX-4534
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Romil Choksi
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
>  Labels: HBase-2.0
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4534.patch, PHOENIX-4534_v2.patch, 
> PHOENIX-4534_v3.patch
>
>
> If we delete and upsert again the same row, the corresponding index has a 
> null value. 
> {noformat}
> 0: jdbc:phoenix:> create table a (id integer primary key, f float);
> No rows affected (2.272 seconds)
> 0: jdbc:phoenix:> create index i1 on a (f);
> No rows affected (5.769 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.021 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> | 0.5  | 1|
> +--+--+
> 1 row selected (0.016 seconds)
> 0: jdbc:phoenix:> delete from a where id = 1;
> 1 row affected (0.009 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> +--+--+
> No rows selected (0.015 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +---+--+
> |  0:F  | :ID  |
> +---+--+
> | null  | 1|
> +---+--+
> 1 row selected (0.013 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4534) upsert/delete/upsert for the same row corrupts the indexes

2018-04-09 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4534:
-
Fix Version/s: 4.14.0

> upsert/delete/upsert for the same row corrupts the indexes
> --
>
> Key: PHOENIX-4534
> URL: https://issues.apache.org/jira/browse/PHOENIX-4534
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Romil Choksi
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
>  Labels: HBase-2.0
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4534.patch, PHOENIX-4534_v2.patch, 
> PHOENIX-4534_v3.patch
>
>
> If we delete and upsert again the same row, the corresponding index has a 
> null value. 
> {noformat}
> 0: jdbc:phoenix:> create table a (id integer primary key, f float);
> No rows affected (2.272 seconds)
> 0: jdbc:phoenix:> create index i1 on a (f);
> No rows affected (5.769 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.021 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> | 0.5  | 1|
> +--+--+
> 1 row selected (0.016 seconds)
> 0: jdbc:phoenix:> delete from a where id = 1;
> 1 row affected (0.009 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> +--+--+
> No rows selected (0.015 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +---+--+
> |  0:F  | :ID  |
> +---+--+
> | null  | 1|
> +---+--+
> 1 row selected (0.013 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4534) upsert/delete/upsert for the same row corrupts the indexes

2018-04-09 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431360#comment-16431360
 ] 

Sergey Soldatov commented on PHOENIX-4534:
--

Looks like those changes have been made in HBase 1.4 as well. And that's the 
reason why the master branch has a number of index failures related to the 
upsert/delete/upsert row scenario. [~elserj] that's one of the problems I 
mentioned earlier. 

> upsert/delete/upsert for the same row corrupts the indexes
> --
>
> Key: PHOENIX-4534
> URL: https://issues.apache.org/jira/browse/PHOENIX-4534
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
>Reporter: Romil Choksi
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4534.patch, PHOENIX-4534_v2.patch, 
> PHOENIX-4534_v3.patch
>
>
> If we delete and upsert again the same row, the corresponding index has a 
> null value. 
> {noformat}
> 0: jdbc:phoenix:> create table a (id integer primary key, f float);
> No rows affected (2.272 seconds)
> 0: jdbc:phoenix:> create index i1 on a (f);
> No rows affected (5.769 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.021 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> | 0.5  | 1|
> +--+--+
> 1 row selected (0.016 seconds)
> 0: jdbc:phoenix:> delete from a where id = 1;
> 1 row affected (0.009 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> +--+--+
> No rows selected (0.015 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +---+--+
> |  0:F  | :ID  |
> +---+--+
> | null  | 1|
> +---+--+
> 1 row selected (0.013 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4685) Parallel writes continuously to indexed table failing with OOME very quickly in 5.x branch

2018-04-09 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431344#comment-16431344
 ] 

Sergey Soldatov commented on PHOENIX-4685:
--

I believe it's all related. Actually, I mentioned both. The last problem: 
when we open a region we create a separate connection to get the admin, so we 
easily hit the 60 max client connections to ZK when, for example, creating a 
table with 60+ salt buckets. And that may be the reason why we run out of 
threads - for each connection we also create a number of threads.  
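
To illustrate the shape of the fix (assumed accessor names, using the standard 
HBase client Connection/Admin types; not the actual Phoenix code), the 
per-region-open path should reuse a shared connection instead of opening a new 
one just to obtain an Admin:

{code:java}
// Hedged sketch, illustrative only. Instead of
//   Connection conn = ConnectionFactory.createConnection(conf); // one per region open
// reuse a process-wide connection, so that N salt buckets do not turn
// into N ZooKeeper client connections (and N client thread pools).
Connection shared = env.getConnection();   // assumed shared-connection accessor
try (Admin admin = shared.getAdmin()) {    // Admin is cheap; Connection is not
    // ... per-region-open work that needs the admin ...
}
{code}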

> Parallel writes continuously to indexed table failing with OOME very quickly 
> in 5.x branch
> --
>
> Key: PHOENIX-4685
> URL: https://issues.apache.org/jira/browse/PHOENIX-4685
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4685_jstack
>
>
> Currently trying to write data to indexed table failing with OOME where 
> unable to create native threads. But it's working fine with 4.7.x branches. 
> Found many threads created for meta lookup and shared threads and no space to 
> create threads. This is happening even with short circuit writes enabled.
> {noformat}
> 2018-04-08 13:06:04,747 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=9,queue=0,port=16020] 
> index.PhoenixIndexFailurePolicy: handleFailure failed
> java.io.IOException: java.lang.reflect.UndeclaredThrowableException
> at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:185)
> at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailureWithExceptions(PhoenixIndexFailurePolicy.java:217)
> at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:143)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:160)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:144)
> at 
> org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:632)
> at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:607)
> at 
> org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:590)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1037)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1034)
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:540)
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:614)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1034)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.doPostOpCleanupForMiniBatch(HRegion.java:3533)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:3914)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3822)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1027)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:959)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:922)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2666)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42014)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.reflect.UndeclaredThrowableException
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1761)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:448)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:429)
> at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcce

[jira] [Commented] (PHOENIX-4682) UngroupedAggregateRegionObserver preCompactScannerOpen hook should not throw exceptions

2018-04-05 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16426616#comment-16426616
 ] 

Sergey Soldatov commented on PHOENIX-4682:
--

[~vincentpoon] could you please restore the import of 
org.apache.phoenix.util.ByteUtil in MutableIndexIT.java? Its removal breaks 
the build.

> UngroupedAggregateRegionObserver preCompactScannerOpen hook should not throw 
> exceptions
> ---
>
> Key: PHOENIX-4682
> URL: https://issues.apache.org/jira/browse/PHOENIX-4682
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-4682.master.v1.patch, 
> PHOENIX-4682.v2.0.98.patch, PHOENIX-4682.v2.master.patch
>
>
> TableNotFoundException in the preCompactScannerOpen hook can lead to RS abort.
> Some tables might have the phoenix coprocessor loaded but not be actual 
> Phoenix tables (i.e. have a row in SYSTEM.CATALOG).  We should ignore these 
> Exceptions instead of throwing them.
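
A minimal sketch of the handling described above (illustrative, with an 
assumed helper and logger; not the actual patch):

{code:java}
// Hedged sketch, not the actual patch: a TableNotFoundException here
// only means the region carries the Phoenix coprocessor without a
// SYSTEM.CATALOG row, so log it and fall back to the default scanner
// rather than letting the exception abort the region server.
try {
    return openPhoenixCompactionScanner(scanner); // assumed helper
} catch (TableNotFoundException e) {
    LOGGER.warn("Not a Phoenix table; using the default compaction scanner", e);
    return scanner; // keep the compaction going with the original scanner
}
{code}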



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4669) NoSuchColumnFamilyException when creating index on views that are built on tables which have named column family

2018-04-03 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16424598#comment-16424598
 ] 

Sergey Soldatov commented on PHOENIX-4669:
--

[~brfrn169] LGTM. Just one thing: could you please name the unit test in some 
reasonable way, without the bug number? 

> NoSuchColumnFamilyException when creating index on views that are built on 
> tables which have named column family
> 
>
> Key: PHOENIX-4669
> URL: https://issues.apache.org/jira/browse/PHOENIX-4669
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4669-UT.patch, PHOENIX-4669-v2.patch, 
> PHOENIX-4669.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with specified column family
> {code}
> CREATE TABLE TBL (COL1 VARCHAR PRIMARY KEY, CF.COL2 VARCHAR)
> {code}
> 2. Upsert data into the table
> {code}
> UPSERT INTO TBL VALUES ('AAA','BBB')
> {code}
> 3. Create a view on the table
> {code}
> CREATE VIEW VW AS SELECT * FROM TBL
> {code}
> 4. Create a index on the view
> {code}
> CREATE INDEX IDX ON VW (CF.COL2)
> {code}
> By following the above steps, I faced the following error.
> {code}
> Exception in thread "main" org.apache.phoenix.execute.CommitException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> Column family 0 does not exist in region 
> _IDX_TBL,,1521731699609.99c9a48534aeca079c1ff614293fd13a. in table 
> '_IDX_TBL', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder',
>  METADATA => {'IS_VIEW_INDEX_TABLE' => '\x01'}}, {NAME => 'CF', BLOOMFILTER 
> => 'NONE', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 
> 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 
> 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
> REPLICATION_SCOPE => '0'}
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:903)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:822)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2376)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36621)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> : 1 time, servers with issues: 10.0.1.3,57208,1521731670016
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1107)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1432)
>   at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1273)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:663)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:659)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:659)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3513)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1362)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1674)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:398)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:381)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:

[jira] [Commented] (PHOENIX-2715) Query Log

2018-03-28 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16418488#comment-16418488
 ] 

Sergey Soldatov commented on PHOENIX-2715:
--

[~an...@apache.org] At first glance it looks reasonable. But it would be nice 
to test it in a real environment. Could you please adapt it to the master 
branch? I tried it with HBase branch-2, but it seems that something is wrong 
with my env and HBase itself doesn't work properly (I'm looking into whether it 
was caused by some recent commits or it's a problem on my side).

> Query Log
> -
>
> Key: PHOENIX-2715
> URL: https://issues.apache.org/jira/browse/PHOENIX-2715
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Nick Dimiduk
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: PHOENIX-2715.patch
>
>
> One useful feature of other database systems is the query log. It allows the 
> DBA to review the queries run, who's run them, time taken, &c. This serves 
> both as an audit and also as a source of "ground truth" for performance 
> optimization. For instance, which columns should be indexed. It may also 
> serve as the foundation for automated performance recommendations/actions.
> What queries are being run is the first piece. Having this data tied into 
> tracing results and perhaps client-side metrics (PHOENIX-1819) becomes very 
> useful.
> This might take the form of clients writing data to a new system table, but 
> other implementation suggestions are welcome.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4669) NoSuchColumnFamilyException when creating index on views that are built on tables which have named column family

2018-03-28 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16418385#comment-16418385
 ] 

Sergey Soldatov commented on PHOENIX-4669:
--

[~brfrn169] We should add all column families that the parent table has, as 
well as the default '0', because if we have more than one index on a view, they 
will all be kept in the same physical table, so we have to create all column 
families. A simple test case to check:
{noformat}
create table a (i1 integer primary key, c2.i2 integer, c3.i2 integer, c4.i2 
integer);
create view v1 as select * from a where c2.i2 = 1;
upsert into v1 (i1, c3.i2, c4.i2 ) values (1,1,1);
create index i1 on v1 (c3.i2);
create index i2 on v1 (c3.i2) include (c4.i2);
upsert into v1 (i1, c3.i2, c4.i2 ) values (2,2,2);
{noformat}
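
The rule above, as a small illustrative sketch (class and method names are assumed, not the patch):
{code:java}
import java.util.LinkedHashSet;
import java.util.Set;

// Sketch only: the shared physical table backing view indexes needs the
// default family '0' plus every family of the parent table, since all
// indexes on views over that table end up in the same physical table.
class ViewIndexFamiliesSketch {
    static Set<String> familiesForSharedIndexTable(Set<String> parentFamilies) {
        Set<String> families = new LinkedHashSet<>();
        families.add("0");               // Phoenix default column family
        families.addAll(parentFamilies); // e.g. C2, C3, C4 from the test case
        return families;
    }
}
{code}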

> NoSuchColumnFamilyException when creating index on views that are built on 
> tables which have named column family
> 
>
> Key: PHOENIX-4669
> URL: https://issues.apache.org/jira/browse/PHOENIX-4669
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4669-UT.patch, PHOENIX-4669.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with specified column family
> {code}
> CREATE TABLE TBL (COL1 VARCHAR PRIMARY KEY, CF.COL2 VARCHAR)
> {code}
> 2. Upsert data into the table
> {code}
> UPSERT INTO TBL VALUES ('AAA','BBB')
> {code}
> 3. Create a view on the table
> {code}
> CREATE VIEW VW AS SELECT * FROM TBL
> {code}
> 4. Create an index on the view
> {code}
> CREATE INDEX IDX ON VW (CF.COL2)
> {code}
> By following the above steps, I faced the following error.
> {code}
> Exception in thread "main" org.apache.phoenix.execute.CommitException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> Column family 0 does not exist in region 
> _IDX_TBL,,1521731699609.99c9a48534aeca079c1ff614293fd13a. in table 
> '_IDX_TBL', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder',
>  METADATA => {'IS_VIEW_INDEX_TABLE' => '\x01'}}, {NAME => 'CF', BLOOMFILTER 
> => 'NONE', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 
> 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 
> 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
> REPLICATION_SCOPE => '0'}
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:903)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:822)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2376)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36621)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> : 1 time, servers with issues: 10.0.1.3,57208,1521731670016
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1107)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1432)
>   at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1273)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:663)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:659)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:659)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3513)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1362)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1674)
>

[jira] [Commented] (PHOENIX-4677) Commons-cli needs to be listed as dependency.

2018-03-28 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16418104#comment-16418104
 ] 

Sergey Soldatov commented on PHOENIX-4677:
--

+1 LGTM

> Commons-cli needs to be listed as dependency.
> -
>
> Key: PHOENIX-4677
> URL: https://issues.apache.org/jira/browse/PHOENIX-4677
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Blocker
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4677.001.patch
>
>
> After HBase upgraded to hbase-thirdparty 2.1.0 via HBASE-20223, it shaded its 
> dependency on commons-cli. Phoenix has apparently be transitively using this 
> dependency without explicitly declaring it.
> We need to own the dependencies that we require.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4678) IndexScrutinyTool generates malformed query due to incorrect table name(s)

2018-03-28 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16418103#comment-16418103
 ] 

Sergey Soldatov commented on PHOENIX-4678:
--

Yeah, I see. Well, the fix makes sense. Also, we can remove the check for 
schema.isEmpty() from the rest of the code (I saw a couple of such places).
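
For reference, the fix idea reduces to treating an empty schema name like a null one; a simplified sketch (not the actual SchemaUtil code, which does more):
{code:java}
// Sketch only: an empty schema must not produce a leading dot such as ".J"
// or ".INDEX1" in the generated query.
class QualifiedNameSketch {
    static String getQualifiedTableName(String schemaName, String tableName) {
        if (schemaName == null || schemaName.isEmpty()) {
            return tableName;                 // "J"
        }
        return schemaName + "." + tableName;  // "MY_SCHEMA.J"
    }
}
{code}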

> IndexScrutinyTool generates malformed query due to incorrect table name(s)
> --
>
> Key: PHOENIX-4678
> URL: https://issues.apache.org/jira/browse/PHOENIX-4678
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4678.diff
>
>
> {noformat}
> HADOOP_CLASSPATH="/usr/local/lib/hbase/conf:$(hbase mapredcp)" hadoop jar 
> /usr/local/lib/phoenix-5.0.0-SNAPSHOT/phoenix-5.0.0-SNAPSHOT-client.jar 
> org.apache.phoenix.mapreduce.index.IndexScrutinyTool -dt J -it 
> INDEX1{noformat}
> This ends up running queries like {{SELECT ... FROM .J}} and {{SELECT ... 
> FROM .INDEX1}}.
> This is because SchemaUtil.getQualifiedTableName is not properly handling an 
> empty schema name, only a null schema name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4678) IndexScrutinyTool generates malformed query due to incorrect table name(s)

2018-03-28 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417971#comment-16417971
 ] 

Sergey Soldatov commented on PHOENIX-4678:
--

That's weird. According to the source:
{noformat}
final String schemaName = cmdLine.getOptionValue(SCHEMA_NAME_OPTION.getOpt());
{noformat}
getOptionValue would return null if no schema is provided on the command line:
{noformat}
public String getOptionValue(String opt) {
    String[] values = this.getOptionValues(opt);
    return values == null ? null : values[0];
}
{noformat}

How does it happen that it's not null?


> IndexScrutinyTool generates malformed query due to incorrect table name(s)
> --
>
> Key: PHOENIX-4678
> URL: https://issues.apache.org/jira/browse/PHOENIX-4678
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4678.diff
>
>
> {noformat}
> HADOOP_CLASSPATH="/usr/local/lib/hbase/conf:$(hbase mapredcp)" hadoop jar 
> /usr/local/lib/phoenix-5.0.0-SNAPSHOT/phoenix-5.0.0-SNAPSHOT-client.jar 
> org.apache.phoenix.mapreduce.index.IndexScrutinyTool -dt J -it 
> INDEX1{noformat}
> This ends up running queries like {{SELECT ... FROM .J}} and {{SELECT ... 
> FROM .INDEX1}}.
> This is because SchemaUtil.getQualifiedTableName is not properly handling an 
> empty schema name, only a null schema name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4669) NoSuchColumnFamilyException when creating index on views that are built on tables which have named column family

2018-03-23 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4669:
-
Affects Version/s: 4.14.0

> NoSuchColumnFamilyException when creating index on views that are built on 
> tables which have named column family
> 
>
> Key: PHOENIX-4669
> URL: https://issues.apache.org/jira/browse/PHOENIX-4669
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4669-UT.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with specified column family
> {code}
> CREATE TABLE TBL (COL1 VARCHAR PRIMARY KEY, CF.COL2 VARCHAR)
> {code}
> 2. Upsert data into the table
> {code}
> UPSERT INTO TBL VALUES ('AAA','BBB')
> {code}
> 3. Create a view on the table
> {code}
> CREATE VIEW VW AS SELECT * FROM TBL
> {code}
> 4. Create an index on the view
> {code}
> CREATE INDEX IDX ON VW (CF.COL2)
> {code}
> By following the above steps, I faced the following error.
> {code}
> Exception in thread "main" org.apache.phoenix.execute.CommitException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> Column family 0 does not exist in region 
> _IDX_TBL,,1521731699609.99c9a48534aeca079c1ff614293fd13a. in table 
> '_IDX_TBL', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder',
>  METADATA => {'IS_VIEW_INDEX_TABLE' => '\x01'}}, {NAME => 'CF', BLOOMFILTER 
> => 'NONE', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 
> 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 
> 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
> REPLICATION_SCOPE => '0'}
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:903)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:822)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2376)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36621)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> : 1 time, servers with issues: 10.0.1.3,57208,1521731670016
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1107)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1432)
>   at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1273)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:663)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:659)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:659)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3513)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1362)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1674)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:398)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:381)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:380)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:368)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1794)
>   at Case00162740.

[jira] [Commented] (PHOENIX-4661) Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: Table qualifier must not be empty"

2018-03-21 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16408657#comment-16408657
 ] 

Sergey Soldatov commented on PHOENIX-4661:
--

I have a strong feeling that the changes in PhoenixAccessController.java are 
not related to this issue and need to be reverted.
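
Judging by the stack trace in the description, the failure is a TableName built from an empty qualifier on the second DROP; a rough sketch of the guard direction (shape assumed, not the actual MetaDataEndpointImpl change):
{code:java}
// Sketch only: when DROP TABLE IF EXISTS finds no catalog row, report "not
// found" before constructing a physical table name; an empty qualifier is
// what threw IllegalArgumentException above.
class DropTableSketch {
    static String dropTable(String qualifierFromCatalog) {
        if (qualifierFromCatalog == null || qualifierFromCatalog.isEmpty()) {
            return "TABLE_NOT_FOUND"; // IF EXISTS: nothing to do
        }
        return "dropped " + qualifierFromCatalog;
    }
}
{code}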

> Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: 
> Table qualifier must not be empty"
> 
>
> Key: PHOENIX-4661
> URL: https://issues.apache.org/jira/browse/PHOENIX-4661
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4661.patch, PHOENIX-4661_v1.patch, 
> PHOENIX-4661_v2.patch
>
>
> Noticed this when trying to run the python tests against a 5.0 install
> {code:java}
> > create table josh(pk varchar not null primary key);
> > drop table if exists josh;
> > drop table if exists josh;{code}
> We'd expect the first two commands to successfully execute, and the third to 
> do nothing. However, the third command fails:
> {code:java}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2034)
>     at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
>     at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8005)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2394)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2376)
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41556)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.IllegalArgumentException: Table qualifier must not be 
> empty
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:186)
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:156)
>     at org.apache.hadoop.hbase.TableName.<init>(TableName.java:346)
>     at 
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:382)
>     at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:443)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1989)
>     ... 9 more
>     at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:122)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1301)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1264)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.dropTable(ConnectionQueryServicesImpl.java:1515)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2877)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2804)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropTableStatement$1.execute(PhoenixStatement.java:1117)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:366)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1758)
>     at sqlline.Commands.execute(Commands.java:822)
>     at sqlline.Commands.sql(Commands.java:732)
>     at sqlline.SqlLine.dispatch(SqlLine.java:813)
>     at sqlline.SqlLine.begin(SqlLine.java:686)
>     at sqlline.SqlLine.start(SqlLine.java:398)
>     at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOExcepti

[jira] [Commented] (PHOENIX-4661) Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: Table qualifier must not be empty"

2018-03-20 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16406593#comment-16406593
 ] 

Sergey Soldatov commented on PHOENIX-4661:
--

[~an...@apache.org] Yep, I was thinking about checking other calls of loadTable 
as well. +1 for your patch.

> Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: 
> Table qualifier must not be empty"
> 
>
> Key: PHOENIX-4661
> URL: https://issues.apache.org/jira/browse/PHOENIX-4661
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4661.patch, PHOENIX-4661_v1.patch
>
>
> Noticed this when trying to run the python tests against a 5.0 install
> {code:java}
> > create table josh(pk varchar not null primary key);
> > drop table if exists josh;
> > drop table if exists josh;{code}
> We'd expect the first two commands to successfully execute, and the third to 
> do nothing. However, the third command fails:
> {code:java}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2034)
>     at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
>     at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8005)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2394)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2376)
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41556)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.IllegalArgumentException: Table qualifier must not be 
> empty
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:186)
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:156)
>     at org.apache.hadoop.hbase.TableName.<init>(TableName.java:346)
>     at 
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:382)
>     at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:443)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1989)
>     ... 9 more
>     at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:122)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1301)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1264)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.dropTable(ConnectionQueryServicesImpl.java:1515)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2877)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2804)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropTableStatement$1.execute(PhoenixStatement.java:1117)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:366)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1758)
>     at sqlline.Commands.execute(Commands.java:822)
>     at sqlline.Commands.sql(Commands.java:732)
>     at sqlline.SqlLine.dispatch(SqlLine.java:813)
>     at sqlline.SqlLine.begin(SqlLine.java:686)
>     at sqlline.SqlLine.start(SqlLine.java:398)
>     at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> org.apache.phoenix.c

[jira] [Updated] (PHOENIX-4661) Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: Table qualifier must not be empty"

2018-03-20 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4661:
-
Attachment: PHOENIX-4661.patch

> Repeatedly issuing DROP TABLE fails with "java.lang.IllegalArgumentException: 
> Table qualifier must not be empty"
> 
>
> Key: PHOENIX-4661
> URL: https://issues.apache.org/jira/browse/PHOENIX-4661
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4661.patch
>
>
> Noticed this when trying to run the python tests against a 5.0 install
> {code:java}
> > create table josh(pk varchar not null primary key);
> > drop table if exists josh;
> > drop table if exists josh;{code}
> We'd expect the first two commands to successfully execute, and the third to 
> do nothing. However, the third command fails:
> {code:java}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2034)
>     at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
>     at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8005)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2394)
>     at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2376)
>     at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41556)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>     at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.IllegalArgumentException: Table qualifier must not be 
> empty
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:186)
>     at 
> org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:156)
>     at org.apache.hadoop.hbase.TableName.<init>(TableName.java:346)
>     at 
> org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:382)
>     at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:443)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1989)
>     ... 9 more
>     at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:122)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1301)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1264)
>     at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.dropTable(ConnectionQueryServicesImpl.java:1515)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2877)
>     at 
> org.apache.phoenix.schema.MetaDataClient.dropTable(MetaDataClient.java:2804)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropTableStatement$1.execute(PhoenixStatement.java:1117)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:366)
>     at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1758)
>     at sqlline.Commands.execute(Commands.java:822)
>     at sqlline.Commands.sql(Commands.java:732)
>     at sqlline.SqlLine.dispatch(SqlLine.java:813)
>     at sqlline.SqlLine.begin(SqlLine.java:686)
>     at sqlline.SqlLine.start(SqlLine.java:398)
>     at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: JOSH: Table qualifier must not 
> be empty
>     at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>     at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:2034)
>     at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataServ

[jira] [Commented] (PHOENIX-4620) Fix compilation issues with 2.0.0-beta-2

2018-03-15 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16400765#comment-16400765
 ] 

Sergey Soldatov commented on PHOENIX-4620:
--

+1 LGTM

> Fix compilation issues with 2.0.0-beta-2
> 
>
> Key: PHOENIX-4620
> URL: https://issues.apache.org/jira/browse/PHOENIX-4620
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Josh Elser
>Priority: Major
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4620.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4576) Fix LocalIndexSplitMergeIT tests failing in master branch

2018-03-13 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16397658#comment-16397658
 ] 

Sergey Soldatov commented on PHOENIX-4576:
--

Looks good to me. 

> Fix LocalIndexSplitMergeIT tests failing in master branch
> -
>
> Key: PHOENIX-4576
> URL: https://issues.apache.org/jira/browse/PHOENIX-4576
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4576.patch, PHOENIX-4576_v2.patch
>
>
> Currenty LocalIndexSplitMergeIT#testLocalIndexScanAfterRegionsMerge is 
> failing in master branch. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4646) The data exceeds the max capacity for the data type error for valid scenarios.

2018-03-12 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16396568#comment-16396568
 ] 

Sergey Soldatov commented on PHOENIX-4646:
--

[~jamestaylor] Well, that's getting interesting, since we also have to support 
assigning char to varchar and varchar to char. The postgresql thread from 
PHOENIX-1145 is quite interesting (just in case, the new link to it is 
https://www.postgresql.org/message-id/A737B7A37273E048B164557ADEF4A58B0579A7AB%40ntex2010a.host.magwien.gv.at
 ) and we need to define how we treat all combinations (char/varchar/string 
constant) in terms of trailing spaces. I think that at the moment we can fix 
this particular case, because it's not about trailing characters but about the 
real length of the value, and resolve the rest as part of PHOENIX-1145 (i.e., 
decide whether we always trim strings or make it configurable). WDYT?

> The data exceeds the max capacity for the data type error for valid scenarios.
> --
>
> Key: PHOENIX-4646
> URL: https://issues.apache.org/jira/browse/PHOENIX-4646
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4646.patch
>
>
> Here is an example:
> {noformat}
> create table test_trim_source(name varchar(160) primary key, id varchar(120), 
> address varchar(160)); 
> create table test_trim_target(name varchar(160) primary key, id varchar(10), 
> address 
>  varchar(10));
> upsert into test_trim_source values('test','test','test');
> upsert into test_trim_target select * from test_trim_source;
> {noformat}
> It fails with 
> {noformat}
> Error: ERROR 206 (22003): The data exceeds the max capacity for the data 
> type. value='test' columnName=ID (state=22003,code=206)
> java.sql.SQLException: ERROR 206 (22003): The data exceeds the max capacity 
> for the data type. value='test' columnName=ID
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:489)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:165)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:149)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:116)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1261)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1203)
>   at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
>   at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$ClientUpsertSelectMutationPlan.execute(UpsertCompiler.java:1300)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:398)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:381)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:380)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:368)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1794)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:813)
>   at sqlline.SqlLine.begin(SqlLine.java:686)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: java.sql.SQLException: ERROR 206 (22003): The data exceeds the max 
> capacity for the data type. value='test' columnName=ID
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:489)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.upsertSelect(UpsertCompiler.java:235)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$UpsertingParallelIteratorFactory.mutate(UpsertCompiler.java:284)
>   at 
> org.apache.phoenix.compile.MutatingParallelIteratorFactory.newIterator(MutatingParallelIteratorFactory.java:59)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:121)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:113)
>   at java.util.

[jira] [Commented] (PHOENIX-4646) The data exceeds the max capacity for the data type error for valid scenarios.

2018-03-08 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16391921#comment-16391921
 ] 

Sergey Soldatov commented on PHOENIX-4646:
--

[~jamestaylor] Sure, actually that's a WIP patch and I'm going to include tests 
as well. For CHAR this is not applicable since it always has a fixed length. 
For DECIMAL, the isSizeCompatible checks for length and scale look good, but I 
will double-check with real scenarios.

> The data exceeds the max capacity for the data type error for valid scenarios.
> --
>
> Key: PHOENIX-4646
> URL: https://issues.apache.org/jira/browse/PHOENIX-4646
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4646.patch
>
>
> Here is an example:
> {noformat}
> create table test_trim_source(name varchar(160) primary key, id varchar(120), 
> address varchar(160)); 
> create table test_trim_target(name varchar(160) primary key, id varchar(10), 
> address 
>  varchar(10));
> upsert into test_trim_source values('test','test','test');
> upsert into test_trim_target select * from test_trim_source;
> {noformat}
> It fails with 
> {noformat}
> Error: ERROR 206 (22003): The data exceeds the max capacity for the data 
> type. value='test' columnName=ID (state=22003,code=206)
> java.sql.SQLException: ERROR 206 (22003): The data exceeds the max capacity 
> for the data type. value='test' columnName=ID
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:489)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:165)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:149)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:116)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1261)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1203)
>   at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
>   at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$ClientUpsertSelectMutationPlan.execute(UpsertCompiler.java:1300)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:398)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:381)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:380)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:368)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1794)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:813)
>   at sqlline.SqlLine.begin(SqlLine.java:686)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: java.sql.SQLException: ERROR 206 (22003): The data exceeds the max 
> capacity for the data type. value='test' columnName=ID
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:489)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.upsertSelect(UpsertCompiler.java:235)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$UpsertingParallelIteratorFactory.mutate(UpsertCompiler.java:284)
>   at 
> org.apache.phoenix.compile.MutatingParallelIteratorFactory.newIterator(MutatingParallelIteratorFactory.java:59)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:121)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:113)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat} 
> The problem is that in PVarchar.isSizeCompatible we ignore the actual length 
> of the value if the source column has a max size specified.

[jira] [Updated] (PHOENIX-4646) The data exceeds the max capacity for the data type error for valid scenarios.

2018-03-08 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4646:
-
Attachment: PHOENIX-4646.patch

> The data exceeds the max capacity for the data type error for valid scenarios.
> --
>
> Key: PHOENIX-4646
> URL: https://issues.apache.org/jira/browse/PHOENIX-4646
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4646.patch
>
>
> Here is an example:
> {noformat}
> create table test_trim_source(name varchar(160) primary key, id varchar(120), 
> address varchar(160)); 
> create table test_trim_target(name varchar(160) primary key, id varchar(10), 
> address 
>  varchar(10));
> upsert into test_trim_source values('test','test','test');
> upsert into test_trim_target select * from test_trim_source;
> {noformat}
> It fails with 
> {noformat}
> Error: ERROR 206 (22003): The data exceeds the max capacity for the data 
> type. value='test' columnName=ID (state=22003,code=206)
> java.sql.SQLException: ERROR 206 (22003): The data exceeds the max capacity 
> for the data type. value='test' columnName=ID
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:489)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:165)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:149)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:116)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1261)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1203)
>   at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
>   at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$ClientUpsertSelectMutationPlan.execute(UpsertCompiler.java:1300)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:398)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:381)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:380)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:368)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1794)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:813)
>   at sqlline.SqlLine.begin(SqlLine.java:686)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: java.sql.SQLException: ERROR 206 (22003): The data exceeds the max 
> capacity for the data type. value='test' columnName=ID
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:489)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.upsertSelect(UpsertCompiler.java:235)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$UpsertingParallelIteratorFactory.mutate(UpsertCompiler.java:284)
>   at 
> org.apache.phoenix.compile.MutatingParallelIteratorFactory.newIterator(MutatingParallelIteratorFactory.java:59)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:121)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:113)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat} 
> The problem is that in PVarchar.isSizeCompatible we ignore the actual length 
> of the value if the source column has a max size specified.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4646) The data exceeds the max capacity for the data type error for valid scenarios.

2018-03-08 Thread Sergey Soldatov (JIRA)
Sergey Soldatov created PHOENIX-4646:


 Summary: The data exceeds the max capacity for the data type error 
for valid scenarios.
 Key: PHOENIX-4646
 URL: https://issues.apache.org/jira/browse/PHOENIX-4646
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.0
Reporter: Sergey Soldatov
Assignee: Sergey Soldatov
 Fix For: 4.14.0


Here is an example:
{noformat}
create table test_trim_source(name varchar(160) primary key, id varchar(120), 
address varchar(160)); 
create table test_trim_target(name varchar(160) primary key, id varchar(10), 
address 
 varchar(10));
upsert into test_trim_source values('test','test','test');
upsert into test_trim_target select * from test_trim_source;
{noformat}
It fails with 
{noformat}
Error: ERROR 206 (22003): The data exceeds the max capacity for the data type. 
value='test' columnName=ID (state=22003,code=206)
java.sql.SQLException: ERROR 206 (22003): The data exceeds the max capacity for 
the data type. value='test' columnName=ID
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:489)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
at 
org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:165)
at 
org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:149)
at 
org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:116)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1261)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1203)
at 
org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
at 
org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
at 
org.apache.phoenix.compile.UpsertCompiler$ClientUpsertSelectMutationPlan.execute(UpsertCompiler.java:1300)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:398)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:381)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:380)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:368)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1794)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:813)
at sqlline.SqlLine.begin(SqlLine.java:686)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)
Caused by: java.sql.SQLException: ERROR 206 (22003): The data exceeds the max 
capacity for the data type. value='test' columnName=ID
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:489)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
at 
org.apache.phoenix.compile.UpsertCompiler.upsertSelect(UpsertCompiler.java:235)
at 
org.apache.phoenix.compile.UpsertCompiler$UpsertingParallelIteratorFactory.mutate(UpsertCompiler.java:284)
at 
org.apache.phoenix.compile.MutatingParallelIteratorFactory.newIterator(MutatingParallelIteratorFactory.java:59)
at 
org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:121)
at 
org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:113)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat} 

The problem is that in PVarchar.isSizeCompatible we ignore the actual length of 
the value if the source column has a max size specified.
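
The check the description points at, reduced to a hedged sketch (not the Phoenix code): size compatibility has to look at the actual value length, not only at the declared max length of the source column.
{code:java}
// Sketch only: 'test' (length 4) coming from a VARCHAR(120) column does fit
// into VARCHAR(10); comparing declared max lengths instead of the value
// length is what produced ERROR 206 above.
class SizeCheckSketch {
    static boolean isSizeCompatible(String value, int targetMaxLength) {
        return value == null || value.length() <= targetMaxLength;
    }

    public static void main(String[] args) {
        System.out.println(isSizeCompatible("test", 10)); // true
    }
}
{code}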



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4645) PhoenixStorageHandler doesn't handle correctly data/timestamp in push down predicate when engine is tez.

2018-03-07 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4645:
-
Attachment: PHOENIX-4645-wip.patch

> PhoenixStorageHandler doesn't handle correctly data/timestamp in push down 
> predicate when engine is tez. 
> -
>
> Key: PHOENIX-4645
> URL: https://issues.apache.org/jira/browse/PHOENIX-4645
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
>  Labels: HivePhoenix
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4645-wip.patch
>
>
> DDLs:
> {noformat}
> CREATE TABLE TEST_PHOENIX
> (
> PART_ID BIGINT NOT NULL,
> COMMIT_TIMESTAMP TIMESTAMP,
> CONSTRAINT pk PRIMARY KEY (PART_ID)
> )
> SALT_BUCKETS=9;
> CREATE EXTERNAL TABLE TEST_HIVE
> (
> PART_ID BIGINT,
> SOURCEDB_COMMIT_TIMESTAMP TIMESTAMP
> )
> STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
> TBLPROPERTIES
> (
> "phoenix.table.name" = "TEST_PHOENIX",
> "phoenix.zookeeper.quorum" = "localhost",
> "phoenix.zookeeper.znode.parent" = "/hbase",
> "phoenix.zookeeper.client.port" = "2181",
> "phoenix.rowkeys" = "PART_ID",
> "phoenix.column.mapping" = 
> "part_id:PART_ID,sourcedb_commit_timestamp:COMMIT_TIMESTAMP"
> );
> {noformat}
> Query :
> {noformat}
> hive> select * from TEST_HIVE2 where sourcedb_commit_timestamp between 
> '2018-03-01 01:00:00.000' and  '2018-03-20 01:00:00.000';
> OK
> Failed with exception java.io.IOException:java.lang.RuntimeException: 
> org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type 
> mismatch. TIMESTAMP and VARCHAR for "sourcedb_commit_timestamp" >= 
> '2018-03-01 01:00:00.000'
> {noformat}
> That happens because we don't use the mapped column name when we check whether 
> we need to apply the to_timestamp/to_date function. For the default mapping, 
> the regexp patterns don't take into account that the column name is double 
> quoted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4645) PhoenixStorageHandler doesn't handle correctly data/timestamp in push down predicate when engine is tez.

2018-03-07 Thread Sergey Soldatov (JIRA)
Sergey Soldatov created PHOENIX-4645:


 Summary: PhoenixStorageHandler doesn't handle correctly 
data/timestamp in push down predicate when engine is tez. 
 Key: PHOENIX-4645
 URL: https://issues.apache.org/jira/browse/PHOENIX-4645
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.0
Reporter: Sergey Soldatov
Assignee: Sergey Soldatov
 Fix For: 4.14.0


DDLs:
{noformat}
CREATE TABLE TEST_PHOENIX
(
PART_ID BIGINT NOT NULL,
COMMIT_TIMESTAMP TIMESTAMP,
CONSTRAINT pk PRIMARY KEY (PART_ID)
)
SALT_BUCKETS=9;
CREATE EXTERNAL TABLE TEST_HIVE
(
PART_ID BIGINT,
SOURCEDB_COMMIT_TIMESTAMP TIMESTAMP
)
STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
TBLPROPERTIES
(
"phoenix.table.name" = "TEST_PHOENIX",
"phoenix.zookeeper.quorum" = "localhost",
"phoenix.zookeeper.znode.parent" = "/hbase",
"phoenix.zookeeper.client.port" = "2181",
"phoenix.rowkeys" = "PART_ID",
"phoenix.column.mapping" = 
"part_id:PART_ID,sourcedb_commit_timestamp:COMMIT_TIMESTAMP"
);

{noformat}
Query :
{noformat}
hive> select * from TEST_HIVE2 where sourcedb_commit_timestamp between 
'2018-03-01 01:00:00.000' and  '2018-03-20 01:00:00.000';
OK
Failed with exception java.io.IOException:java.lang.RuntimeException: 
org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type 
mismatch. TIMESTAMP and VARCHAR for "sourcedb_commit_timestamp" >= '2018-03-01 
01:00:00.000'
{noformat}

That happens because we don't use the mapped column name when we check whether 
we need to apply the to_timestamp/to_date function. For the default mapping, the 
regexp patterns don't take into account that the column name is double quoted.
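
Both points, as an illustrative sketch (names and structure assumed, not the actual PhoenixStorageHandler code):
{code:java}
import java.util.Map;
import java.util.regex.Pattern;

// Sketch only. (1) The Phoenix-side type must be looked up through
// phoenix.column.mapping before deciding whether to wrap the literal in
// to_timestamp/to_date. (2) The regexp must accept the double-quoted column
// name that appears in the Tez-generated predicate.
class PushdownPatternSketch {
    static boolean needsTimestampWrapping(String hiveColumn,
                                          Map<String, String> columnMapping,
                                          Map<String, String> phoenixTypes) {
        // (1) resolve the Hive name to the Phoenix name before the type check
        String phoenixColumn = columnMapping.getOrDefault(hiveColumn, hiveColumn);
        String type = phoenixTypes.get(phoenixColumn);
        return "TIMESTAMP".equals(type) || "DATE".equals(type);
    }

    static Pattern comparisonPattern(String hiveColumn) {
        // (2) \"? : the column may be double quoted in the predicate
        return Pattern.compile(
                "\"?" + Pattern.quote(hiveColumn) + "\"?\\s*(>=|<=|=|>|<)");
    }

    public static void main(String[] args) {
        String predicate =
                "\"sourcedb_commit_timestamp\" >= '2018-03-01 01:00:00.000'";
        System.out.println(comparisonPattern("sourcedb_commit_timestamp")
                .matcher(predicate).find()); // true
    }
}
{code}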



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4608) Concurrent modification of bitset in ProjectedColumnExpression

2018-02-15 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16365218#comment-16365218
 ] 

Sergey Soldatov commented on PHOENIX-4608:
--

[~jamestaylor] That was an UPSERT SELECT, and it might be a clone of 
PHOENIX-4588. I will recheck tomorrow with the recent master for sure.

> Concurrent modification of bitset in ProjectedColumnExpression
> --
>
> Key: PHOENIX-4608
> URL: https://issues.apache.org/jira/browse/PHOENIX-4608
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4608-v1.patch
>
>
> In ProjectedColumnExpression we are using an instance of ValueBitSet to track 
> nulls during evaluate calls. We are using a single instance of 
> ProjectedColumnExpression per column across all threads running in parallel, 
> so it may happen that one thread calls bitSet.clear() while another thread is 
> using it in isNull at the same time, making the wrong assumption that the value 
> is null.  We saw that problem with a query like 
> {noformat}
> upsert into C select trim (A.ID), B.B From (select ID, SUM(1) as B from T1 
> group by ID) as B join T2 as A on A.ID = B.ID;  
> {noformat}
> During the execution the earlier-mentioned condition happens, we don't advance 
> from the char column (A.ID) to the long column (B.B), and we get an exception like
> {noformat}
> Error: ERROR 201 (22000): Illegal data. BIGINT value -6908486506036322272 
> cannot be cast to Integer without changing its value (state=22000,code=201) 
> java.sql.SQLException: ERROR 201 (22000): Illegal data. BIGINT value 
> -6908486506036322272 cannot be cast to Integer without changing its value 
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:442)
>  
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>  
> at 
> org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:129) 
> at 
> org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:118)
>  
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:107) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:771)
>  
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:714)
>  
> at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
>  
> at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
>  
> at 
> org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
>  
> at 
> org.apache.phoenix.compile.UpsertCompiler$2.execute(UpsertCompiler.java:797) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:343) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:331) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) 
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:329)
>  
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1440) 
> at sqlline.Commands.execute(Commands.java:822) 
> at sqlline.Commands.sql(Commands.java:732) 
> at sqlline.SqlLine.dispatch(SqlLine.java:808) 
> at sqlline.SqlLine.begin(SqlLine.java:681) 
> at sqlline.SqlLine.start(SqlLine.java:398) 
> at sqlline.SqlLine.main(SqlLine.java:292)
> {noformat}
> Fortunately, bitSet is the only field we continuously modify in that class, 
> so we may fix this problem by making it ThreadLocal. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4608) Concurrent modification of bitset in ProjectedColumnExpression

2018-02-14 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4608:
-
Description: 
In ProjectedColumnExpression we are using an instance of ValueBitSet to track 
nulls during evaluate calls. We are using a single instance of 
ProjectedColumnExpression per column across all threads running in parallel, so 
it may happen that one thread calls bitSet.clear() while another thread is using 
it in isNull at the same time, making the wrong assumption that the value is 
null.  We saw that problem with a query like 
{noformat}
upsert into C select trim (A.ID), B.B From (select ID, SUM(1) as B from T1 
group by ID) as B join T2 as A on A.ID = B.ID;  
{noformat}

During the execution the earlier-mentioned condition happens, we don't advance 
from the char column (A.ID) to the long column (B.B), and we get an exception like
{noformat}
Error: ERROR 201 (22000): Illegal data. BIGINT value -6908486506036322272 
cannot be cast to Integer without changing its value (state=22000,code=201) 
java.sql.SQLException: ERROR 201 (22000): Illegal data. BIGINT value 
-6908486506036322272 cannot be cast to Integer without changing its value 
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:442)
 
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
 
at org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:129) 
at 
org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:118)
 
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:107) 
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:771)
 
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:714)
 
at 
org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
 
at 
org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
 
at 
org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
 
at org.apache.phoenix.compile.UpsertCompiler$2.execute(UpsertCompiler.java:797) 
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:343) 
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:331) 
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) 
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:329)
 
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1440) 
at sqlline.Commands.execute(Commands.java:822) 
at sqlline.Commands.sql(Commands.java:732) 
at sqlline.SqlLine.dispatch(SqlLine.java:808) 
at sqlline.SqlLine.begin(SqlLine.java:681) 
at sqlline.SqlLine.start(SqlLine.java:398) 
at sqlline.SqlLine.main(SqlLine.java:292)
{noformat}

Fortunately, bitSet is the only field we continuously modify in that class, so 
we may fix this problem by making it ThreadLocal. 

  was:
In ProjectedColumnExpression we are using an instance of ValueBitSet to track 
nulls during evaluate calls. We are using a single instance of 
ProjectedColumnExpression per column across all threads running in parallel, so 
it may happen that one thread calls bitSet.clear() while another thread is using 
it in isNull at the same time, making the wrong assumption that the value is 
null.  We saw that problem with a query like 
{noformat}
upsert into C select trim (A.ID), B.B From (select ID, SUM(1) as B from T1 
group by ID) as B join T2 as A on A.ID = B.ID;  
{noformat}

During the execution the earlier-mentioned condition happens, we don't advance 
from the char column (A.ID) to the int column (B.B), and we get an exception like
{noformat}
Error: ERROR 201 (22000): Illegal data. BIGINT value -6908486506036322272 
cannot be cast to Integer without changing its value (state=22000,code=201) 
java.sql.SQLException: ERROR 201 (22000): Illegal data. BIGINT value 
-6908486506036322272 cannot be cast to Integer without changing its value 
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:442)
 
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
 
at org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:129) 
at 
org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:118)
 
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:107) 
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:771)
 
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:714)
 
at 
org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
 
at 
org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
 
at 
org.apache.phoenix.iterate.DelegateResultIterator.

[jira] [Updated] (PHOENIX-4608) Concurrent modification of bitset in ProjectedColumnExpression

2018-02-14 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4608:
-
Attachment: PHOENIX-4608-v1.patch

> Concurrent modification of bitset in ProjectedColumnExpression
> --
>
> Key: PHOENIX-4608
> URL: https://issues.apache.org/jira/browse/PHOENIX-4608
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4608-v1.patch
>
>
> In ProjectedColumnExpression we are using an instance of ValueBitSet to track 
> nulls during evaluate calls. We are using a single instance of 
> ProjectedColumnExpression per column across all threads running in parallel, 
> so it may happen that one thread calls bitSet.clear() while another thread is 
> using it in isNull at the same time, making the wrong assumption that the value 
> is null.  We saw that problem with a query like 
> {noformat}
> upsert into C select trim (A.ID), B.B From (select ID, SUM(1) as B from T1 
> group by ID) as B join T2 as A on A.ID = B.ID;  
> {noformat}
> During the execution the earlier-mentioned condition happens, we don't advance 
> from the char column (A.ID) to the int column (B.B), and we get an exception like
> {noformat}
> Error: ERROR 201 (22000): Illegal data. BIGINT value -6908486506036322272 
> cannot be cast to Integer without changing its value (state=22000,code=201) 
> java.sql.SQLException: ERROR 201 (22000): Illegal data. BIGINT value 
> -6908486506036322272 cannot be cast to Integer without changing its value 
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:442)
>  
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>  
> at 
> org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:129) 
> at 
> org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:118)
>  
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:107) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:771)
>  
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:714)
>  
> at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
>  
> at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
>  
> at 
> org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
>  
> at 
> org.apache.phoenix.compile.UpsertCompiler$2.execute(UpsertCompiler.java:797) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:343) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:331) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) 
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:329)
>  
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1440) 
> at sqlline.Commands.execute(Commands.java:822) 
> at sqlline.Commands.sql(Commands.java:732) 
> at sqlline.SqlLine.dispatch(SqlLine.java:808) 
> at sqlline.SqlLine.begin(SqlLine.java:681) 
> at sqlline.SqlLine.start(SqlLine.java:398) 
> at sqlline.SqlLine.main(SqlLine.java:292)
> {noformat}
> Fortunately, bitSet is the only field we continuously modify in that class, 
> so we may fix this problem by making it ThreadLocal. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4608) Concurrent modification of bitset in ProjectedColumnExpression

2018-02-14 Thread Sergey Soldatov (JIRA)
Sergey Soldatov created PHOENIX-4608:


 Summary: Concurrent modification of bitset in 
ProjectedColumnExpression
 Key: PHOENIX-4608
 URL: https://issues.apache.org/jira/browse/PHOENIX-4608
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.0
Reporter: Sergey Soldatov
Assignee: Sergey Soldatov
 Fix For: 4.14.0


In ProjectedColumnExpression we are using an instance of ValueBitSet to track 
nulls during evaluate calls. We are using a single instance of 
ProjectedColumnExpression per column across all threads running in parallel, so 
it may happen that one thread calls bitSet.clear() while another thread is using 
it in isNull at the same time, making the wrong assumption that the value is 
null.  We saw that problem with a query like 
{noformat}
upsert into C select trim (A.ID), B.B From (select ID, SUM(1) as B from T1 
group by ID) as B join T2 as A on A.ID = B.ID;  
{noformat}

During the execution the earlier-mentioned condition happens, we don't advance 
from the char column (A.ID) to the int column (B.B), and we get an exception like
{noformat}
Error: ERROR 201 (22000): Illegal data. BIGINT value -6908486506036322272 
cannot be cast to Integer without changing its value (state=22000,code=201) 
java.sql.SQLException: ERROR 201 (22000): Illegal data. BIGINT value 
-6908486506036322272 cannot be cast to Integer without changing its value 
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:442)
 
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
 
at org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:129) 
at 
org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:118)
 
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:107) 
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:771)
 
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:714)
 
at 
org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
 
at 
org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
 
at 
org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
 
at org.apache.phoenix.compile.UpsertCompiler$2.execute(UpsertCompiler.java:797) 
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:343) 
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:331) 
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) 
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:329)
 
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1440) 
at sqlline.Commands.execute(Commands.java:822) 
at sqlline.Commands.sql(Commands.java:732) 
at sqlline.SqlLine.dispatch(SqlLine.java:808) 
at sqlline.SqlLine.begin(SqlLine.java:681) 
at sqlline.SqlLine.start(SqlLine.java:398) 
at sqlline.SqlLine.main(SqlLine.java:292)
{noformat}

Fortunately, bitSet is the only field we continuously modify in that class, so 
we may fix this problem by making it ThreadLocal. 
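
A minimal, self-contained illustration of the ThreadLocal pattern described above (java.util.BitSet stands in for Phoenix's ValueBitSet; a sketch of the idea, not the actual patch):
{code}
import java.util.BitSet;

public class SharedExpressionSketch {
    // Before the fix: a single mutable bitset shared by every evaluating
    // thread, so one thread's clear() can race with another's isNull().
    // After: each thread lazily gets its own instance, so the mutations
    // made by one thread are never observed by another.
    private final ThreadLocal<BitSet> bitSet =
            ThreadLocal.withInitial(BitSet::new);

    boolean isNull(int position) {
        // Reads only this thread's copy -- no race with other evaluators.
        return !bitSet.get().get(position);
    }

    void markPresent(int position) {
        bitSet.get().set(position);
    }

    void clearForNextRow() {
        bitSet.get().clear();
    }
}
{code}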



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4526) PhoenixStorageHandler doesn't work with upper case in phoenix.rowkeys

2018-02-14 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363640#comment-16363640
 ] 

Sergey Soldatov commented on PHOENIX-4526:
--

phoenix.rowkeys specifies *Hive* columns. All Hive columns are lowercase and 
should be specified that way.
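
For example, for the table from the report below, the relevant properties would be spelled with lowercase Hive names (a minimal sketch):
{noformat}
"phoenix.rowkeys" = "user_id,married",
"phoenix.column.mapping" = "user_id:USER_ID,married:MARRIED"
{noformat}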

> PhoenixStorageHandler doesn't work with upper case in phoenix.rowkeys
> -
>
> Key: PHOENIX-4526
> URL: https://issues.apache.org/jira/browse/PHOENIX-4526
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Choi JaeHwan
>Priority: Major
>  Labels: HivePhoenix
> Attachments: PHOENIX-4526.patch
>
>
> If you write the phoenix rowkeys in uppercase, you will get the following error.
> Hive changes the field column names to lowercase, but the change is not 
> applied to the phoenix.rowkeys property.
> {code}
> CREATE TABLE `PROFILE_PHOENIX_CLONE4` (
>   USER_ID STRING COMMENT 'from deserializer'
>   ,MARRIED STRING COMMENT 'from deserializer'
>   ,USER_NAME STRING COMMENT 'from deserializer'
>   ,BIRTH STRING COMMENT 'from deserializer'
>   ,WEIGHT FLOAT COMMENT 'from deserializer'
>   ,HEIGHT DOUBLE COMMENT 'from deserializer'
>   ,CHILD STRING COMMENT 'from deserializer'
>   ,IS_MALE BOOLEAN COMMENT 'from deserializer'
>   ,PHONE STRING COMMENT 'from deserializer'
>   ,EMAIL STRING COMMENT 'from deserializer'
>   ,CREATE_TIME TIMESTAMP COMMENT 'from deserializer'
> ) COMMENT '한글 HBase 테이블'
> STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler' 
> TBLPROPERTIES (
>   "phoenix.table.name"="jackdb_PROFILE_PHOENIX_CLONE4"
>   ,"phoenix.zookeeper.quorum"="qa3.nexr.com,qa4.nexr.com,qa5.nexr.com"
>   ,"phoenix.rowkeys"="USER_ID,MARRIED"
>   ,"phoenix.zookeeper.client.port"="2181"
>   ,"phoenix.zookeeper.znode.parent"="/hbase"
>   
> ,"phoenix.column.mapping"="USER_ID:USER_ID,MARRIED:MARRIED,USER_NAME:USER_NAME,BIRTH:BIRTH,WEIGHT:WEIGHT,HEIGHT:HEIGHT,CHILD:CHILD,IS_MALE:IS_MALE,PHONE:PHONE,EMAIL:EMAIL,CREATE_TIME:CREATE_TIME"
>   ,"ndap.table.storageType"="PHOENIX"
>   ,"phoenix.table.options"="SALT_BUCKETS=10,DATA_BLOCK_ENCODING='DIFF'"
> )
> {code}
> {code}
> 2018-01-04T10:37:50,186 INFO  [HiveServer2-Background-Pool: Thread-10310]: 
> ql.Driver (Driver.java:execute(1735)) - Executing 
> command(queryId=hive_20180104103750_424baf0b-141a-450c-ae78-8f9be8a743a8): 
> CREATE TABLE `jackdb`.`PROFILE_PHOENIX_CLONE4` (
>   USER_ID STRING COMMENT 'from deserializer'
>   ,MARRIED STRING COMMENT 'from deserializer'
>   ,USER_NAME STRING COMMENT 'from deserializer'
>   ,BIRTH STRING COMMENT 'from deserializer'
>   ,WEIGHT FLOAT COMMENT 'from deserializer'
>   ,HEIGHT DOUBLE COMMENT 'from deserializer'
>   ,CHILD STRING COMMENT 'from deserializer'
>   ,IS_MALE BOOLEAN COMMENT 'from deserializer'
>   ,PHONE STRING COMMENT 'from deserializer'
>   ,EMAIL STRING COMMENT 'from deserializer'
>   ,CREATE_TIME TIMESTAMP COMMENT 'from deserializer'
> ) COMMENT '한글 HBase 테이블'
> STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler' 
> TBLPROPERTIES (
>   "phoenix.table.name"="jackdb_PROFILE_PHOENIX_CLONE4"
>   ,"phoenix.zookeeper.quorum"="qa3.nexr.com,qa4.nexr.com,qa5.nexr.com"
>   ,"phoenix.rowkeys"="USER_ID,MARRIED"
>   ,"phoenix.zookeeper.client.port"="2181"
>   ,"phoenix.zookeeper.znode.parent"="/hbase"
>   
> ,"phoenix.column.mapping"="USER_ID:USER_ID,MARRIED:MARRIED,USER_NAME:USER_NAME,BIRTH:BIRTH,WEIGHT:WEIGHT,HEIGHT:HEIGHT,CHILD:CHILD,IS_MALE:IS_MALE,PHONE:PHONE,EMAIL:EMAIL,CREATE_TIME:CREATE_TIME"
>   ,"ndap.table.storageType"="PHOENIX"
>   ,"phoenix.table.options"="SALT_BUCKETS=10,DATA_BLOCK_ENCODING='DIFF'"
> )
> 2018-01-04T10:37:50,189 INFO  [HiveServer2-Background-Pool: Thread-10310]: 
> ql.Driver (Driver.java:launchTask(2181)) - Starting task [Stage-0:DDL] in 
> serial mode
> 2018-01-04T10:37:50,224 INFO  [HiveServer2-Background-Pool: Thread-10310]: 
> plan.CreateTableDesc (CreateTableDesc.java:toTable(717)) - Use 
> StorageHandler-supplied org.apache.phoenix.hive.PhoenixSerDe for table 
> PROFILE_PHOENIX_CLONE4
> 2018-01-04T10:37:50,225 INFO  [HiveServer2-Background-Pool: Thread-10310]: 
> exec.DDLTask (DDLTask.java:createTable(4324)) - creating table 
> jackdb.PROFILE_PHOENIX_CLONE4 on null
> 2018-01-04T10:37:50,294 ERROR [HiveServer2-Background-Pool: Thread-10310]: 
> exec.DDLTask (DDLTask.java:failed(639)) - 
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.StringIndexOutOfBoundsException: String index out of range: -1
>   at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:862)
>   at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:867)
>   at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.j

[jira] [Commented] (PHOENIX-4423) Phoenix-hive compilation broken on >=Hive 2.3

2018-02-13 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363038#comment-16363038
 ] 

Sergey Soldatov commented on PHOENIX-4423:
--

Ah, hive-it is not published as an official artifact: 
https://repository.apache.org/content/repositories/releases/org/apache/hive/
I believe that was the main reason why we used our own clone of the test util class.

> Phoenix-hive compilation broken on >=Hive 2.3
> -
>
> Key: PHOENIX-4423
> URL: https://issues.apache.org/jira/browse/PHOENIX-4423
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4423.002.patch, PHOENIX-4423_wip1.patch
>
>
> HIVE-15167 removed an interface which we're using in Phoenix, which obviously 
> breaks compilation. We will need to figure out how to work with Hive 1.x, <2.3.0, 
> and >=2.3.0.
> FYI [~sergey.soldatov]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4423) Phoenix-hive compilation broken on >=Hive 2.3

2018-02-13 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362892#comment-16362892
 ] 

Sergey Soldatov commented on PHOENIX-4423:
--

Heh. There were some 'improvements' in HiveTestUtils compared to the default 
hive-it runner to get it working in our case for several MR/Tez jobs in the 
query (and joins are the place where we use it). Let me check it. 

> Phoenix-hive compilation broken on >=Hive 2.3
> -
>
> Key: PHOENIX-4423
> URL: https://issues.apache.org/jira/browse/PHOENIX-4423
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4423.002.patch, PHOENIX-4423_wip1.patch
>
>
> HIVE-15167 removed an interface which we're using in Phoenix, which obviously 
> breaks compilation. We will need to figure out how to work with Hive 1.x, <2.3.0, 
> and >=2.3.0.
> FYI [~sergey.soldatov]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4586) UPSERT SELECT doesn't take into account comparison operators for subqueries.

2018-02-07 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356346#comment-16356346
 ] 

Sergey Soldatov commented on PHOENIX-4586:
--

+1 from me as well. Great work, [~maryannxue]

> UPSERT SELECT doesn't take into account comparison operators for subqueries.
> --
>
> Key: PHOENIX-4586
> URL: https://issues.apache.org/jira/browse/PHOENIX-4586
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Assignee: Maryann Xue
>Priority: Critical
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4586.patch
>
>
> If an upsert select has a where condition that uses any comparison operator 
> (including ANY/SOME/etc), the whole WHERE clause is just ignored. Table:
> {noformat}
> create table T (id integer primary key, i1 integer);
> upsert into T values (1,1);
> upsert into T values (2,2);
> {noformat}
> A query that should not upsert anything, because the WHERE clause requires I1 
> to be greater than any value we already have as well as a non-existing 
> ID:
> {noformat}
> 0: jdbc:phoenix:> upsert into T select id, 4 from T where id = 3 AND i1 > 
> (select i1 from T);
> 2 rows affected (0.02 seconds)
> 0: jdbc:phoenix:> select * from T;
> +-+-+
> | ID  | I1  |
> +-+-+
> | 1   | 4   |
> | 2   | 4   |
> +-+-+
> 2 rows selected (0.014 seconds)
> {noformat}
> Now with ANY.  This should not upsert anything either, because the ID values are 
> [1,2] while the I1 values are all '4':
> {noformat}
> 0: jdbc:phoenix:> upsert into T select id, 5 from T where id = 2 AND i1 = ANY 
> (select ID from T);
> 2 rows affected (0.016 seconds)
> 0: jdbc:phoenix:> select * from T;
> +-+-+
> | ID  | I1  |
> +-+-+
> | 1   | 5   |
> | 2   | 5   |
> +-+-+
> 2 rows selected (0.013 seconds)
> {noformat}
> A similar query with IN works just fine:
> {noformat}
> 0: jdbc:phoenix:> upsert into T select id, 6 from T where id = 2 AND i1 IN 
> (select ID from T);
> No rows affected (0.094 seconds)
> 0: jdbc:phoenix:> select * from T;
> +-+-+
> | ID  | I1  |
> +-+-+
> | 1   | 5   |
> | 2   | 5   |
> +-+-+
> 2 rows selected (0.014 seconds)
> {noformat}
> The reason for this behavior is that for IN we convert the subselect to a 
> semi-join and execute the upsert on the client side.  For comparisons, we don't 
> perform any transformations, so the query is considered flat and is finally 
> executed on the server side.  Not sure why, but we also completely ignore the 
> second condition in the WHERE clause, and that may lead to serious data loss. 
> [~jamestaylor], [~maryannxue] any thoughts or suggestions on how to fix that 
> are really appreciated. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4586) UPSERT SELECT doesn't take into account comparison operators for subqueries.

2018-02-07 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356184#comment-16356184
 ] 

Sergey Soldatov commented on PHOENIX-4586:
--

[~maryannxue] Wow, that was quick :) I'll test it. Actually, I was thinking 
about adding that kind of check to avoid server-side execution but was not sure 
whether it's enough (compared to the case with IN, which is similar to = ANY,  
where we generate a semi-join).

> UPSERT SELECT doesn't take into account comparison operators for subqueries.
> --
>
> Key: PHOENIX-4586
> URL: https://issues.apache.org/jira/browse/PHOENIX-4586
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Assignee: Maryann Xue
>Priority: Critical
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4586.patch
>
>
> If an upsert select has a where condition that uses any comparison operator 
> (including ANY/SOME/etc), the whole WHERE clause is just ignored. Table:
> {noformat}
> create table T (id integer primary key, i1 integer);
> upsert into T values (1,1);
> upsert into T values (2,2);
> {noformat}
> A query that should not upsert anything, because the WHERE clause requires I1 
> to be greater than any value we already have as well as a non-existing 
> ID:
> {noformat}
> 0: jdbc:phoenix:> upsert into T select id, 4 from T where id = 3 AND i1 > 
> (select i1 from T);
> 2 rows affected (0.02 seconds)
> 0: jdbc:phoenix:> select * from T;
> +-+-+
> | ID  | I1  |
> +-+-+
> | 1   | 4   |
> | 2   | 4   |
> +-+-+
> 2 rows selected (0.014 seconds)
> {noformat}
> Now with ANY.  This should not upsert anything either, because the ID values are 
> [1,2] while the I1 values are all '4':
> {noformat}
> 0: jdbc:phoenix:> upsert into T select id, 5 from T where id = 2 AND i1 = ANY 
> (select ID from T);
> 2 rows affected (0.016 seconds)
> 0: jdbc:phoenix:> select * from T;
> +-+-+
> | ID  | I1  |
> +-+-+
> | 1   | 5   |
> | 2   | 5   |
> +-+-+
> 2 rows selected (0.013 seconds)
> {noformat}
> A similar query with IN works just fine:
> {noformat}
> 0: jdbc:phoenix:> upsert into T select id, 6 from T where id = 2 AND i1 IN 
> (select ID from T);
> No rows affected (0.094 seconds)
> 0: jdbc:phoenix:> select * from T;
> +-+-+
> | ID  | I1  |
> +-+-+
> | 1   | 5   |
> | 2   | 5   |
> +-+-+
> 2 rows selected (0.014 seconds)
> {noformat}
> The reason for this behavior is that for IN we convert the subselect to a 
> semi-join and execute the upsert on the client side.  For comparisons, we don't 
> perform any transformations, so the query is considered flat and is finally 
> executed on the server side.  Not sure why, but we also completely ignore the 
> second condition in the WHERE clause, and that may lead to serious data loss. 
> [~jamestaylor], [~maryannxue] any thoughts or suggestions on how to fix that 
> are really appreciated. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4586) UPSERT SELECT doesn't take into account comparison operators for subqueries.

2018-02-06 Thread Sergey Soldatov (JIRA)
Sergey Soldatov created PHOENIX-4586:


 Summary: UPSERT SELECT doesn't take into account comparison 
operators for subqueries.
 Key: PHOENIX-4586
 URL: https://issues.apache.org/jira/browse/PHOENIX-4586
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.0
Reporter: Sergey Soldatov
 Fix For: 4.14.0


If an upsert select has a where condition that uses any comparison operator 
(including ANY/SOME/etc), the whole WHERE clause is just ignored. Table:
{noformat}
create table T (id integer primary key, i1 integer);
upsert into T values (1,1);
upsert into T values (2,2);
{noformat}
A query that should not upsert anything, because the WHERE clause requires I1 to be 
greater than any value we already have as well as a non-existing ID:
{noformat}
0: jdbc:phoenix:> upsert into T select id, 4 from T where id = 3 AND i1 > 
(select i1 from T);
2 rows affected (0.02 seconds)
0: jdbc:phoenix:> select * from T;
+-+-+
| ID  | I1  |
+-+-+
| 1   | 4   |
| 2   | 4   |
+-+-+
2 rows selected (0.014 seconds)
{noformat}
Now with ANY.  This should not upsert anything either, because the ID values are [1,2] 
while the I1 values are all '4':
{noformat}
0: jdbc:phoenix:> upsert into T select id, 5 from T where id = 2 AND i1 = ANY 
(select ID from T);
2 rows affected (0.016 seconds)
0: jdbc:phoenix:> select * from T;
+-+-+
| ID  | I1  |
+-+-+
| 1   | 5   |
| 2   | 5   |
+-+-+
2 rows selected (0.013 seconds)
{noformat}
A similar query with IN works just fine:
{noformat}
0: jdbc:phoenix:> upsert into T select id, 6 from T where id = 2 AND i1 IN 
(select ID from T);
No rows affected (0.094 seconds)
0: jdbc:phoenix:> select * from T;
+-+-+
| ID  | I1  |
+-+-+
| 1   | 5   |
| 2   | 5   |
+-+-+
2 rows selected (0.014 seconds)
{noformat}

The reason for this behavior is that for IN we convert the subselect to a semi-join 
and execute the upsert on the client side.  For comparisons, we don't perform any 
transformations, so the query is considered flat and is finally executed on the server 
side.  Not sure why, but we also completely ignore the second condition in the 
WHERE clause, and that may lead to serious data loss. 
[~jamestaylor], [~maryannxue] any thoughts or suggestions on how to fix that are 
really appreciated. 
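
For contrast, the IN form is the one that behaves correctly precisely because of that rewrite; conceptually, the client-side semi-join is equivalent to an EXISTS correlation (a sketch of the transformation, not Phoenix's actual plan output):
{noformat}
-- the IN subquery ...
upsert into T select id, 6 from T where id = 2 AND i1 IN (select ID from T);
-- ... is conceptually evaluated as a client-side semi-join, roughly:
upsert into T select A.id, 6 from T A
  where A.id = 2 and exists (select 1 from T B where B.ID = A.i1);
{noformat}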



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4571) phoenix-queryserver should directly depend on servlet-api

2018-02-05 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16353093#comment-16353093
 ] 

Sergey Soldatov commented on PHOENIX-4571:
--

[~elserj]  I don't see any problems with it.

> phoenix-queryserver should directly depend on servlet-api
> -
>
> Key: PHOENIX-4571
> URL: https://issues.apache.org/jira/browse/PHOENIX-4571
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 5.0.0, 4.14.0
>
> Attachments: PHOENIX-4571.patch
>
>
> I think we missed explicitly adding the javax.servlet-api dependency on the 
> back of [~Wancy]'s work in PHOENIX-3598.
> We're using this directly in the PQS code, so we should have it as a direct 
> dependency. Noticed this as PQS ITs are failing on the 5.x branch. Should be 
> a simple fix.
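
For reference, the direct declaration would look roughly like this in the phoenix-queryserver pom.xml (a sketch; the exact version and scope would come from the project's dependency management):
{code}
<dependency>
  <groupId>javax.servlet</groupId>
  <artifactId>javax.servlet-api</artifactId>
</dependency>
{code}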



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4583) Hive IT tests may fail during mini cluster initialization

2018-02-05 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov resolved PHOENIX-4583.
--
Resolution: Duplicate

My bad, I missed that it's already fixed.

> Hive IT tests may fail during mini cluster initialization
> -
>
> Key: PHOENIX-4583
> URL: https://issues.apache.org/jira/browse/PHOENIX-4583
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Yesha Vora
>Assignee: Sergey Soldatov
>Priority: Major
>
> If both HiveMapReduceIT and HiveTezIT are running in parallel, they may fail 
> with an exception during DFS init:
> {noformat}
> java.io.FileNotFoundException: No valid image files found
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImageTransactionalStorageInspector.getLatestImages(FSImageTransactionalStorageInspector.java:165)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:618)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:289)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1077)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:724)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:697)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:761)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:1001)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:985)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1710)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1155)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1030)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:754)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:624)
>   at 
> org.apache.hadoop.hive.shims.Hadoop23Shims.getMiniDfs(Hadoop23Shims.java:514)
>   at org.apache.phoenix.hive.HiveTestUtil.(HiveTestUtil.java:303)
>   at org.apache.phoenix.hive.HiveTestUtil.(HiveTestUtil.java:261)
>   at 
> org.apache.phoenix.hive.BaseHivePhoenixStoreIT.setup(BaseHivePhoenixStoreIT.java:85)
>   at 
> org.apache.phoenix.hive.HiveMapReduceIT.setUpBeforeClass(HiveMapReduceIT.java:31)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:367)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:274)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:161)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:290)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:242)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:121)
> {noformat} 
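
The usual cause of this symptom is two miniclusters initializing the same name-node storage directory; a common remedy is to give each test class its own base directory (a sketch assuming Hadoop's MiniDFSCluster API, not the actual fix applied here):
{code}
import java.nio.file.Files;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class IsolatedMiniDfs {
    static MiniDFSCluster start() throws Exception {
        Configuration conf = new Configuration();
        // A per-run storage root keeps parallel test JVMs from inspecting
        // each other's half-written fsimage directories.
        conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR,
                Files.createTempDirectory("minidfs-").toString());
        return new MiniDFSCluster.Builder(conf).build();
    }
}
{code}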



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


  1   2   3   4   5   6   7   8   >