[jira] [Reopened] (PHOENIX-5122) PHOENIX-4322 breaks client backward compatibility

2019-02-14 Thread Jacob Isaac (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reopened PHOENIX-5122:
--
  Assignee: Jacob Isaac

The fix applied in 
[PHOENIX-4322|https://issues.apache.org/jira/browse/PHOENIX-4322], namely the change 
to the evaluate() method in RowValueConstructorExpression, causes the client and 
server to evaluate an expression containing an RVCE differently across versions 
(specifically any client version < 4.14).
For example, an InListExpression query with an IN clause => (?, ?) IN ((a, b), 
(c, d)) will fail, because the client pads an extra trailing separator byte while 
the server trims that trailing separator byte.
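
To make the mismatch concrete, here is a minimal, hypothetical sketch (plain Java, 
not Phoenix internals; the actual separator byte and key encoding depend on the 
column types and sort order):
{code:java}
import java.util.Arrays;

public class SeparatorMismatchSketch {
    public static void main(String[] args) {
        // Key the server builds for ('0001', 'v0001'): the two values joined by a
        // single separator byte, with any trailing separator trimmed (illustrative only).
        byte[] serverKey = {'0', '0', '0', '1', 0x00, 'v', '0', '0', '0', '1'};

        // Key an older (< 4.14) client sends: the same bytes plus an extra trailing
        // separator byte (Arrays.copyOf pads the new slot with 0x00).
        byte[] clientKey = Arrays.copyOf(serverKey, serverKey.length + 1);

        // The byte-wise comparison used for the IN-list lookup no longer matches,
        // so the query returns no rows even though the row exists.
        System.out.println(Arrays.equals(serverKey, clientKey)); // false
    }
}
{code}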

> PHOENIX-4322 breaks client backward compatibility
> -
>
> Key: PHOENIX-5122
> URL: https://issues.apache.org/jira/browse/PHOENIX-5122
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Blocker
>
> Scenario :
> *4.13 client -> 4.14.1 server*
> Connected to: Phoenix (version 4.13)
> Driver: PhoenixEmbeddedDriver (version 4.13)
> Autocommit status: true
> Transaction isolation: TRANSACTION_READ_COMMITTED
> Building list of tables and columns for tab-completion (set fastconnect to 
> true to skip)...
> 135/135 (100%) Done
> Done
> sqlline version 1.1.9
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> CREATE table P_T02 (oid VARCHAR NOT NULL, code 
> VARCHAR NOT NULL constraint pk primary key (oid DESC, code DESC));
> No rows affected (1.31 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> upsert into P_T02 (oid, code) values ('0001', 
> 'v0001');
> 1 row affected (0.033 seconds)
> 0: jdbc:phoenix:localhost> upsert into P_T02 (oid, code) values ('0002', 
> 'v0002');
> 1 row affected (0.004 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> select * from P_T02 where (oid, code) IN 
> (('0001', 'v0001'), ('0002', 'v0002'));
> +--+--+
> | OID | CODE |
> +--+--+
> +--+--+
> +*No rows selected (0.033 seconds)*+
> 0: jdbc:phoenix:localhost> select * from P_T02 ;
> +--+--+
> | OID | CODE |
> +--+--+
> | 0002 | v0002 |
> | 0001 | v0001 |
> +--+--+
> 2 rows selected (0.016 seconds)
> 0: jdbc:phoenix:localhost>
>  
> *4.14.1 client -> 4.14.1 server* 
> Connected to: Phoenix (version 4.14)
> Driver: PhoenixEmbeddedDriver (version 4.14)
> Autocommit status: true
> Transaction isolation: TRANSACTION_READ_COMMITTED
> Building list of tables and columns for tab-completion (set fastconnect to 
> true to skip)...
> 133/133 (100%) Done
> Done
> sqlline version 1.1.9
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> CREATE table P_T01 (oid VARCHAR NOT NULL, code 
> VARCHAR NOT NULL constraint pk primary key (oid DESC, code DESC));
> No rows affected (1.273 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> upsert into P_T01 (oid, code) values ('0001', 
> 'v0001');
> 1 row affected (0.056 seconds)
> 0: jdbc:phoenix:localhost> upsert into P_T01 (oid, code) values ('0002', 
> 'v0002');
> 1 row affected (0.004 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> select * from P_T01 where (oid, code) IN 
> (('0001', 'v0001'), ('0002', 'v0002'));
> +--+--+
> | OID | CODE |
> +--+--+
> | 0002 | v0002 |
> | 0001 | v0001 |
> +--+--+
> 2 rows selected (0.051 seconds)
> 0: jdbc:phoenix:localhost> select * from P_T01 ;
> +--+--+
> | OID | CODE |
> +--+--+
> | 0002 | v0002 |
> | 0001 | v0001 |
> +--+--+
> 2 rows selected (0.017 seconds)
> 0: jdbc:phoenix:localhost>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5140) TableNotFoundException occurs when we create local asynchronous index

2019-02-14 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

张延召 updated PHOENIX-5140:
-
Description: 
First I create the table and insert the data:

 

 create table DMP.DMP_INDEX_TEST2 (id varchar not null primary key,name 
varchar,age varchar);
 upsert into DMP.DMP_INDEX_TEST2 values('id01','name01','age01');

 

The asynchronous index is then created:

 

create local index if not exists TMP_INDEX_DMP_TEST2 on DMP.DMP_INDEX_TEST2 
(name) ASYNC;

 

Because Kerberos is enabled, I need to kinit the HBase principal first, then execute 
the following command:

 

HADOOP_CLASSPATH="/etc/hbase/conf" hadoop jar 
/usr/hdp/3.0.0.0-1634/phoenix/phoenix-client.jar 
org.apache.phoenix.mapreduce.index.IndexTool --schema DMP --data-table 
DMP_INDEX_TEST2 --index-table TMP_INDEX_DMP_TEST2 --output-path /hbase-backup2

 

But I got the following error:

 

Error: java.lang.RuntimeException: 
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
undefined. tableName=DMP.DMP_INDEX_TEST2
 at 
org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:124)
 at 
org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:50)
 at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
 Caused by: org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 
(42M03): Table undefined. tableName=DMP.DMP_INDEX_TEST2
 at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.getTableRegionLocation(ConnectionQueryServicesImpl.java:4544)
 at 
org.apache.phoenix.query.DelegateConnectionQueryServices.getTableRegionLocation(DelegateConnectionQueryServices.java:312)
 at org.apache.phoenix.compile.UpsertCompiler.setValues(UpsertCompiler.java:163)
 at 
org.apache.phoenix.compile.UpsertCompiler.access$500(UpsertCompiler.java:118)
 at 
org.apache.phoenix.compile.UpsertCompiler$UpsertValuesMutationPlan.execute(UpsertCompiler.java:1202)
 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
 at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
 at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
 at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:173)
 at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:183)
 at 
org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:103)
 ... 9 more

 

I can query this table and have access to it; it works well:

 

select * from DMP.DMP_INDEX_TEST2;
 select * from DMP.TMP_INDEX_DMP_TEST2;
 drop table DMP.DMP_INDEX_TEST2;

 

But why does my MR task hit this error? Any suggestions?

  was:
First I create the table and insert the data:

 create table DMP.DMP_INDEX_TEST2 (id varchar not null primary key,name 
varchar,age varchar);
 upsert into DMP.DMP_INDEX_TEST2 values('id01','name01','age01');

The asynchronous index is then created:

create local index if not exists TMP_INDEX_DMP_TEST2 on DMP.DMP_INDEX_TEST2 
(name) ASYNC;

Because kerberos is enabled,So I need kinit HBase principal first,Then execute 
the following command:

HADOOP_CLASSPATH="/etc/hbase/conf" hadoop jar 
/usr/hdp/3.0.0.0-1634/phoenix/phoenix-client.jar 
org.apache.phoenix.mapreduce.index.IndexTool --schema DMP --data-table 
DMP_INDEX_TEST2 --index-table TMP_INDEX_DMP_TEST2 --output-path /hbase-backup2

 But I got the following error:

Error: java.lang.RuntimeException: 
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
undefined. tableName=DMP.DMP_INDEX_TEST2
 at 
org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:124)
 at 
org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:50)
 at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 

[jira] [Updated] (PHOENIX-5140) TableNotFoundException occurs when we create local asynchronous index

2019-02-14 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

张延召 updated PHOENIX-5140:
-
Environment: > HDP : 3.0.0.0, HBase : 2.0.0,phoenix : 5.0.0 and hadoop : 
3.1.0  (was: My HDP version is 3.0.0.0, HBase version is 2.0.0,phoenix version 
is 5.0.0 and hadoop version is 3.1.0)
 Labels: IndexTool localIndex tableUndefined  (was: )
Description: 
First I create the table and insert the data:

 create table DMP.DMP_INDEX_TEST2 (id varchar not null primary key,name 
varchar,age varchar);
 upsert into DMP.DMP_INDEX_TEST2 values('id01','name01','age01');

The asynchronous index is then created:

create local index if not exists TMP_INDEX_DMP_TEST2 on DMP.DMP_INDEX_TEST2 
(name) ASYNC;

Because Kerberos is enabled, I need to kinit the HBase principal first, then execute 
the following command:

HADOOP_CLASSPATH="/etc/hbase/conf" hadoop jar 
/usr/hdp/3.0.0.0-1634/phoenix/phoenix-client.jar 
org.apache.phoenix.mapreduce.index.IndexTool --schema DMP --data-table 
DMP_INDEX_TEST2 --index-table TMP_INDEX_DMP_TEST2 --output-path /hbase-backup2

 But I got the following error:

Error: java.lang.RuntimeException: 
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
undefined. tableName=DMP.DMP_INDEX_TEST2
 at 
org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:124)
 at 
org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:50)
 at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
 Caused by: org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 
(42M03): Table undefined. tableName=DMP.DMP_INDEX_TEST2
 at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.getTableRegionLocation(ConnectionQueryServicesImpl.java:4544)
 at 
org.apache.phoenix.query.DelegateConnectionQueryServices.getTableRegionLocation(DelegateConnectionQueryServices.java:312)
 at org.apache.phoenix.compile.UpsertCompiler.setValues(UpsertCompiler.java:163)
 at 
org.apache.phoenix.compile.UpsertCompiler.access$500(UpsertCompiler.java:118)
 at 
org.apache.phoenix.compile.UpsertCompiler$UpsertValuesMutationPlan.execute(UpsertCompiler.java:1202)
 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
 at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
 at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
 at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:173)
 at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:183)
 at 
org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:103)
 ... 9 more

I can query this table and have access to it; it works well:

select * from DMP.DMP_INDEX_TEST2;
 select * from DMP.TMP_INDEX_DMP_TEST2;
 drop table DMP.DMP_INDEX_TEST2;

But why does my MR task hit this error? Any suggestions?

  was:
First I create the table and insert the data:

 

create table DMP.DMP_INDEX_TEST2 (id varchar not null primary key,name 
varchar,age varchar);
 upsert into DMP.DMP_INDEX_TEST2 values('id01','name01','age01');



 

The asynchronous index is then created:

create local index if not exists TMP_INDEX_DMP_TEST2 on DMP.DMP_INDEX_TEST2 
(name) ASYNC;

Because kerberos is enabled,So I need kinit HBase principal first,Then execute 
the following command;

HADOOP_CLASSPATH="/etc/hbase/conf" hadoop jar 
/usr/hdp/3.0.0.0-1634/phoenix/phoenix-client.jar 
org.apache.phoenix.mapreduce.index.IndexTool --schema DMP --data-table 
DMP_INDEX_TEST2 --index-table TMP_INDEX_DMP_TEST2 --output-path /hbase-backup2

 

But I got the following error:

Error: java.lang.RuntimeException: 
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
undefined. tableName=DMP.DMP_INDEX_TEST2
 at 
org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:124)
 at 
org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:50)
 at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
 at 

[jira] [Updated] (PHOENIX-5140) Index Tool with schema table undefined

2019-02-14 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

张延召 updated PHOENIX-5140:
-
Description: 
First I create the table and insert the data:

 

create table DMP.DMP_INDEX_TEST2 (id varchar not null primary key,name 
varchar,age varchar);
 upsert into DMP.DMP_INDEX_TEST2 values('id01','name01','age01');



 

The asynchronous index is then created:

create local index if not exists TMP_INDEX_DMP_TEST2 on DMP.DMP_INDEX_TEST2 
(name) ASYNC;

Because Kerberos is enabled, I need to kinit the HBase principal first, then execute 
the following command:

HADOOP_CLASSPATH="/etc/hbase/conf" hadoop jar 
/usr/hdp/3.0.0.0-1634/phoenix/phoenix-client.jar 
org.apache.phoenix.mapreduce.index.IndexTool --schema DMP --data-table 
DMP_INDEX_TEST2 --index-table TMP_INDEX_DMP_TEST2 --output-path /hbase-backup2

 

But I got the following error:

Error: java.lang.RuntimeException: 
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
undefined. tableName=DMP.DMP_INDEX_TEST2
 at 
org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:124)
 at 
org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:50)
 at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
 Caused by: org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 
(42M03): Table undefined. tableName=DMP.DMP_INDEX_TEST2
 at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.getTableRegionLocation(ConnectionQueryServicesImpl.java:4544)
 at 
org.apache.phoenix.query.DelegateConnectionQueryServices.getTableRegionLocation(DelegateConnectionQueryServices.java:312)
 at org.apache.phoenix.compile.UpsertCompiler.setValues(UpsertCompiler.java:163)
 at 
org.apache.phoenix.compile.UpsertCompiler.access$500(UpsertCompiler.java:118)
 at 
org.apache.phoenix.compile.UpsertCompiler$UpsertValuesMutationPlan.execute(UpsertCompiler.java:1202)
 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
 at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
 at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
 at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:173)
 at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:183)
 at 
org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:103)
 ... 9 more

I can query this table and have access to it; it works well:

select * from DMP.DMP_INDEX_TEST2;
 select * from DMP.TMP_INDEX_DMP_TEST2;
 drop table DMP.DMP_INDEX_TEST2;

 

But why does my MR task hit this error? Any suggestions?

  was:
First I create the table and insert the data:

create table DMP.DMP_INDEX_TEST2 (id varchar not null primary key,name 
varchar,age varchar);
 upsert into DMP.DMP_INDEX_TEST2 values('id01','name01','age01');

The asynchronous index is then created:

create local index if not exists TMP_INDEX_DMP_TEST2 on DMP.DMP_INDEX_TEST2 
(name) ASYNC;

Because kerberos is enabled,So I need kinit HBase principal first,Then execute 
the following command;

HADOOP_CLASSPATH="/etc/hbase/conf" hadoop jar 
/usr/hdp/3.0.0.0-1634/phoenix/phoenix-client.jar 
org.apache.phoenix.mapreduce.index.IndexTool --schema DMP --data-table 
DMP_INDEX_TEST2 --index-table TMP_INDEX_DMP_TEST2 --output-path /hbase-backup2

 

But I got the following error:

Error: java.lang.RuntimeException: 
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
undefined. tableName=DMP.DMP_INDEX_TEST2
 at 
org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:124)
 at 
org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:50)
 at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)
 at 

[jira] [Updated] (PHOENIX-5124) Add config to enable PropertyPolicyProvider

2019-02-14 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5124:

Attachment: PHOENIX-5124-4.x-HBase-1.3-v3.patch

> Add config to enable PropertyPolicyProvider 
> 
>
> Key: PHOENIX-5124
> URL: https://issues.apache.org/jira/browse/PHOENIX-5124
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5124-4.x-HBase-1.3-v2.patch, 
> PHOENIX-5124-4.x-HBase-1.3-v3.patch, PHOENIX-5124-4.x-HBase-1.3.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (PHOENIX-5124) Add config to enable PropertyPolicyProvider

2019-02-14 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva reopened PHOENIX-5124:
-

> Add config to enable PropertyPolicyProvider 
> 
>
> Key: PHOENIX-5124
> URL: https://issues.apache.org/jira/browse/PHOENIX-5124
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5124-4.x-HBase-1.3-v2.patch, 
> PHOENIX-5124-4.x-HBase-1.3.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5124) Add config to enable PropertyPolicyProvider

2019-02-14 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5124:

Issue Type: New Feature  (was: Bug)
   Summary: Add config to enable PropertyPolicyProvider   (was: 
PropertyPolicyProvider should not evaluate default hbase config properties)

> Add config to enable PropertyPolicyProvider 
> 
>
> Key: PHOENIX-5124
> URL: https://issues.apache.org/jira/browse/PHOENIX-5124
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5124-4.x-HBase-1.3-v2.patch, 
> PHOENIX-5124-4.x-HBase-1.3.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-3710) Cannot use lowername data table name with indextool

2019-02-14 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-3710:

Attachment: PHOENIX-3710.002.patch

> Cannot use lowername data table name with indextool
> ---
>
> Key: PHOENIX-3710
> URL: https://issues.apache.org/jira/browse/PHOENIX-3710
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Matthew Shipton
>Assignee: Josh Elser
>Priority: Minor
> Attachments: PHOENIX-3710.002.patch, PHOENIX-3710.patch, test.sh, 
> test.sql
>
>
> {code}
> hbase org.apache.phoenix.mapreduce.index.IndexTool --data-table 
> \"my_lowcase_table\" --index-table INDEX_TABLE --output-path /tmp/some_path
> {code}
> results in:
> {code}
> java.lang.IllegalArgumentException:  INDEX_TABLE is not an index table for 
> MY_LOWCASE_TABLE
> {code}
> This is despite the data table name being explicitly lowercased.
> The tool appears to be referring to the lowercase table, not the uppercase version.
> A workaround exists by changing the table name, but this is not always feasible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5137) Index Rebuilder scan increases data table region split time

2019-02-14 Thread Kiran Kumar Maturi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Maturi updated PHOENIX-5137:

Attachment: PHOENIX-5137-4.14-Hbase-1.3.01.patch

> Index Rebuilder scan increases data table region split time
> ---
>
> Key: PHOENIX-5137
> URL: https://issues.apache.org/jira/browse/PHOENIX-5137
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Attachments: PHOENIX-5137-4.14-Hbase-1.3.01.patch
>
>
> [~lhofhansl] [~vincentpoon] [~tdsilva] please review
> In order to differentiate between the index rebuilder retries 
> (UngroupedAggregateRegionObserver.rebuildIndices()) and the commits that happen 
> in the loop of UngroupedAggregateRegionObserver.doPostScannerOpen(), as part of 
> PHOENIX-4600 blockingMemstoreSize was set to -1 for rebuildIndices:
> {code:java}
> commitBatchWithRetries(region, mutations, -1);{code}
> This blocks the region split, because the check for region closing only runs when 
> blockingMemstoreSize > 0:
> {code:java}
> for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > 
> blockingMemstoreSize && i < 30; i++) {
>   try{
>checkForRegionClosing();
>
> {code}
> The plan is to run the check for region closing at least once before committing 
> the batch:
> {code:java}
> int i = 0;
> do {
>     try {
>         if (i > 0) {
>             Thread.sleep(100);
>         }
>         checkForRegionClosing();
>     } catch (InterruptedException e) {
>         Thread.currentThread().interrupt();
>         throw new IOException(e);
>     }
> } while (blockingMemstoreSize > 0 && region.getMemstoreSize() > blockingMemstoreSize
>         && i++ < 30);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)