[jira] [Commented] (PHOENIX-2715) Query Log

2018-05-24 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490257#comment-16490257
 ] 

Ankit Singhal commented on PHOENIX-2715:


bq. Next RC coming tomorrow am.
Pushed the change now. (y)

> Query Log
> -
>
> Key: PHOENIX-2715
> URL: https://issues.apache.org/jira/browse/PHOENIX-2715
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Nick Dimiduk
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-2715.patch, PHOENIX-2715_master.patch, 
> PHOENIX-2715_master_V1.patch, PHOENIX-2715_master_V2.patch
>
>
> One useful feature of other database systems is the query log. It allows the 
> DBA to review the queries run, who ran them, the time taken, and so on. This 
> serves both as an audit and as a source of "ground truth" for performance 
> optimization: for instance, which columns should be indexed. It may also 
> serve as the foundation for automated performance recommendations/actions.
> What queries are being run is the first piece. Having this data tied into 
> tracing results and perhaps client-side metrics (PHOENIX-1819) becomes very 
> useful.
> This might take the form of clients writing data to a new system table, but 
> other implementation suggestions are welcome.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-2715) Query Log

2018-05-24 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490252#comment-16490252
 ] 

James Taylor commented on PHOENIX-2715:
---

Yes, please [~ankit.singhal]. Next RC coming tomorrow am.

> Query Log
> -
>
> Key: PHOENIX-2715
> URL: https://issues.apache.org/jira/browse/PHOENIX-2715
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Nick Dimiduk
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-2715.patch, PHOENIX-2715_master.patch, 
> PHOENIX-2715_master_V1.patch, PHOENIX-2715_master_V2.patch
>
>
> One useful feature of other database systems is the query log. It allows the 
> DBA to review the queries run, who ran them, the time taken, and so on. This 
> serves both as an audit and as a source of "ground truth" for performance 
> optimization: for instance, which columns should be indexed. It may also 
> serve as the foundation for automated performance recommendations/actions.
> What queries are being run is the first piece. Having this data tied into 
> tracing results and perhaps client-side metrics (PHOENIX-1819) becomes very 
> useful.
> This might take the form of clients writing data to a new system table, but 
> other implementation suggestions are welcome.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (PHOENIX-2715) Query Log

2018-05-24 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490233#comment-16490233
 ] 

Ankit Singhal edited comment on PHOENIX-2715 at 5/25/18 5:01 AM:
-

bq. Looking at the code I see the default is OFF, so something is not working 
right, unless I misunderstood something.
By mistake, I committed bin/hbase-site.xml with logging.level set to DEBUG. 
Let me remove that.
{code}
https://github.com/apache/phoenix/blob/master/bin/hbase-site.xml
{code} 

bq. Everything but OFF is a bad default, IMHO.
Any suggestions? It will be an easy change; I can do it and commit it along 
with the addendum for hbase-site.xml.


was (Author: an...@apache.org):
bq. Looking at the code I see the default is OFF, so something is not working 
right, unless I misunderstood something.
By mistake, I committed bin/hbase-site.xml with logging.level set to DEBUG. 
Let me remove that.
{code}
https://github.com/apache/phoenix/blob/master/bin/hbase-site.xml
{code} 

bq. Everything but OFF is a bad default, IMHO.
Any suggestions? It will be an easy change before I commit the addendum for 
hbase-site.xml.

> Query Log
> -
>
> Key: PHOENIX-2715
> URL: https://issues.apache.org/jira/browse/PHOENIX-2715
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Nick Dimiduk
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-2715.patch, PHOENIX-2715_master.patch, 
> PHOENIX-2715_master_V1.patch, PHOENIX-2715_master_V2.patch
>
>
> One useful feature of other database systems is the query log. It allows the 
> DBA to review the queries run, who ran them, the time taken, and so on. This 
> serves both as an audit and as a source of "ground truth" for performance 
> optimization: for instance, which columns should be indexed. It may also 
> serve as the foundation for automated performance recommendations/actions.
> What queries are being run is the first piece. Having this data tied into 
> tracing results and perhaps client-side metrics (PHOENIX-1819) becomes very 
> useful.
> This might take the form of clients writing data to a new system table, but 
> other implementation suggestions are welcome.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-2715) Query Log

2018-05-24 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490233#comment-16490233
 ] 

Ankit Singhal commented on PHOENIX-2715:


bq. Looking at the code I see the default is OFF, so something is not working 
right, unless I misunderstood something.
By mistake, I committed bin/hbase-site.xml with logging.level set to DEBUG. 
Let me remove that.
{code}
https://github.com/apache/phoenix/blob/master/bin/hbase-site.xml
{code} 

bq. Everything but OFF is a bad default, IMHO.
Any suggestions? It will be an easy change before I commit the addendum for 
hbase-site.xml.
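
For clients that want to be sure query logging stays off regardless of what ships 
in bin/hbase-site.xml, a minimal sketch along these lines should work, assuming 
the level is governed by the phoenix.log.level connection property (the class 
name and JDBC URL below are placeholders):
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class QueryLogOff {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Explicitly disable the query log for this connection, overriding
        // whatever default the client picks up from hbase-site.xml.
        props.setProperty("phoenix.log.level", "OFF");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:phoenix:localhost:2181", props)) {
            System.out.println("Connected with query logging disabled");
        }
    }
}
{code}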

> Query Log
> -
>
> Key: PHOENIX-2715
> URL: https://issues.apache.org/jira/browse/PHOENIX-2715
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Nick Dimiduk
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-2715.patch, PHOENIX-2715_master.patch, 
> PHOENIX-2715_master_V1.patch, PHOENIX-2715_master_V2.patch
>
>
> One useful feature of other database systems is the query log. It allows the 
> DBA to review the queries run, who ran them, the time taken, and so on. This 
> serves both as an audit and as a source of "ground truth" for performance 
> optimization: for instance, which columns should be indexed. It may also 
> serve as the foundation for automated performance recommendations/actions.
> What queries are being run is the first piece. Having this data tied into 
> tracing results and perhaps client-side metrics (PHOENIX-1819) becomes very 
> useful.
> This might take the form of clients writing data to a new system table, but 
> other implementation suggestions are welcome.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4745) Update Tephra version to 0.14.0-incubating

2018-05-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490191#comment-16490191
 ] 

Hudson commented on PHOENIX-4745:
-

SUCCESS: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1904 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1904/])
PHOENIX-4745 Update Tephra version to 0.14.0-incubating (jtaylor: rev 
b0b5456ff449c16fe750cc90248798dbf47e647d)
* (edit) pom.xml
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/QueryServicesOptions.java


> Update Tephra version to 0.14.0-incubating
> --
>
> Key: PHOENIX-4745
> URL: https://issues.apache.org/jira/browse/PHOENIX-4745
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4745.patch
>
>
> Update to Tephra 0.14.0-incubating, mainly for HBase 1.4 and HBase 2.0 compat 
> modules.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (PHOENIX-2715) Query Log

2018-05-24 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490153#comment-16490153
 ] 

Lars Hofhansl edited comment on PHOENIX-2715 at 5/25/18 2:57 AM:
-

The default is supposed to be OFF, right?

I just tried with a vanilla build from Phoenix' master branch and I see 
SYSTEM.LOG is being written to.

Looking at the code I see the default *is* OFF, so something is not working 
right, unless I misunderstood something.

Everything but OFF is a bad default, IMHO.

[~an...@apache.org]


was (Author: lhofhansl):
The default is supposed to be OFF, right?

I just tried with a vanilla build from Phoenix' master branch and I see 
SYSTEM.LOG is being written to.

Looking at the code I see the default *is* OFF, so something is not working 
right, unless I misunderstood something.

Everything but OFF is a bad default, IMHO.

[~an...@apache.org]

> Query Log
> -
>
> Key: PHOENIX-2715
> URL: https://issues.apache.org/jira/browse/PHOENIX-2715
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Nick Dimiduk
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-2715.patch, PHOENIX-2715_master.patch, 
> PHOENIX-2715_master_V1.patch, PHOENIX-2715_master_V2.patch
>
>
> One useful feature of other database systems is the query log. It allows the 
> DBA to review the queries run, who ran them, the time taken, and so on. This 
> serves both as an audit and as a source of "ground truth" for performance 
> optimization: for instance, which columns should be indexed. It may also 
> serve as the foundation for automated performance recommendations/actions.
> What queries are being run is the first piece. Having this data tied into 
> tracing results and perhaps client-side metrics (PHOENIX-1819) becomes very 
> useful.
> This might take the form of clients writing data to a new system table, but 
> other implementation suggestions are welcome.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4745) Update Tephra version to 0.14.0-incubating

2018-05-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490150#comment-16490150
 ] 

Hudson commented on PHOENIX-4745:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #142 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/142/])
PHOENIX-4745 Update Tephra version to 0.14.0-incubating (jtaylor: rev 
f3e49f38e91a00d94e7142739ecfea7fa38dd841)
* (edit) pom.xml
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/QueryServicesOptions.java


> Update Tephra version to 0.14.0-incubating
> --
>
> Key: PHOENIX-4745
> URL: https://issues.apache.org/jira/browse/PHOENIX-4745
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4745.patch
>
>
> Update to Tephra 0.14.0-incubating, mainly for HBase 1.4 and HBase 2.0 compat 
> modules.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-2715) Query Log

2018-05-24 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490153#comment-16490153
 ] 

Lars Hofhansl commented on PHOENIX-2715:


The default is supposed to be OFF, right?

I just tried with a vanilla build from Phoenix' master branch and I see 
SYSTEM.LOG is being written to.

Looking at the code I see the default *is* OFF, so something is not working 
right, unless I misunderstood something.

Everything but OFF is a bad default, IMHO.

[~an...@apache.org]
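
A quick way to verify whether the query log is actually active is to count rows 
in the new system table; a rough sketch, assuming the table is queryable as 
SYSTEM."LOG" and using a placeholder JDBC URL:
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CheckQueryLog {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                     DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement();
             // Any rows here mean queries are being logged despite the expected OFF default.
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM SYSTEM.\"LOG\"")) {
            if (rs.next()) {
                System.out.println("SYSTEM.LOG row count: " + rs.getLong(1));
            }
        }
    }
}
{code}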

> Query Log
> -
>
> Key: PHOENIX-2715
> URL: https://issues.apache.org/jira/browse/PHOENIX-2715
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Nick Dimiduk
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-2715.patch, PHOENIX-2715_master.patch, 
> PHOENIX-2715_master_V1.patch, PHOENIX-2715_master_V2.patch
>
>
> One useful feature of other database systems is the query log. It allows the 
> DBA to review the queries run, who ran them, the time taken, and so on. This 
> serves both as an audit and as a source of "ground truth" for performance 
> optimization: for instance, which columns should be indexed. It may also 
> serve as the foundation for automated performance recommendations/actions.
> What queries are being run is the first piece. Having this data tied into 
> tracing results and perhaps client-side metrics (PHOENIX-1819) becomes very 
> useful.
> This might take the form of clients writing data to a new system table, but 
> other implementation suggestions are welcome.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4580) Upgrade to Tephra 0.14.0-incubating for HBase 2.0 support

2018-05-24 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490071#comment-16490071
 ] 

James Taylor commented on PHOENIX-4580:
---

FYI, I've committed PHOENIX-4745, which moves the 4.x branches to the Tephra 
0.14.0 release. Feel free to commit this as well, [~an...@apache.org].

> Upgrade to Tephra  0.14.0-incubating  for HBase 2.0 support
> ---
>
> Key: PHOENIX-4580
> URL: https://issues.apache.org/jira/browse/PHOENIX-4580
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Ankit Singhal
>Priority: Blocker
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4580.patch
>
>
> TEPHRA-272 has the necessary changes that Phoenix needs but we need to get a 
> release from the Tephra folks first.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4745) Update Tephra version to 0.14.0-incubating

2018-05-24 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4745:
--
Fix Version/s: (was: 5.0.0)

> Update Tephra version to 0.14.0-incubating
> --
>
> Key: PHOENIX-4745
> URL: https://issues.apache.org/jira/browse/PHOENIX-4745
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4745.patch
>
>
> Update to Tephra 0.14.0-incubating, mainly for HBase 1.4 and HBase 2.0 compat 
> modules.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4745) Update Tephra version to 0.14.0-incubating

2018-05-24 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4745.
---
Resolution: Fixed

> Update Tephra version to 0.14.0-incubating
> --
>
> Key: PHOENIX-4745
> URL: https://issues.apache.org/jira/browse/PHOENIX-4745
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4745.patch
>
>
> Update to Tephra 0.14.0-incubating, mainly for HBase 1.4 and HBase 2.0 compat 
> modules.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4741) Shade disruptor dependency

2018-05-24 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4741:
---
Attachment: (was: PHOENIX-4741.patch)

> Shade disruptor dependency 
> ---
>
> Key: PHOENIX-4741
> URL: https://issues.apache.org/jira/browse/PHOENIX-4741
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Jungtaek Lim
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
>
> We should shade disruptor dependency to avoid conflict with the versions used 
> by the other framework like storm , hive etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-1567) Publish Phoenix-Client & Phoenix-Server jars into Maven Repo

2018-05-24 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490021#comment-16490021
 ] 

Ankit Singhal commented on PHOENIX-1567:


Thanks [~elserj], committing the attached patch so that the phoenix-client and 
phoenix-server jars will start going into their respective Maven directories 
properly.

> Publish Phoenix-Client & Phoenix-Server jars into Maven Repo
> 
>
> Key: PHOENIX-1567
> URL: https://issues.apache.org/jira/browse/PHOENIX-1567
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Jeffrey Zhong
>Priority: Major
> Attachments: PHOENIX-1567.patch
>
>
> Phoenix doesn't publish Phoenix Client & Server jars into Maven repository. 
> This makes things quite hard for downstream projects/applications to use Maven 
> to resolve dependencies.
> I tried to modify the pom.xml under phoenix-assembly, but it shows the 
> following. 
> {noformat}
> [INFO] Installing 
> /Users/jzhong/work/phoenix_apache/checkins/phoenix/phoenix-assembly/target/phoenix-4.3.0-SNAPSHOT-client.jar
>  
> to 
> /Users/jzhong/.m2/repository/org/apache/phoenix/phoenix-assembly/4.3.0-SNAPSHOT/phoenix-assembly-4.3.0-SNAPSHOT-client.jar
> {noformat}
> Basically the jar published to maven repo will become  
> phoenix-assembly-4.3.0-SNAPSHOT-client.jar or 
> phoenix-assembly-4.3.0-SNAPSHOT-server.jar
> The artifact id "phoenix-assembly" has to be the prefix of the names of jars.
> Therefore, the possible solutions are:
> 1) rename the current client & server jars to phoenix-assembly-client/server.jar 
> to match the jars published to the Maven repo.
> 2) rename phoenix-assembly to something more meaningful and rename our client 
> & server jars accordingly
> 3) split phoenix-assembly and move the corresponding artifacts into 
> phoenix-client & phoenix-server folders. Phoenix-assembly will only create 
> tar ball files.
> [~giacomotaylor], [~apurtell] or other maven experts: Any suggestion on this? 
> Thanks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-1567) Publish Phoenix-Client & Phoenix-Server jars into Maven Repo

2018-05-24 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-1567:
---
Attachment: PHOENIX-1567.patch

> Publish Phoenix-Client & Phoenix-Server jars into Maven Repo
> 
>
> Key: PHOENIX-1567
> URL: https://issues.apache.org/jira/browse/PHOENIX-1567
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Jeffrey Zhong
>Priority: Major
> Attachments: PHOENIX-1567.patch
>
>
> Phoenix doesn't publish Phoenix Client & Server jars into Maven repository. 
> This makes things quite hard for downstream projects/applications to use Maven 
> to resolve dependencies.
> I tried to modify the pom.xml under phoenix-assembly, but it shows the 
> following. 
> {noformat}
> [INFO] Installing 
> /Users/jzhong/work/phoenix_apache/checkins/phoenix/phoenix-assembly/target/phoenix-4.3.0-SNAPSHOT-client.jar
>  
> to 
> /Users/jzhong/.m2/repository/org/apache/phoenix/phoenix-assembly/4.3.0-SNAPSHOT/phoenix-assembly-4.3.0-SNAPSHOT-client.jar
> {noformat}
> Basically the jar published to maven repo will become  
> phoenix-assembly-4.3.0-SNAPSHOT-client.jar or 
> phoenix-assembly-4.3.0-SNAPSHOT-server.jar
> The artifact id "phoenix-assembly" has to be the prefix of the names of jars.
> Therefore, the possible solutions are:
> 1) rename the current client & server jars to phoenix-assembly-client/server.jar 
> to match the jars published to the Maven repo.
> 2) rename phoenix-assembly to something more meaningful and rename our client 
> & server jars accordingly
> 3) split phoenix-assembly and move the corresponding artifacts into 
> phoenix-client & phoenix-server folders. Phoenix-assembly will only create 
> tar ball files.
> [~giacomotaylor], [~apurtell] or other maven experts: Any suggestion on this? 
> Thanks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4681) Test existence of SYSTEM:CATALOG before attempting to create it

2018-05-24 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490006#comment-16490006
 ] 

Ankit Singhal commented on PHOENIX-4681:


[~slamendola2_bloomberg], I have added you as a contributor and assigned this 
JIRA. 

> Test existence of SYSTEM:CATALOG before attempting to create it
> ---
>
> Key: PHOENIX-4681
> URL: https://issues.apache.org/jira/browse/PHOENIX-4681
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Salvatore LaMendola
>Assignee: Salvatore LaMendola
>Priority: Minor
>
> Getting WARN stacktraces similar to the one below when starting SQLLine after 
> enabling Phoenix Namespace support. After speaking with [~elserj], it became 
> apparent this is a bug that may have already been fixed. However, [~smayani] 
> and I were unable to find an existing JIRA for this issue. I propose 
> performing a test for the existence of {{SYSTEM:CATALOG}} before attempting 
> to create it, so that this stacktrace isn't printed at each startup (until 
> someone finally caves and applies {{CREATE}} permissions on the entire 
> {{@SYSTEM}} namespace). If I can find the time, I'd like to attempt to create 
> a patch, but I'd like to get community input first on the desired means to 
> fix this.
> {code:java}
> 18/03/29 19:29:21 WARN ipc.CoprocessorRpcChannel: Call failed on IOException
> org.apache.hadoop.hbase.security.AccessDeniedException: 
> org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions (user=use...@example.com, scope=SYSTEM, 
> params=[namespace=SYSTEM,table=SYSTEM:CATALOG],action=CREATE)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.requireNamespacePermission(AccessController.java:628)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.preCreateTable(AccessController.java:996)
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.preCreateTable(PhoenixAccessController.java:152)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$2.call(PhoenixMetaDataCoprocessorHost.java:167)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.execOperation(PhoenixMetaDataCoprocessorHost.java:80)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.preCreateTable(PhoenixMetaDataCoprocessorHost.java:163)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1375)
>   at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:14332)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7853)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1980)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1962)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32389)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
>   at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:335)
>   at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1625)
>   at 
> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:100)
>   at 
> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:90)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
>   at 
> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:104)
>   at 
> org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callMethod(CoprocessorRpcChannel.java:56)
>   at 
> 

[jira] [Assigned] (PHOENIX-4681) Test existence of SYSTEM:CATALOG before attempting to create it

2018-05-24 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal reassigned PHOENIX-4681:
--

Assignee: Salvatore LaMendola

> Test existence of SYSTEM:CATALOG before attempting to create it
> ---
>
> Key: PHOENIX-4681
> URL: https://issues.apache.org/jira/browse/PHOENIX-4681
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Salvatore LaMendola
>Assignee: Salvatore LaMendola
>Priority: Minor
>
> Getting WARN stacktraces similar to the one below when starting SQLLine after 
> enabling Phoenix Namespace support. After speaking with [~elserj], it became 
> apparent this is a bug that may have already been fixed. However, [~smayani] 
> and I were unable to find an existing JIRA for this issue. I propose 
> performing a test for the existence of {{SYSTEM:CATALOG}} before attempting 
> to create it, so that this stacktrace isn't printed at each startup (until 
> someone finally caves and applies {{CREATE}} permissions on the entire 
> {{@SYSTEM}} namespace). If I can find the time, I'd like to attempt to create 
> a patch, but I'd like to get community input first on the desired means to 
> fix this.
> {code:java}
> 18/03/29 19:29:21 WARN ipc.CoprocessorRpcChannel: Call failed on IOException
> org.apache.hadoop.hbase.security.AccessDeniedException: 
> org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions (user=use...@example.com, scope=SYSTEM, 
> params=[namespace=SYSTEM,table=SYSTEM:CATALOG],action=CREATE)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.requireNamespacePermission(AccessController.java:628)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.preCreateTable(AccessController.java:996)
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.preCreateTable(PhoenixAccessController.java:152)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$2.call(PhoenixMetaDataCoprocessorHost.java:167)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.execOperation(PhoenixMetaDataCoprocessorHost.java:80)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.preCreateTable(PhoenixMetaDataCoprocessorHost.java:163)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1375)
>   at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:14332)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7853)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1980)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1962)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32389)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
>   at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:335)
>   at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1625)
>   at 
> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:100)
>   at 
> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:90)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
>   at 
> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:104)
>   at 
> org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callMethod(CoprocessorRpcChannel.java:56)
>   at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService$Stub.createTable(MetaDataProtos.java:14549)
>   at 
> 

[jira] [Commented] (PHOENIX-4681) Test existence of SYSTEM:CATALOG before attempting to create it

2018-05-24 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490004#comment-16490004
 ] 

Ankit Singhal commented on PHOENIX-4681:


[~slamendola2_bloomberg], oh, I see: you just see the WARN message, but the 
connection is getting established properly. 
Yeah, to avoid unnecessary noise, I think it should be OK to check whether 
SYSTEM.CATALOG or SYSTEM:CATALOG exists before creating it, if the upgrade is not 
enabled (isDoNotUpgradePropSet) and the user has RX permissions on the table.
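
As a rough illustration of the existence check only (not the actual patch; the 
class name is a placeholder, and the permission and isDoNotUpgradePropSet 
conditions are left out), the client could ask HBase whether either physical name 
is already present before issuing the create:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class SystemCatalogCheck {
    // True if the physical SYSTEM catalog table already exists, checking both the
    // namespace-mapped (SYSTEM:CATALOG) and non-mapped (SYSTEM.CATALOG) names.
    public static boolean systemCatalogExists(Configuration conf) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            return admin.tableExists(TableName.valueOf("SYSTEM:CATALOG"))
                    || admin.tableExists(TableName.valueOf("SYSTEM.CATALOG"));
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("SYSTEM catalog present: "
                + systemCatalogExists(HBaseConfiguration.create()));
    }
}
{code}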

> Test existence of SYSTEM:CATALOG before attempting to create it
> ---
>
> Key: PHOENIX-4681
> URL: https://issues.apache.org/jira/browse/PHOENIX-4681
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Salvatore LaMendola
>Priority: Minor
>
> Getting WARN stacktraces similar to the one below when starting SQLLine after 
> enabling Phoenix Namespace support. After speaking with [~elserj], it became 
> apparent this is a bug that may have already been fixed. However, [~smayani] 
> and I were unable to find an existing JIRA for this issue. I propose 
> performing a test for the existence of {{SYSTEM:CATALOG}} before attempting 
> to create it, so that this stacktrace isn't printed at each startup (until 
> someone finally caves and applies {{CREATE}} permissions on the entire 
> {{@SYSTEM}} namespace). If I can find the time, I'd like to attempt to create 
> a patch, but I'd like to get community input first on the desired means to 
> fix this.
> {code:java}
> 18/03/29 19:29:21 WARN ipc.CoprocessorRpcChannel: Call failed on IOException
> org.apache.hadoop.hbase.security.AccessDeniedException: 
> org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions (user=use...@example.com, scope=SYSTEM, 
> params=[namespace=SYSTEM,table=SYSTEM:CATALOG],action=CREATE)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.requireNamespacePermission(AccessController.java:628)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.preCreateTable(AccessController.java:996)
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.preCreateTable(PhoenixAccessController.java:152)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$2.call(PhoenixMetaDataCoprocessorHost.java:167)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.execOperation(PhoenixMetaDataCoprocessorHost.java:80)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.preCreateTable(PhoenixMetaDataCoprocessorHost.java:163)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1375)
>   at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:14332)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7853)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1980)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1962)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32389)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
>   at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:335)
>   at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1625)
>   at 
> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:100)
>   at 
> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:90)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
>   at 
> 

[jira] [Commented] (PHOENIX-1567) Publish Phoenix-Client & Phoenix-Server jars into Maven Repo

2018-05-24 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489920#comment-16489920
 ] 

Josh Elser commented on PHOENIX-1567:
-

{quote}should we start publishing phoenix-client artifact in maven from next 
release
{quote}
+1

> Publish Phoenix-Client & Phoenix-Server jars into Maven Repo
> 
>
> Key: PHOENIX-1567
> URL: https://issues.apache.org/jira/browse/PHOENIX-1567
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Jeffrey Zhong
>Priority: Major
>
> Phoenix doesn't publish Phoenix Client & Server jars into Maven repository. 
> This makes things quite hard for downstream projects/applications to use Maven 
> to resolve dependencies.
> I tried to modify the pom.xml under phoenix-assembly, but it shows the 
> following. 
> {noformat}
> [INFO] Installing 
> /Users/jzhong/work/phoenix_apache/checkins/phoenix/phoenix-assembly/target/phoenix-4.3.0-SNAPSHOT-client.jar
>  
> to 
> /Users/jzhong/.m2/repository/org/apache/phoenix/phoenix-assembly/4.3.0-SNAPSHOT/phoenix-assembly-4.3.0-SNAPSHOT-client.jar
> {noformat}
> Basically the jar published to maven repo will become  
> phoenix-assembly-4.3.0-SNAPSHOT-client.jar or 
> phoenix-assembly-4.3.0-SNAPSHOT-server.jar
> The artifact id "phoenix-assembly" has to be the prefix of the names of jars.
> Therefore, the possible solutions are:
> 1) rename the current client & server jars to phoenix-assembly-client/server.jar 
> to match the jars published to the Maven repo.
> 2) rename phoenix-assembly to something more meaningful and rename our client 
> & server jars accordingly
> 3) split phoenix-assembly and move the corresponding artifacts into 
> phoenix-client & phoenix-server folders. Phoenix-assembly will only create 
> tar ball files.
> [~giacomotaylor], [~apurtell] or other maven experts: Any suggestion on this? 
> Thanks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-1567) Publish Phoenix-Client & Phoenix-Server jars into Maven Repo

2018-05-24 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489876#comment-16489876
 ] 

Ankit Singhal commented on PHOENIX-1567:


bq. we should revisit and start publishing our shaded artifacts as well. Let me 
know if it's fine
Guys, if we don't have any objection, should we start publishing the 
phoenix-client artifact in Maven from the next release? (Not sure if we need to 
have phoenix-server in Maven.)

> Publish Phoenix-Client & Phoenix-Server jars into Maven Repo
> 
>
> Key: PHOENIX-1567
> URL: https://issues.apache.org/jira/browse/PHOENIX-1567
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Jeffrey Zhong
>Priority: Major
>
> Phoenix doesn't publish Phoenix Client & Server jars into Maven repository. 
> This makes things quite hard for downstream projects/applications to use Maven 
> to resolve dependencies.
> I tried to modify the pom.xml under phoenix-assembly, but it shows the 
> following. 
> {noformat}
> [INFO] Installing 
> /Users/jzhong/work/phoenix_apache/checkins/phoenix/phoenix-assembly/target/phoenix-4.3.0-SNAPSHOT-client.jar
>  
> to 
> /Users/jzhong/.m2/repository/org/apache/phoenix/phoenix-assembly/4.3.0-SNAPSHOT/phoenix-assembly-4.3.0-SNAPSHOT-client.jar
> {noformat}
> Basically the jar published to maven repo will become  
> phoenix-assembly-4.3.0-SNAPSHOT-client.jar or 
> phoenix-assembly-4.3.0-SNAPSHOT-server.jar
> The artifact id "phoenix-assembly" has to be the prefix of the names of jars.
> Therefore, the possible solutions are:
> 1) rename the current client & server jars to phoenix-assembly-client/server.jar 
> to match the jars published to the Maven repo.
> 2) rename phoenix-assembly to something more meaningful and rename our client 
> & server jars accordingly
> 3) split phoenix-assembly and move the corresponding artifacts into 
> phoenix-client & phoenix-server folders. Phoenix-assembly will only create 
> tar ball files.
> [~giacomotaylor], [~apurtell] or other maven experts: Any suggestion on this? 
> Thanks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (PHOENIX-4681) Test existence of SYSTEM:CATALOG before attempting to create it

2018-05-24 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489867#comment-16489867
 ] 

Ankit Singhal edited comment on PHOENIX-4681 at 5/24/18 9:52 PM:
-

Is this code in ConnectionQueryServicesImpl not handling AccessDeniedException 
already? 

{code}
catch (PhoenixIOException e) {
    boolean foundAccessDeniedException = false;
    // when running spark/map reduce jobs the ADE might be wrapped
    // in a RemoteException
    for (Throwable t : Throwables.getCausalChain(e)) {
        if (t instanceof AccessDeniedException
                || (t instanceof RemoteException
                        && ((RemoteException) t).getClassName()
                                .equals(AccessDeniedException.class.getName()))) {
            foundAccessDeniedException = true;
            break;
        }
    }
    if (foundAccessDeniedException) {
        // Pass
        logger.warn("Could not check for Phoenix SYSTEM tables, assuming they exist and are properly configured");
        checkClientServerCompatibility(SchemaUtil.getPhysicalName(SYSTEM_CATALOG_NAME_BYTES, getProps()).getName());
        success = true;
    } else if (!Iterables.isEmpty(Iterables.filter(Throwables.getCausalChain(e), NamespaceNotFoundException.class))) {
        // This exception is only possible if SYSTEM namespace mapping is enabled and SYSTEM namespace is missing
        // It implies that SYSTEM tables are not created and hence we shouldn't provide a connection
        AccessDeniedException ade = new AccessDeniedException("Insufficient permissions to create SYSTEM namespace and SYSTEM Tables");
        initializationException = ServerUtil.parseServerException(ade);
    } else {
        initializationException = e;
    }
    return null;
}
{code}

bq. If I can find the time, I'd like to attempt to create a patch, but I'd like 
to get community input first on the desired means to fix this.
Thanks [~slamendola2_bloomberg] for showing interest. I think a test case 
reproducing the issue would be a good start (it can be written in 
SystemTablePermissionsIT).


was (Author: an...@apache.org):
Is this code in ConnectionQueryServicesImpl not handling AccessDeniedException 
already? 

{code}
catch (PhoenixIOException e) {
boolean foundAccessDeniedException = false;
// when running spark/map reduce jobs the 
ADE might be wrapped
// in a RemoteException
for (Throwable t : 
Throwables.getCausalChain(e)) {
if (t instanceof AccessDeniedException
|| (t instanceof RemoteException
&& ((RemoteException) 
t).getClassName()

.equals(AccessDeniedException.class

.getName( {
foundAccessDeniedException = true;
break;
}
}
if (foundAccessDeniedException) {
// Pass
logger.warn("Could not check for 
Phoenix SYSTEM tables, assuming they exist and are properly configured");

checkClientServerCompatibility(SchemaUtil.getPhysicalName(SYSTEM_CATALOG_NAME_BYTES,
 getProps()).getName());
success = true;
} else if 
(!Iterables.isEmpty(Iterables.filter(Throwables.getCausalChain(e), 
NamespaceNotFoundException.class))) {
// This exception is only possible if 
SYSTEM 

[jira] [Commented] (PHOENIX-4681) Test existence of SYSTEM:CATALOG before attempting to create it

2018-05-24 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489867#comment-16489867
 ] 

Ankit Singhal commented on PHOENIX-4681:


Is this code in ConnectionQueryServicesImpl not handling AccessDeniedException 
already? 

{code}
catch (PhoenixIOException e) {
    boolean foundAccessDeniedException = false;
    // when running spark/map reduce jobs the ADE might be wrapped
    // in a RemoteException
    for (Throwable t : Throwables.getCausalChain(e)) {
        if (t instanceof AccessDeniedException
                || (t instanceof RemoteException
                        && ((RemoteException) t).getClassName()
                                .equals(AccessDeniedException.class.getName()))) {
            foundAccessDeniedException = true;
            break;
        }
    }
    if (foundAccessDeniedException) {
        // Pass
        logger.warn("Could not check for Phoenix SYSTEM tables, assuming they exist and are properly configured");
        checkClientServerCompatibility(SchemaUtil.getPhysicalName(SYSTEM_CATALOG_NAME_BYTES, getProps()).getName());
        success = true;
    } else if (!Iterables.isEmpty(Iterables.filter(Throwables.getCausalChain(e), NamespaceNotFoundException.class))) {
        // This exception is only possible if SYSTEM namespace mapping is enabled and SYSTEM namespace is missing
        // It implies that SYSTEM tables are not created and hence we shouldn't provide a connection
        AccessDeniedException ade = new AccessDeniedException("Insufficient permissions to create SYSTEM namespace and SYSTEM Tables");
        initializationException = ServerUtil.parseServerException(ade);
    } else {
        initializationException = e;
    }
    return null;
}
{code}

bq. If I can find the time, I'd like to attempt to create a patch, but I'd like 
to get community input first on the desired means to fix this.
Thanks [~slamendola2_bloomberg] for showing interest. I think a test case 
reproducing the issue would be a good start.

> Test existence of SYSTEM:CATALOG before attempting to create it
> ---
>
> Key: PHOENIX-4681
> URL: https://issues.apache.org/jira/browse/PHOENIX-4681
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Salvatore LaMendola
>Priority: Minor
>
> Getting WARN stacktraces similar to the one below when starting SQLLine after 
> enabling Phoenix Namespace support. After speaking with [~elserj], it became 
> apparent this is a bug that may have already been fixed. However, [~smayani] 
> and I were unable to find an existing JIRA for this issue. I propose 
> performing a test for the existence of {{SYSTEM:CATALOG}} before attempting 
> to create it, so that this stacktrace isn't printed at each startup (until 
> someone finally caves and applies {{CREATE}} permissions on the entire 
> {{@SYSTEM}} namespace). If I can find the time, I'd like to attempt to create 
> a patch, but I'd like to get community input first on the desired means to 
> fix this.
> {code:java}
> 18/03/29 19:29:21 WARN ipc.CoprocessorRpcChannel: Call failed on IOException
> org.apache.hadoop.hbase.security.AccessDeniedException: 
> org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions (user=use...@example.com, scope=SYSTEM, 
> params=[namespace=SYSTEM,table=SYSTEM:CATALOG],action=CREATE)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.requireNamespacePermission(AccessController.java:628)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.preCreateTable(AccessController.java:996)
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.preCreateTable(PhoenixAccessController.java:152)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$2.call(PhoenixMetaDataCoprocessorHost.java:167)
>   at 

[jira] [Resolved] (PHOENIX-4692) ArrayIndexOutOfBoundsException in ScanRanges.intersectScan

2018-05-24 Thread Maryann Xue (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maryann Xue resolved PHOENIX-4692.
--
Resolution: Fixed

> ArrayIndexOutOfBoundsException in ScanRanges.intersectScan
> --
>
> Key: PHOENIX-4692
> URL: https://issues.apache.org/jira/browse/PHOENIX-4692
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4692-IT.patch, PHOENIX-4692_v1.patch, 
> PHOENIX-4692_v2.patch
>
>
> ScanRanges.intersectScan may fail with AIOOBE if a salted table is used.
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 1
>   at org.apache.phoenix.util.ScanUtil.getKey(ScanUtil.java:333)
>   at org.apache.phoenix.util.ScanUtil.getMinKey(ScanUtil.java:317)
>   at 
> org.apache.phoenix.compile.ScanRanges.intersectScan(ScanRanges.java:371)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:1074)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:631)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.(BaseResultIterators.java:501)
>   at 
> org.apache.phoenix.iterate.ParallelIterators.(ParallelIterators.java:62)
>   at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:274)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:364)
>   at 
> org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:234)
>   at 
> org.apache.phoenix.execute.DelegateQueryPlan.iterator(DelegateQueryPlan.java:144)
>   at 
> org.apache.phoenix.execute.DelegateQueryPlan.iterator(DelegateQueryPlan.java:139)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:314)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:293)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:292)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:285)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:1798)
> {noformat}
> Script to reproduce:
> {noformat}
> CREATE TABLE TEST (PK1 INTEGER NOT NULL, PK2 INTEGER NOT NULL,  ID1 INTEGER, 
> ID2 INTEGER CONSTRAINT PK PRIMARY KEY(PK1 , PK2))SALT_BUCKETS = 4;
> upsert into test values (1,1,1,1);
> upsert into test values (2,2,2,2);
> upsert into test values (2,3,1,2);
> create view TEST_VIEW as select * from TEST where PK1 in (1,2);
> CREATE INDEX IDX_VIEW ON TEST_VIEW (ID1);
>   select /*+ INDEX(TEST_VIEW IDX_VIEW) */ * from TEST_VIEW where ID1 = 1  
> ORDER BY ID2 LIMIT 500 OFFSET 0;
> {noformat}
> That happens because we have a point lookup optimization which reduces the 
> RowKeySchema to a single field, while we have more than one slot due to salting. 
> [~jamestaylor], can you please take a look? I'm not sure whether it should be 
> fixed at the ScanUtil level or whether we just should not use point lookup in 
> such cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4402) Migrate Tephra to 0.14.0-incubating-SNAPSHOT

2018-05-24 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-4402.

Resolution: Duplicate

> Migrate Tephra to 0.14.0-incubating-SNAPSHOT
> 
>
> Key: PHOENIX-4402
> URL: https://issues.apache.org/jira/browse/PHOENIX-4402
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Ankit Singhal
>Priority: Major
>
> As part of TEPHRA-236, the Table interface is used instead of HTableInterface, 
> so things should work with 0.14.0-incubating-SNAPSHOT; we can move to that 
> version for now and later change it to the released version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4402) Migrate Tephra to 0.14.0-incubating-SNAPSHOT

2018-05-24 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489854#comment-16489854
 ] 

Ankit Singhal commented on PHOENIX-4402:


PHOENIX-4580 is a duplicate of this one; as we are making progress there, I'm 
closing this.

> Migrate Tephra to 0.14.0-incubating-SNAPSHOT
> 
>
> Key: PHOENIX-4402
> URL: https://issues.apache.org/jira/browse/PHOENIX-4402
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Ankit Singhal
>Priority: Major
>
> As part of TEPHRA-236, the Table interface is used instead of HTableInterface, 
> so things should work with 0.14.0-incubating-SNAPSHOT; we can move to that 
> version for now and later change it to the released version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-2896) Support encoded column qualifiers per column family

2018-05-24 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489685#comment-16489685
 ] 

Thomas D'Silva commented on PHOENIX-2896:
-

[~samarthjain]

Never mind, I should have read your comment more closely. I get it now, thanks!

> Support encoded column qualifiers per column family 
> 
>
> Key: PHOENIX-2896
> URL: https://issues.apache.org/jira/browse/PHOENIX-2896
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Thomas D'Silva
>Assignee: Samarth Jain
>Priority: Major
> Fix For: 4.10.0
>
>
> This allows us to reduce the number of null values in the stored array that 
> contains all columns for a given column family for the 
> COLUMNS_STORED_IN_SINGLE_CELL storage scheme.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4540) Client side evaluation of group by Expression in projection gives erroneous result

2018-05-24 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489508#comment-16489508
 ] 

Ankit Singhal commented on PHOENIX-4540:


ping [~jamestaylor]

> Client side evaluation of group by Expression in projection gives erroneous 
> result
> --
>
> Key: PHOENIX-4540
> URL: https://issues.apache.org/jira/browse/PHOENIX-4540
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Priority: Major
> Attachments: PHOENIX-4540.patch, PHOENIX-4540_unittest.patch
>
>
> If the columns involved in a projected expression are not present in the "group 
> by" clause, the client-side evaluation of that expression will give an erroneous 
> result because the involved column values are absent.
> The following queries will produce a wrong result:
> >select round(k/v,0) x from round_test group by x,v 
> >select k/v x from round_test group by x,v 
> but the query runs fine if we add all columns so that the client expression can 
> be evaluated:
> >select round(k/v,0) x from round_test group by x,k,v //will produce right 
> >result
> >select k/v x from round_test group by x,k,v; 
> Why do we need to re-evaluate the expression here? Can't we use the same result 
> evaluated on the server side during the "group by"?
> Thoughts, [~jamestaylor]?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4749) Support impersonation without SPNEGO authn via PQS with Kerberized HBase

2018-05-24 Thread Alex Araujo (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489499#comment-16489499
 ] 

Alex Araujo commented on PHOENIX-4749:
--

The patch, description and PR have been updated.

> Support impersonation without SPNEGO authn via PQS with Kerberized HBase
> 
>
> Key: PHOENIX-4749
> URL: https://issues.apache.org/jira/browse/PHOENIX-4749
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4749.patch
>
>
> Phoenix Query Server only supports SPNEGO auth (Kerberos) with impersonation.
> Allow other authentication methods to be used with impersonation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4749) Support impersonation without SPNEGO authn via PQS with Kerberized HBase

2018-05-24 Thread Alex Araujo (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Araujo updated PHOENIX-4749:
-
Attachment: (was: PHOENIX-4749.patch)

> Support impersonation without SPNEGO authn via PQS with Kerberized HBase
> 
>
> Key: PHOENIX-4749
> URL: https://issues.apache.org/jira/browse/PHOENIX-4749
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4749.patch
>
>
> Phoenix Query Server only supports SPNEGO auth (Kerberos) with impersonation.
> Allow other authentication methods to be used with impersonation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4749) Support impersonation without SPNEGO authn via PQS with Kerberized HBase

2018-05-24 Thread Alex Araujo (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Araujo updated PHOENIX-4749:
-
Attachment: PHOENIX-4749.patch

> Support impersonation without SPNEGO authn via PQS with Kerberized HBase
> 
>
> Key: PHOENIX-4749
> URL: https://issues.apache.org/jira/browse/PHOENIX-4749
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4749.patch
>
>
> Phoenix Query Server only supports SPNEGO auth (Kerberos) with impersonation.
> Allow other authentication methods to be used with impersonation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4749) Support impersonation without SPNEGO authn via PQS with Kerberized HBase

2018-05-24 Thread Alex Araujo (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Araujo updated PHOENIX-4749:
-
Description: 
Phoenix Query Server only supports SPNEGO auth (Kerberos) with impersonation.

Allow other authentication methods to be used with impersonation.

  was:
Phoenix Query Server forces SPNEGO auth (Kerberos) for clients when Kerberos 
auth is enabled for HBase.

Client authentication should be decoupled from HBase authentication. This would 
allow for other client authentication mechanisms to be plugged in when Kerberos 
is used for HBase.


> Support impersonation without SPNEGO authn via PQS with Kerberized HBase
> 
>
> Key: PHOENIX-4749
> URL: https://issues.apache.org/jira/browse/PHOENIX-4749
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4749.patch
>
>
> Phoenix Query Server only supports SPNEGO auth (Kerberos) with impersonation.
> Allow other authentication methods to be used with impersonation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4723) Fix UpsertSelectOverlappingBatchesIT#testSplitDuringUpsertSelect test

2018-05-24 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4723:
---
Affects Version/s: 5.0.0

> Fix UpsertSelectOverlappingBatchesIT#testSplitDuringUpsertSelect test
> -
>
> Key: PHOENIX-4723
> URL: https://issues.apache.org/jira/browse/PHOENIX-4723
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4723.patch
>
>
> {code}
> Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException:
> org.apache.hadoop.hbase.client.DoNotRetryRegionException: 
> 7f0323048829d4823010044f16b7a191 is not OPEN
>   at 
> org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.checkOnline(AbstractStateMachineTableProcedure.java:193)
>   at 
> org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.(SplitTableRegionProcedure.java:112)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.createSplitProcedure(AssignmentManager.java:773)
>   at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1622)
>   at 
> org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:131)
>   at org.apache.hadoop.hbase.master.HMaster.splitRegion(HMaster.java:1614)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.splitRegion(MasterRpcServices.java:775)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:131)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4723) Fix UpsertSelectOverlappingBatchesIT#testSplitDuringUpsertSelect test

2018-05-24 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-4723.

   Resolution: Fixed
Fix Version/s: (was: 4.15.0)
   5.0.0

> Fix UpsertSelectOverlappingBatchesIT#testSplitDuringUpsertSelect test
> -
>
> Key: PHOENIX-4723
> URL: https://issues.apache.org/jira/browse/PHOENIX-4723
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4723.patch
>
>
> {code}
> Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException:
> org.apache.hadoop.hbase.client.DoNotRetryRegionException: 
> 7f0323048829d4823010044f16b7a191 is not OPEN
>   at 
> org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.checkOnline(AbstractStateMachineTableProcedure.java:193)
>   at 
> org.apache.hadoop.hbase.master.assignment.SplitTableRegionProcedure.(SplitTableRegionProcedure.java:112)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.createSplitProcedure(AssignmentManager.java:773)
>   at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1622)
>   at 
> org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:131)
>   at org.apache.hadoop.hbase.master.HMaster.splitRegion(HMaster.java:1614)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.splitRegion(MasterRpcServices.java:775)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:131)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4749) Support impersonation without SPNEGO authn via PQS with Kerberized HBase

2018-05-24 Thread Alex Araujo (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489406#comment-16489406
 ] 

Alex Araujo commented on PHOENIX-4749:
--

Good call. Will update the description in the patch shortly.

> Support impersonation without SPNEGO authn via PQS with Kerberized HBase
> 
>
> Key: PHOENIX-4749
> URL: https://issues.apache.org/jira/browse/PHOENIX-4749
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4749.patch
>
>
> Phoenix Query Server forces SPNEGO auth (Kerberos) for clients when Kerberos 
> auth is enabled for HBase.
> Client authentication should be decoupled from HBase authentication. This 
> would allow for other client authentication mechanisms to be plugged in when 
> Kerberos is used for HBase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-1567) Publish Phoenix-Client & Phoenix-Server jars into Maven Repo

2018-05-24 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489361#comment-16489361
 ] 

Josh Elser commented on PHOENIX-1567:
-

[~tony-kerz], phoenix-core was (unnecessarily) an uber-jar in all Phoenix 
releases up until PHOENIX-4706. It is not, nor has it been in recent years, 
as large as the phoenix-client jar.

> Publish Phoenix-Client & Phoenix-Server jars into Maven Repo
> 
>
> Key: PHOENIX-1567
> URL: https://issues.apache.org/jira/browse/PHOENIX-1567
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Jeffrey Zhong
>Priority: Major
>
> Phoenix doesn't publish the Phoenix Client & Server jars into the Maven 
> repository. This makes things quite hard for downstream projects/applications 
> that use Maven to resolve dependencies.
> I tried to modify the pom.xml under phoenix-assembly, but it shows the 
> following. 
> {noformat}
> [INFO] Installing 
> /Users/jzhong/work/phoenix_apache/checkins/phoenix/phoenix-assembly/target/phoenix-4.3.0-SNAPSHOT-client.jar
>  
> to 
> /Users/jzhong/.m2/repository/org/apache/phoenix/phoenix-assembly/4.3.0-SNAPSHOT/phoenix-assembly-4.3.0-SNAPSHOT-client.jar
> {noformat}
> Basically the jar published to the Maven repo will become 
> phoenix-assembly-4.3.0-SNAPSHOT-client.jar or 
> phoenix-assembly-4.3.0-SNAPSHOT-server.jar, because the artifact id 
> "phoenix-assembly" has to be the prefix of the jar names.
> Therefore, the possible solutions are:
> 1) Rename the current client & server jars to phoenix-assembly-client/server.jar 
> to match the jars published to the Maven repo.
> 2) Rename phoenix-assembly to something more meaningful and rename our client 
> & server jars accordingly.
> 3) Split phoenix-assembly and move the corresponding artifacts into 
> phoenix-client & phoenix-server folders. Phoenix-assembly will only create 
> tarball files.
> [~giacomotaylor], [~apurtell] or other Maven experts: any suggestions on this? 
> Thanks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4749) Support impersonation without SPNEGO authn via PQS with Kerberized HBase

2018-05-24 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4749:

Summary: Support impersonation without SPNEGO authn via PQS with Kerberized 
HBase  (was: Allow SPNEGO to be disabled for client auth when using Kerberos 
with HBase)

> Support impersonation without SPNEGO authn via PQS with Kerberized HBase
> 
>
> Key: PHOENIX-4749
> URL: https://issues.apache.org/jira/browse/PHOENIX-4749
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4749.patch
>
>
> Phoenix Query Server forces SPNEGO auth (Kerberos) for clients when Kerberos 
> auth is enabled for HBase.
> Client authentication should be decoupled from HBase authentication. This 
> would allow for other client authentication mechanisms to be plugged in when 
> Kerberos is used for HBase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4749) Allow SPNEGO to be disabled for client auth when using Kerberos with HBase

2018-05-24 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489325#comment-16489325
 ] 

Josh Elser commented on PHOENIX-4749:
-

Got it, thanks for the description. Let me update the title.

> Allow SPNEGO to be disabled for client auth when using Kerberos with HBase
> --
>
> Key: PHOENIX-4749
> URL: https://issues.apache.org/jira/browse/PHOENIX-4749
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4749.patch
>
>
> Phoenix Query Server forces SPNEGO auth (Kerberos) for clients when Kerberos 
> auth is enabled for HBase.
> Client authentication should be decoupled from HBase authentication. This 
> would allow for other client authentication mechanisms to be plugged in when 
> Kerberos is used for HBase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4749) Allow SPNEGO to be disabled for client auth when using Kerberos with HBase

2018-05-24 Thread Alex Araujo (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489312#comment-16489312
 ] 

Alex Araujo commented on PHOENIX-4749:
--

[~elserj], this patch allows impersonation to be used when SPNEGO is disabled 
for client requests and Kerberos is used for server requests. It basically 
addresses this point from PHOENIX-3686:

> allowing *anyone* into your HBase instance as the PQS Kerberos user
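
For readers less familiar with how impersonation works underneath, the sketch below 
shows the general pattern using the standard Hadoop UserGroupInformation API: a 
service process that is logged in with its own Kerberos credentials opens the 
connection as a proxy for the already-authenticated end user. This is only an 
illustration of the mechanism, not the PHOENIX-4749 patch itself; the class name, 
end-user name, and JDBC URL are placeholders.

{code:java}
import java.security.PrivilegedExceptionAction;
import java.sql.Connection;
import java.sql.DriverManager;

import org.apache.hadoop.security.UserGroupInformation;

// Illustrative sketch (not the PHOENIX-4749 patch): the service, already logged in
// via its own Kerberos keytab, opens the connection on behalf of the end user.
public class ImpersonationSketch {
    public static Connection connectAs(String endUser, String jdbcUrl) throws Exception {
        // Credentials of the service itself (e.g. obtained from its keytab login).
        UserGroupInformation realUser = UserGroupInformation.getLoginUser();
        // Proxy identity for the authenticated end user.
        UserGroupInformation proxyUser = UserGroupInformation.createProxyUser(endUser, realUser);
        // Open the JDBC connection while acting as the proxy user.
        return proxyUser.doAs(
                (PrivilegedExceptionAction<Connection>) () -> DriverManager.getConnection(jdbcUrl));
    }
}
{code}

Which end users a service principal may impersonate is normally governed by the 
standard hadoop.proxyuser.<service-user>.hosts and 
hadoop.proxyuser.<service-user>.groups settings in core-site.xml; that part is 
general Hadoop behavior rather than anything specific to this patch.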

> Allow SPNEGO to be disabled for client auth when using Kerberos with HBase
> --
>
> Key: PHOENIX-4749
> URL: https://issues.apache.org/jira/browse/PHOENIX-4749
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4749.patch
>
>
> Phoenix Query Server forces SPNEGO auth (Kerberos) for clients when Kerberos 
> auth is enabled for HBase.
> Client authentication should be decoupled from HBase authentication. This 
> would allow for other client authentication mechanisms to be plugged in when 
> Kerberos is used for HBase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4749) Allow SPNEGO to be disabled for client auth when using Kerberos with HBase

2018-05-24 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489178#comment-16489178
 ] 

Josh Elser commented on PHOENIX-4749:
-

[~alexaraujo], what are you doing differently than was already done in 
PHOENIX-3686?

> Allow SPNEGO to be disabled for client auth when using Kerberos with HBase
> --
>
> Key: PHOENIX-4749
> URL: https://issues.apache.org/jira/browse/PHOENIX-4749
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4749.patch
>
>
> Phoenix Query Server forces SPNEGO auth (Kerberos) for clients when Kerberos 
> auth is enabled for HBase.
> Client authentication should be decoupled from HBase authentication. This 
> would allow for other client authentication mechanisms to be plugged in when 
> Kerberos is used for HBase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-2896) Support encoded column qualifiers per column family

2018-05-24 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16489134#comment-16489134
 ] 

Thomas D'Silva commented on PHOENIX-2896:
-

[~samarthjain], from an earlier comment on this Jira you said that we need to 
use a single counter so that we can guarantee that column qualifiers are unique 
across all the column families for mutable tables (which is required by 
EncodedColumnQualifierCellsList, where each column qualifier maps to an index in 
the array). But looking at the code, it looks like we have a counter per column 
family, so we could have two columns from different column families that 
have the same column qualifier.
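
To make the concern concrete, here is a toy sketch (plain Java, not Phoenix code) of 
what a per-column-family counter implies: the first column added to one family and 
the first column added to another family both receive the same encoded qualifier, so 
qualifiers are only unique within a family. The starting value and family names are 
illustrative.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Toy illustration of per-family qualifier counters (not Phoenix code).
public class PerFamilyCounterSketch {
    private static final int FIRST_QUALIFIER = 11; // illustrative starting value
    private final Map<String, Integer> countersByFamily = new HashMap<>();

    // Hands out the next qualifier for the given family, starting a fresh
    // counter the first time a family is seen.
    public int nextQualifier(String family) {
        return countersByFamily.merge(family, FIRST_QUALIFIER, (current, unused) -> current + 1);
    }

    public static void main(String[] args) {
        PerFamilyCounterSketch counters = new PerFamilyCounterSketch();
        System.out.println(counters.nextQualifier("A")); // 11
        System.out.println(counters.nextQualifier("B")); // 11 again, in a different family
    }
}
{code}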

> Support encoded column qualifiers per column family 
> 
>
> Key: PHOENIX-2896
> URL: https://issues.apache.org/jira/browse/PHOENIX-2896
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Thomas D'Silva
>Assignee: Samarth Jain
>Priority: Major
> Fix For: 4.10.0
>
>
> This allows us to reduce the number of null values in the stored array that 
> contains all columns for a given column family for the 
> COLUMNS_STORED_IN_SINGLE_CELL storage scheme.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4753) Remove the need for users to have Write access to the Phoenix SYSTEM STATS TABLE to drop tables

2018-05-24 Thread Saumil Mayani (JIRA)
Saumil Mayani created PHOENIX-4753:
--

 Summary: Remove the need for users to have Write access to the 
Phoenix SYSTEM STATS TABLE to drop tables
 Key: PHOENIX-4753
 URL: https://issues.apache.org/jira/browse/PHOENIX-4753
 Project: Phoenix
  Issue Type: Bug
Reporter: Saumil Mayani


Problem statement:
With [PHOENIX-4198|https://issues.apache.org/jira/browse/PHOENIX-4198] a user 
only needs RX permissions on the SYSTEM CATALOG table; however, the user is still 
required to have WRITE permission on the SYSTEM STATS table when performing a 
drop operation on a table. This is a security concern, as such users can 
create/alter/drop/corrupt STATS data of any other table without proper access to 
the corresponding physical tables.

STEPS TO REPRODUCE:

1. Set the following properties in hbase-site.xml:

 
{code:java}
# File: hbase-site.xml
 
# Properties=value
hbase.security.authorization=true
hbase.coprocessor.master.classes=org.apache.hadoop.hbase.security.access.AccessController
hbase.coprocessor.region.classes=org.apache.hadoop.hbase.security.access.AccessController,
org.apache.hadoop.hbase.security.token.TokenProvider,
org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
hbase.coprocessor.regionserver.classes=org.apache.hadoop.hbase.security.access.AccessController
phoenix.acls.enabled=true
phoenix.schema.isNamespaceMappingEnabled=true
phoenix.schema.mapSystemTablesToNamespace=true
{code}
 

2. Grant READ permission on the SYSTEM namespace and RWXCA on the user namespace 
to the user:

 
{code:java}
# Example: user01t01 belong to tenant01
 
# Grant a user read permission to "SYSTEM" Namespace
> grant 'user01t01', 'RX' , '@SYSTEM'
 
# Grant respective 'RWXCA' [READ('R'), WRITE('W'), EXEC('X'),
CREATE('C'), ADMIN('A')] permissions on user namespace
> grant 'user01t01', 'RWXCA' , '@TENANT01'
{code}
 

3. Log in as 'user01t01' and perform the following operations: create a table, 
add data, update statistics, and drop the table.

 
{code:java}
# Login as the user 'user01t01'
kinit user01t01

# create table under namespace / schema tenant01
create table tenant01.test (mykey integer not null primary key, mycolumn 
varchar);

# Insert some data
upsert into tenant01.test values (1,'Hello');
upsert into tenant01.test values (2,'World!');

# select / read back the data inserted.
select * from tenant01.test;

# check if the STATS table has information for "tenant01.test"
select * from SYSTEM.STATS where PHYSICAL_NAME='TENANT01:TEST';

# If no record in SYSTEM.STATS, update stats.
update statistics tenant01.test;

# Drop the table
drop table tenant01.test;
{code}
 

 

The following error gets reported; although the table is dropped from the 
SYSTEM:CATALOG table, the record still exists in the SYSTEM:STATS table.

 
{code:java}
Error: org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
permissions (user=user01...@example.com, scope=SYSTEM:STATS, family=0:, 
params=[table=SYSTEM:STATS,family=0:],action=WRITE)
at 
org.apache.hadoop.hbase.security.access.AccessController.preDelete(AccessController.java:1701)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$33.call(RegionCoprocessorHost.java:941)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1660)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1734)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1692)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preDelete(RegionCoprocessorHost.java:937)
at 
org.apache.hadoop.hbase.regionserver.HRegion.doPreMutationHook(HRegion.java:3055)
at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3019)
at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2965)
at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.commitBatch(UngroupedAggregateRegionObserver.java:225)
at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.commit(UngroupedAggregateRegionObserver.java:764)
at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:667)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:237)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1301)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1660)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1734)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1699)
at 

[jira] [Commented] (PHOENIX-3506) Phoenix-Spark plug in cannot select by column family name

2018-05-24 Thread Tang Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488972#comment-16488972
 ] 

Tang Yan commented on PHOENIX-3506:
---

I'm facing the same issue. Is there any solution for it?

> Phoenix-Spark plug in cannot select by column family name
> -
>
> Key: PHOENIX-3506
> URL: https://issues.apache.org/jira/browse/PHOENIX-3506
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Xindian Long
>Priority: Major
>
> I have a table with multiple column families with possibly the same column names.
> I want to use the phoenix-spark plugin to select some of the fields, but it 
> returns an AnalysisException (details in the attached file).
> It works when no column family is specified, but I expect that I should not need 
> to make sure column names are unique across different column families.
> I used the following code:
> 
> public void testSpark(JavaSparkContext sc, String tableStr, String dataSrcUrl) {
>     // SparkContextBuilder.buildSparkContext("Simple Application", "local");
>     // One JVM can only have one Spark Context now
>     Map<String, String> options = new HashMap<>();
>     SQLContext sqlContext = new SQLContext(sc);
>     options.put("zkUrl", dataSrcUrl);
>     options.put("table", tableStr);
>     log.info("Phoenix DB URL: " + dataSrcUrl + " tableStr: " + tableStr);
>     DataFrame df = null;
>     try {
>         df = sqlContext.read().format("org.apache.phoenix.spark").options(options).load();
>         df.explain(true);
>         df.show();
>         df = df.select("I.CI", "I.FA");
>         // df = df.select("\"I\".\"CI\"", "\"I\".\"FA\""); // This gives the same exception too
>     } catch (Exception ex) {
>         log.error("sql error: ", ex);
>     }
>     try {
>         log.info("Count By phoenix spark plugin: " + df.count());
>     } catch (Exception ex) {
>         log.error("dataframe error: ", ex);
>     }
> }
>  -
>  
> I can see in the log that there is something like
>  
> 10728 [INFO] main  org.apache.phoenix.mapreduce.PhoenixInputFormat  - Select 
> Statement: SELECT 
> "RID","I"."CI","I"."FA","I"."FPR","I"."FPT","I"."FR","I"."LAT","I"."LNG","I"."NCG","I"."NGPD","I"."VE","I"."VMJ","I"."VMR","I"."VP","I"."CSRE","I"."VIB","I"."IIICS","I"."LICSCD","I"."LEDC","I"."ARM","I"."FBM","I"."FTB","I"."NA2FR","I"."NA2PT","S"."AHDM","S"."ARTJ","S"."ATBM","S"."ATBMR","S"."ATBR","S"."ATBRR","S"."CS","S"."LAMT","S"."LTFCT","S"."LBMT","S"."LDTI","S"."LMT","S"."LMTN","S"."LMTR","S"."LPET","S"."LPORET","S"."LRMT","S"."LRMTP","S"."LRMTR","S"."LSRT","S"."LSST","S"."MHDMS0","S"."MHDMS1","S"."RFD","S"."RRN","S"."RRR","S"."TD","S"."TSM","S"."TC","S"."TPM","S"."LRMCT","S"."SS13FSK34","S"."LERMT","S"."LEMDMT","S"."AGTBRE","S"."SRM","S"."LTET","S"."TPMS","S"."TPMSM","S"."TM","S"."TMF","S"."TMFM","S"."NA2TLS","S"."NA2IT","S"."CWR","S"."BPR","S"."LR","S"."HLB","S"."NA2UFTBFR","S"."DT","S"."NA28ARE","S"."RM","S"."LMTB","S"."LRMTB","S"."RRB","P"."BADUC","P"."UAN","P"."BAPS","P"."BAS","P"."UAS","P"."BATBBR","P"."BBRI","P"."BLBR","P"."ULHT","P"."BLPST","P"."BLPT","P"."UTI","P"."UUC"
>  FROM TESTING.ENDPOINTS
>  
> But obviously, the column family is left out of the DataFrame column name 
> somewhere in the process.
> We need a fix that allows selecting by ColumnFamilyName.ColumnQualifier.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4749) Allow SPNEGO to be disabled for client auth when using Kerberos with HBase

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488929#comment-16488929
 ] 

ASF GitHub Bot commented on PHOENIX-4749:
-

GitHub user aaraujo opened a pull request:

https://github.com/apache/phoenix/pull/302

PHOENIX-4749 Allow SPNEGO to be disabled when using Kerberos

https://issues.apache.org/jira/browse/PHOENIX-4749

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/aaraujo/phoenix PHOENIX-4749

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/302.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #302


commit 3c6fde66974a5ef3ccbe285972f69b057b159eac
Author: Alex Araujo 
Date:   2018-05-23T15:28:48Z

PHOENIX-4749 Allow SPNEGO to be disabled when using Kerberos




> Allow SPNEGO to be disabled for client auth when using Kerberos with HBase
> --
>
> Key: PHOENIX-4749
> URL: https://issues.apache.org/jira/browse/PHOENIX-4749
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4749.patch
>
>
> Phoenix Query Server forces SPNEGO auth (Kerberos) for clients when Kerberos 
> auth is enabled for HBase.
> Client authentication should be decoupled from HBase authentication. This 
> would allow for other client authentication mechanisms to be plugged in when 
> Kerberos is used for HBase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] phoenix issue #302: PHOENIX-4749 Allow SPNEGO to be disabled when using Kerb...

2018-05-24 Thread aaraujo
Github user aaraujo commented on the issue:

https://github.com/apache/phoenix/pull/302
  
@joshelser created a PR in case it's easier to review


---


[jira] [Commented] (PHOENIX-4749) Allow SPNEGO to be disabled for client auth when using Kerberos with HBase

2018-05-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488932#comment-16488932
 ] 

ASF GitHub Bot commented on PHOENIX-4749:
-

Github user aaraujo commented on the issue:

https://github.com/apache/phoenix/pull/302
  
@joshelser created a PR in case it's easier to review


> Allow SPNEGO to be disabled for client auth when using Kerberos with HBase
> --
>
> Key: PHOENIX-4749
> URL: https://issues.apache.org/jira/browse/PHOENIX-4749
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4749.patch
>
>
> Phoenix Query Server forces SPNEGO auth (Kerberos) for clients when Kerberos 
> auth is enabled for HBase.
> Client authentication should be decoupled from HBase authentication. This 
> would allow for other client authentication mechanisms to be plugged in when 
> Kerberos is used for HBase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] phoenix pull request #302: PHOENIX-4749 Allow SPNEGO to be disabled when usi...

2018-05-24 Thread aaraujo
GitHub user aaraujo opened a pull request:

https://github.com/apache/phoenix/pull/302

PHOENIX-4749 Allow SPNEGO to be disabled when using Kerberos

https://issues.apache.org/jira/browse/PHOENIX-4749

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/aaraujo/phoenix PHOENIX-4749

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/302.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #302


commit 3c6fde66974a5ef3ccbe285972f69b057b159eac
Author: Alex Araujo 
Date:   2018-05-23T15:28:48Z

PHOENIX-4749 Allow SPNEGO to be disabled when using Kerberos




---


[jira] [Commented] (PHOENIX-4671) Fix minor size accounting bug for MutationSize

2018-05-24 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488799#comment-16488799
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-4671:
--

Pushed this to 5.x branch.

> Fix minor size accounting bug for MutationSize
> --
>
> Key: PHOENIX-4671
> URL: https://issues.apache.org/jira/browse/PHOENIX-4671
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Minor
> Fix For: 4.14.0, 5.0.0
>
> Attachments: 4671-v2.txt, 4671.txt
>
>
> Just ran into a bug where UPSERT INTO table ... SELECT ... FROM table would 
> fail due to "Error: ERROR 730 (LIM02): MutationState size is bigger than 
> maximum allowed number of bytes (state=LIM02,code=730)" even with auto commit 
> on.
> I ran it through a debugger; it's just a simple accounting bug.
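
For context, a minimal sketch of the kind of statement the report describes is shown 
below; the JDBC URL and table/column names are placeholders. With auto-commit on, 
rows are committed as the UPSERT ... SELECT runs, which is why hitting the 
client-side MutationState size limit here pointed at an accounting bug rather than a 
genuinely oversized batch.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Minimal sketch of the reported scenario (URL and table name are placeholders).
public class UpsertSelectSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            conn.setAutoCommit(true); // commit as rows are processed
            try (Statement stmt = conn.createStatement()) {
                stmt.executeUpdate("UPSERT INTO MY_TABLE (PK, VAL) SELECT PK, VAL FROM MY_TABLE");
            }
        }
    }
}
{code}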



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-2896) Support encoded column qualifiers per column family

2018-05-24 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488565#comment-16488565
 ] 

Samarth Jain commented on PHOENIX-2896:
---

[~tdsilva] - I am not sure I understood your question. We use the default 
family name for tracking column qualifier counters for mutable tables. 
{code:java}
if (immutableStorageScheme == SINGLE_CELL_ARRAY_WITH_OFFSETS && encodingScheme != NON_ENCODED_QUALIFIERS) {
    // For this scheme we track column qualifier counters at the column family level.
    cqCounterFamily = colDefFamily != null ? colDefFamily : (defaultFamilyName != null ? defaultFamilyName : DEFAULT_COLUMN_FAMILY);
} else {
    // For other schemes, column qualifier counters are tracked using the default column family.
    cqCounterFamily = defaultFamilyName != null ? defaultFamilyName : DEFAULT_COLUMN_FAMILY;
}
{code}

> Support encoded column qualifiers per column family 
> 
>
> Key: PHOENIX-2896
> URL: https://issues.apache.org/jira/browse/PHOENIX-2896
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Thomas D'Silva
>Assignee: Samarth Jain
>Priority: Major
> Fix For: 4.10.0
>
>
> This allows us to reduce the number of null values in the stored array that 
> contains all columns for a given column family for the 
> COLUMNS_STORED_IN_SINGLE_CELL storage scheme.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4610) Converge 4.x and 5.x branches

2018-05-24 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488554#comment-16488554
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-4610:
--

PHOENIX-4671 is missing in 5.x branch. Going to commit it.

> Converge 4.x and 5.x branches
> -
>
> Key: PHOENIX-4610
> URL: https://issues.apache.org/jira/browse/PHOENIX-4610
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.0.0
>
>
> We have quite a few improvements which have landed on the 4.x branches but 
> have missed the 5.x branch due to its earlier instability. Rajeshbabu 
> volunteered to me offline to take on this onerous task.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)