[jira] [Commented] (PHOENIX-4553) HBase Master could not start with activated APACHE_PHOENIX parcel

2018-01-24 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338576#comment-16338576
 ] 

James Taylor commented on PHOENIX-4553:
---

FYI, [~pboado]. Issue in our recent release?

> HBase Master could not start with activated APACHE_PHOENIX parcel
> -
>
> Key: PHOENIX-4553
> URL: https://issues.apache.org/jira/browse/PHOENIX-4553
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2-cdh5.11.2
> Environment: CDH 5.11.2
> Apache phoenix 4.13.2-cdh5.11.2
>Reporter: Ihor Krysenko
>Priority: Major
>
> After activating the parcel, the HBase Master and RegionServers could not 
> start. There appear to be problems with the shaded thin client: if it is 
> removed from the parcel, everything works fine.
> Please help.
> I think this [GitHub 
> commit|https://github.com/apache/phoenix/commit/e2c06b06fa1800b532e5d1ffa6f6ef8796cef213#diff-97e88e321a719f4389a2aa0e26fd0c8f]
>  has a bearing on this bug.
> Below is the startup log for the HBase Master:
> {code:java}
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Exception in thread "RpcServer.reader=1,bindAddress=0.0.0.0,port=6" java.util.ServiceConfigurationError: org.apache.hadoop.security.SecurityInfo: Provider org.apache.phoenix.shaded.org.apache.hadoop.security.AnnotatedSecurityInfo not a subtype
>   at java.util.ServiceLoader.fail(ServiceLoader.java:231)
>   at java.util.ServiceLoader.access$300(ServiceLoader.java:181)
>   at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:369)
>   at java.util.ServiceLoader$1.next(ServiceLoader.java:445)
>   at org.apache.hadoop.security.SecurityUtil.getKerberosInfo(SecurityUtil.java:333)
>   at org.apache.hadoop.security.authorize.ServiceAuthorizationManager.authorize(ServiceAuthorizationManager.java:101)
>   at org.apache.hadoop.hbase.ipc.RpcServer.authorize(RpcServer.java:2347)
>   at org.apache.hadoop.hbase.ipc.RpcServer$Connection.authorizeConnection(RpcServer.java:1898)
>   at org.apache.hadoop.hbase.ipc.RpcServer$Connection.processOneRpc(RpcServer.java:1772)
>   at org.apache.hadoop.hbase.ipc.RpcServer$Connection.saslReadAndProcess(RpcServer.java:1335)
>   at org.apache.hadoop.hbase.ipc.RpcServer$Connection.process(RpcServer.java:1614)
>   at org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1596)
>   at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:854)
>   at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:635)
>   at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:611)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745){code}
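The {{ServiceConfigurationError}} above ("Provider ... AnnotatedSecurityInfo not a subtype") indicates that a shaded jar registers a relocated SecurityInfo provider that Hadoop's ServiceLoader then rejects. A rough way to check which parcel jars register such a provider (an illustrative sketch, not part of the issue; the parcel path is taken from the log above):

```shell
# Sketch: list the parcel jars that register a ServiceLoader provider for
# org.apache.hadoop.security.SecurityInfo. Adjust PARCEL to your install.
PARCEL=/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix
for jar in "$PARCEL"/*.jar; do
  if unzip -l "$jar" 2>/dev/null \
      | grep -q 'META-INF/services/org.apache.hadoop.security.SecurityInfo'; then
    echo "provider registered in: $jar"
  fi
done
```

Any jar other than the unshaded Hadoop ones that ships such a services entry is a candidate for the classloading clash described in this issue.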



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4556) Sync branch 4.x-cdh5.11.2

2018-01-24 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4556:
-
Description: Syncing 4.x-cdh5.11.2 with master - it was quite behind -  and 
version up to 4.14 .  (was: Syncing 4.x-cdh5.11.2 with master - it was quite 
behind -  .)

> Sync branch 4.x-cdh5.11.2
> -
>
> Key: PHOENIX-4556
> URL: https://issues.apache.org/jira/browse/PHOENIX-4556
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: verify
>Reporter: Pedro Boado
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4556-patch.tar.gz
>
>
> Syncing 4.x-cdh5.11.2 with master - it was quite behind -  and version up to 
> 4.14 .





[jira] [Commented] (PHOENIX-4556) Sync branch 4.x-cdh5.11.2

2018-01-24 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338575#comment-16338575
 ] 

Pedro Boado commented on PHOENIX-4556:
--

Can anyone please decompress and {{git am}} the attached file? 
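What a committer would run might look like the following sketch (the attachment and branch names come from this issue; the patch file layout inside the tarball is assumed):

```shell
# Sketch: unpack the attached tarball and apply the patch series with git am.
mkdir -p /tmp/phoenix-4556
tar xzf PHOENIX-4556-patch.tar.gz -C /tmp/phoenix-4556
cd phoenix                      # a local clone of apache/phoenix
git checkout 4.x-cdh5.11.2
git am /tmp/phoenix-4556/*.patch
```

`git am` preserves the original author and commit message of each patch, which is why it is preferred over plain `git apply` here.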

> Sync branch 4.x-cdh5.11.2
> -
>
> Key: PHOENIX-4556
> URL: https://issues.apache.org/jira/browse/PHOENIX-4556
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: verify
>Reporter: Pedro Boado
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4556-patch.tar.gz
>
>
> Syncing 4.x-cdh5.11.2 with master - it was quite behind -  .





[jira] [Updated] (PHOENIX-4556) Sync branch 4.x-cdh5.11.2

2018-01-24 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4556:
-
Attachment: PHOENIX-4556-patch.tar.gz

> Sync branch 4.x-cdh5.11.2
> -
>
> Key: PHOENIX-4556
> URL: https://issues.apache.org/jira/browse/PHOENIX-4556
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: verify
>Reporter: Pedro Boado
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4556-patch.tar.gz
>
>
> Syncing 4.x-cdh5.11.2 with master - it was quite behind -  .





[jira] [Updated] (PHOENIX-4556) Sync branch 4.x-cdh5.11.2

2018-01-24 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4556:
-
Summary: Sync branch 4.x-cdh5.11.2  (was: Sync branch 4.x-cdh-5.11.2)

> Sync branch 4.x-cdh5.11.2
> -
>
> Key: PHOENIX-4556
> URL: https://issues.apache.org/jira/browse/PHOENIX-4556
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: verify
>Reporter: Pedro Boado
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0
>
>
> Syncing 4.x-cdh5.11.2 with master - it was quite behind -  .





[jira] [Updated] (PHOENIX-4556) Sync branch 4.x-cdh-5.11.2

2018-01-24 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4556:
-
Description: Syncing 4.x-cdh5.11.2 with master - it was quite behind -  .  
(was: Syncing 4.x-HBase-1.2 with master (  two commits missing ) .)

> Sync branch 4.x-cdh-5.11.2
> --
>
> Key: PHOENIX-4556
> URL: https://issues.apache.org/jira/browse/PHOENIX-4556
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: verify
>Reporter: Pedro Boado
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0
>
>
> Syncing 4.x-cdh5.11.2 with master - it was quite behind -  .





[jira] [Created] (PHOENIX-4556) Sync branch 4.x-cdh-5.11.2

2018-01-24 Thread Pedro Boado (JIRA)
Pedro Boado created PHOENIX-4556:


 Summary: Sync branch 4.x-cdh-5.11.2
 Key: PHOENIX-4556
 URL: https://issues.apache.org/jira/browse/PHOENIX-4556
 Project: Phoenix
  Issue Type: Task
Affects Versions: verify
Reporter: Pedro Boado
Assignee: James Taylor
 Fix For: 4.14.0


Syncing 4.x-HBase-1.2 with master (  two commits missing ) .





[jira] [Comment Edited] (PHOENIX-4550) Allow declaration of max columns on base physical table

2018-01-24 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338493#comment-16338493
 ] 

James Taylor edited comment on PHOENIX-4550 at 1/25/18 1:06 AM:


I think there's a theoretical problem with updatable views in general. There 
could be multiple views for the same row. This is arguably a situation we may 
want to prevent, but we're not doing that today. For example, say you have the 
following hierarchy:

T (A, B, C)
V1 (D, E) FROM T WHERE A = 1
V2 (F, G) FROM T WHERE A = 1 and B = 2

The same rows in table T could be in both V1 and V2. So then T would occupy 
positions 1-3, V1 would occupy positions 4-5, and V2 would occupy positions 
6-7. Depending on which view you updated through, you'd have nulls in either 
positions 4-5 or 6-7. In cases like this, there's no advantage to having a 
mapping or declaring the max number of columns.

If we detect this and disallow it at creation time, we can pursue this JIRA. 
I've filed PHOENIX-4555 for that. In reality, we don't have use cases in which 
view rows overlap, so this is kind of a theoretical problem.

So assuming the views aren't overlapping, how would you deal with columns that 
have been dropped? Also, are you thinking to push this map through every 
SingleCellColumnExpression? Wouldn't that be expensive, especially if there are 
many columns and many column references in a query?

With the alternative, preallocating a fixed number of columns, you'd need to 
push the preallocated number plus the original starting column qualifier of a 
view to figure out the array position. The downside is that the preallocated 
columns would be wasteful.

Not sure that the map idea solves the issue of when a column is added to a base 
table since it needs to be in the same array position for all rows.


was (Author: jamestaylor):
I think there's a theoretical problem with updatable views in general. There 
could be multiple views for the same row. This is arguably a situation we may 
want to prevent, but we're not doing that today. For example, say you have the 
following hierarchy:

T (A, B, C)
V1 (D, E) FROM T WHERE A = 1
V2 (F, G) FROM T WHERE A = 1 and B = 2

The same rows in table T could be in both V1 and V2. So then T would occupy 
positions 1-3, V1 would occupy positions 4-5, and V2 would occupy positions 
6-7. Depending on which view you updated through, you'd have nulls in either 
positions 4-5 or 6-7. In cases like this, there's no advantage to having a 
mapping or declaring the max number of columns.

If we detect this and disallow it at creation time, we can pursue this JIRA. 
I'll file a separate JIRA for that. In reality, we don't have use cases in 
which view rows overlap, so this is kind of a theoretical problem.

So assuming the views aren't overlapping, how would you deal with columns that 
have been dropped? Also, are you thinking to push this map through every 
SingleCellColumnExpression? Wouldn't that be expensive, especially if there are 
many columns and many column references in a query?

With the alternative, preallocating a fixed number of columns, you'd need to 
push the preallocated number plus the original starting column qualifier of a 
view to figure out the array position. The downside is that the preallocated 
columns would be wasteful.

Not sure that the map idea solves the issue of when a column is added to a base 
table since it needs to be in the same array position for all rows.

> Allow declaration of max columns on base physical table
> ---
>
> Key: PHOENIX-4550
> URL: https://issues.apache.org/jira/browse/PHOENIX-4550
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Priority: Major
>
> By declaring the max number of columns on a base table, we can optimize the 
> storage for SINGLE_CELL_ARRAY_WITH_OFFSETS by not storing null values for the 
> columns preceding the initial column of a view. This will make a huge 
> difference in storage when you have a base table with many views. For example:
> {code}
> -- Declare that the base table will have no more than 10 columns
> CREATE IMMUTABLE TABLE base (k1 VARCHAR, prefix CHAR(3), v1 DATE,
> CONSTRAINT pk PRIMARY KEY (k1, prefix))
> MULTI_TENANT = true,
> MAX_COLUMNS = 10;
> CREATE VIEW v1(k2 VARCHAR PRIMARY KEY, v2 VARCHAR, v3 VARCHAR)
> AS SELECT * FROM base WHERE prefix = 'A00';
> CREATE VIEW v2(k2 VARCHAR PRIMARY KEY, v2 VARCHAR, v3 VARCHAR)
> AS SELECT * FROM base WHERE prefix = 'A10';
> ...
> {code}
> As the number of views grow, the difference between the base table column 
> encoding (column #1) and the starting column number of the view (since the 
> starting offset is determined by an incrementing value on the base table) 
> will increase. This bloats the storage as we 

[jira] [Created] (PHOENIX-4555) Only mark view as updatable if rows cannot overlap with other updatable views

2018-01-24 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4555:
-

 Summary: Only mark view as updatable if rows cannot overlap with 
other updatable views
 Key: PHOENIX-4555
 URL: https://issues.apache.org/jira/browse/PHOENIX-4555
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor


We'll run into issues if updatable sibling views overlap with each other. For 
example, say you have the following hierarchy:

T (A, B, C)
V1 (D, E) FROM T WHERE A = 1
V2 (F, G) FROM T WHERE A = 1 and B = 2

In this case, there's no way to update both V1 and V2 columns. Secondary 
indexes wouldn't work either if you had one on each of V1 and V2. 

We should restrict updatable views to:
- views that filter on PK column(s)
- sibling views that filter on the same set of PK column(s)





[jira] [Commented] (PHOENIX-4550) Allow declaration of max columns on base physical table

2018-01-24 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338493#comment-16338493
 ] 

James Taylor commented on PHOENIX-4550:
---

I think there's a theoretical problem with updatable views in general. There 
could be multiple views for the same row. This is arguably a situation we may 
want to prevent, but we're not doing that today. For example, say you have the 
following hierarchy:

T (A, B, C)
V1 (D, E) FROM T WHERE A = 1
V2 (F, G) FROM T WHERE A = 1 and B = 2

The same rows in table T could be in both V1 and V2. So then T would occupy 
positions 1-3, V1 would occupy positions 4-5, and V2 would occupy positions 
6-7. Depending on which view you updated through, you'd have nulls in either 
positions 4-5 or 6-7. In cases like this, there's no advantage to having a 
mapping or declaring the max number of columns.

If we detect this and disallow it at creation time, we can pursue this JIRA. 
I'll file a separate JIRA for that. In reality, we don't have use cases in 
which view rows overlap, so this is kind of a theoretical problem.

So assuming the views aren't overlapping, how would you deal with columns that 
have been dropped? Also, are you thinking to push this map through every 
SingleCellColumnExpression? Wouldn't that be expensive, especially if there are 
many columns and many column references in a query?

With the alternative, preallocating a fixed number of columns, you'd need to 
push the preallocated number plus the original starting column qualifier of a 
view to figure out the array position. The downside is that the preallocated 
columns would be wasteful.

Not sure that the map idea solves the issue of when a column is added to a base 
table since it needs to be in the same array position for all rows.

> Allow declaration of max columns on base physical table
> ---
>
> Key: PHOENIX-4550
> URL: https://issues.apache.org/jira/browse/PHOENIX-4550
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Priority: Major
>
> By declaring the max number of columns on a base table, we can optimize the 
> storage for SINGLE_CELL_ARRAY_WITH_OFFSETS by not storing null values for the 
> columns preceding the initial column of a view. This will make a huge 
> difference in storage when you have a base table with many views. For example:
> {code}
> -- Declare that the base table will have no more than 10 columns
> CREATE IMMUTABLE TABLE base (k1 VARCHAR, prefix CHAR(3), v1 DATE,
> CONSTRAINT pk PRIMARY KEY (k1, prefix))
> MULTI_TENANT = true,
> MAX_COLUMNS = 10;
> CREATE VIEW v1(k2 VARCHAR PRIMARY KEY, v2 VARCHAR, v3 VARCHAR)
> AS SELECT * FROM base WHERE prefix = 'A00';
> CREATE VIEW v2(k2 VARCHAR PRIMARY KEY, v2 VARCHAR, v3 VARCHAR)
> AS SELECT * FROM base WHERE prefix = 'A10';
> ...
> {code}
> As the number of views grow, the difference between the base table column 
> encoding (column #1) and the starting column number of the view (since the 
> starting offset is determined by an incrementing value on the base table) 
> will increase. This bloats the storage as we need to store null values for 
> column encodings between the base table column and the starting column of the 
> view.
> Instead, we'll pass through the MAX_COLUMNS value for queries; any column 
> encoding less than this we know will be at the start. Anything greater and 
> we'll start the search from  -  column encoding>.





[jira] [Assigned] (PHOENIX-4554) Sync branch 4.x-HBase-1.2

2018-01-24 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado reassigned PHOENIX-4554:


Assignee: James Taylor  (was: Pedro Boado)

> Sync branch 4.x-HBase-1.2
> -
>
> Key: PHOENIX-4554
> URL: https://issues.apache.org/jira/browse/PHOENIX-4554
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: verify
>Reporter: Pedro Boado
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: 
> 0001-PHOENIX-4437-Make-QueryPlan.getEstimatedBytesToScan-.patch, 
> 0002-PHOENIX-4488-Cache-config-parameters-for-MetaDataEnd.patch
>
>
> Syncing 4.x-HBase-1.2 with master (  two commits missing ) .





[jira] [Commented] (PHOENIX-4554) Sync branch 4.x-HBase-1.2

2018-01-24 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338491#comment-16338491
 ] 

Pedro Boado commented on PHOENIX-4554:
--

Can anyone {{git am}} these two patches, please?

> Sync branch 4.x-HBase-1.2
> -
>
> Key: PHOENIX-4554
> URL: https://issues.apache.org/jira/browse/PHOENIX-4554
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: verify
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: 
> 0001-PHOENIX-4437-Make-QueryPlan.getEstimatedBytesToScan-.patch, 
> 0002-PHOENIX-4488-Cache-config-parameters-for-MetaDataEnd.patch
>
>
> Syncing 4.x-HBase-1.2 with master (  two commits missing ) .





[jira] [Updated] (PHOENIX-4554) Sync branch 4.x-HBase-1.2

2018-01-24 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4554:
-
Attachment: 0002-PHOENIX-4488-Cache-config-parameters-for-MetaDataEnd.patch
0001-PHOENIX-4437-Make-QueryPlan.getEstimatedBytesToScan-.patch

> Sync branch 4.x-HBase-1.2
> -
>
> Key: PHOENIX-4554
> URL: https://issues.apache.org/jira/browse/PHOENIX-4554
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: verify
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: 
> 0001-PHOENIX-4437-Make-QueryPlan.getEstimatedBytesToScan-.patch, 
> 0002-PHOENIX-4488-Cache-config-parameters-for-MetaDataEnd.patch
>
>
> Syncing 4.x-HBase-1.2 with master (  two commits missing ) .





[jira] [Updated] (PHOENIX-4554) Sync branch 4.x-HBase-1.2

2018-01-24 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4554:
-
Description: Syncing 4.x-HBase-1.2 with master (  two commits missing ) .  
(was: Ticket for requesting a full test run for PR #289, syncing 4.x-HBase-1.2 
with master.)

> Sync branch 4.x-HBase-1.2
> -
>
> Key: PHOENIX-4554
> URL: https://issues.apache.org/jira/browse/PHOENIX-4554
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: verify
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>Priority: Major
> Fix For: 4.14.0
>
>
> Syncing 4.x-HBase-1.2 with master (  two commits missing ) .





[jira] [Created] (PHOENIX-4554) Sync branch 4.x-HBase-1.2

2018-01-24 Thread Pedro Boado (JIRA)
Pedro Boado created PHOENIX-4554:


 Summary: Sync branch 4.x-HBase-1.2
 Key: PHOENIX-4554
 URL: https://issues.apache.org/jira/browse/PHOENIX-4554
 Project: Phoenix
  Issue Type: Task
Affects Versions: verify
Reporter: Pedro Boado
Assignee: Pedro Boado
 Fix For: 4.14.0


Ticket for requesting a full test run for PR #289, syncing 4.x-HBase-1.2 with 
master.





[jira] [Commented] (PHOENIX-4548) UpgradeUtil.mapChildViewsToNamespace does not handle multi-tenant views that have the same name.

2018-01-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338439#comment-16338439
 ] 

Hudson commented on PHOENIX-4548:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1920 (See 
[https://builds.apache.org/job/Phoenix-master/1920/])
PHOENIX-4548 UpgradeUtil.mapChildViewsToNamespace does not handle (tdsilva: rev 
3a6c76f122d7df1aa6fe9eb76f100ea23d298a03)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/PhoenixDriverIT.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java


> UpgradeUtil.mapChildViewsToNamespace does not handle multi-tenant views that 
> have the same name.
> 
>
> Key: PHOENIX-4548
> URL: https://issues.apache.org/jira/browse/PHOENIX-4548
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4548.patch
>
>






[jira] [Resolved] (PHOENIX-4414) Exception while using database metadata commands on tenant specific connection

2018-01-24 Thread Mujtaba Chohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mujtaba Chohan resolved PHOENIX-4414.
-
Resolution: Fixed

> Exception while using database metadata commands on tenant specific connection
> --
>
> Key: PHOENIX-4414
> URL: https://issues.apache.org/jira/browse/PHOENIX-4414
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Mujtaba Chohan
>Assignee: Mujtaba Chohan
>Priority: Minor
> Attachments: PHOENIX-4414.patch, PHOENIX-4414_v2.patch
>
>
> This is when using tenant specific connection from Sqlline.
> {noformat}
> Error: ERROR 602 (42P00): Syntax error. Missing "LPAREN" at line 2, column 746. (state=42P00,code=602)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 602 (42P00): Syntax error. Missing "LPAREN" at line 2, column 746.
>   at org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
>   at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
>   at org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1529)
>   at org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1612)
>   at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:1653)
>   at org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.getColumns(PhoenixDatabaseMetaData.java:557)
> {noformat}





[jira] [Updated] (PHOENIX-4414) Exception while using database metadata commands on tenant specific connection

2018-01-24 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4414:
--
Affects Version/s: 4.14.0

> Exception while using database metadata commands on tenant specific connection
> --
>
> Key: PHOENIX-4414
> URL: https://issues.apache.org/jira/browse/PHOENIX-4414
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Mujtaba Chohan
>Assignee: Mujtaba Chohan
>Priority: Minor
> Attachments: PHOENIX-4414.patch, PHOENIX-4414_v2.patch
>
>
> This is when using tenant specific connection from Sqlline.
> {noformat}
> Error: ERROR 602 (42P00): Syntax error. Missing "LPAREN" at line 2, column 746. (state=42P00,code=602)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 602 (42P00): Syntax error. Missing "LPAREN" at line 2, column 746.
>   at org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
>   at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
>   at org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1529)
>   at org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1612)
>   at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:1653)
>   at org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.getColumns(PhoenixDatabaseMetaData.java:557)
> {noformat}





[jira] [Commented] (PHOENIX-4533) Phoenix Query Server should not use SPNEGO principal to proxy user requests

2018-01-24 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338308#comment-16338308
 ] 

Josh Elser commented on PHOENIX-4533:
-

Using separate Kerberos identities for accepting requests and talking to HBase 
sounds like a great idea (especially, given the limitations of SPNEGO with 
Kerberos and Hadoop's impersonation rules).

My biggest concern is ensuring that ticket renewal happens for both principals, 
and that the HTTP principal is not used to talk to HBase at all. I'm thinking a 
setup like the following:

* Set short ticket lifetimes for the HTTP and hbase client kerberos principals 
(e.g. 10m)
* The HTTP user is not authorized to interact with any HBase tables, nor 
impersonate any end users
* Set up a PQS client to read from a Phoenix table through PQS at a regular 
interval (e.g. every 15s). Something trivial like a {{select *}} would be fine.

Then, just let this run for a few hours. At the end of the test, PQS should 
still be operational and the client can still read the Phoenix table through 
PQS.

It's a little elaborate to try to encapsulate this in an IT, but if you could 
run a standalone test, Lev, that'd be awesome.
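A standalone version of the soak-test client described above might be sketched like this (the PQS URL, table name, and `sqlline-thin.py` invocation are placeholders for the actual test environment):

```shell
# Sketch of the soak-test client: issue a trivial read through PQS at a
# regular interval and log failures with a timestamp.
PQS_URL="http://pqs-host:8765"

soak_once() {
  # A trivial read through PQS; a failure is appended to the log.
  echo "select * from TEST_TABLE limit 1;" \
    | sqlline-thin.py "$PQS_URL" >/dev/null 2>&1 \
    || echo "$(date): query through PQS failed" >> pqs-soak.log
}

# Let this run for a few hours, e.g. from a wrapper:
#   while true; do soak_once; sleep 15; done
```

An empty `pqs-soak.log` at the end of the run, with both Kerberos principals having passed their ticket lifetime several times over, would be the success criterion.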

> Phoenix Query Server should not use SPNEGO principal to proxy user requests
> ---
>
> Key: PHOENIX-4533
> URL: https://issues.apache.org/jira/browse/PHOENIX-4533
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lev Bronshtein
>Assignee: Lev Bronshtein
>Priority: Minor
> Attachments: PHOENIX-4533.1.patch
>
>
> Currently the HTTP/ principal is used by various components in the HADOOP 
> ecosystem to perform SPNEGO authentication.  Since there can only be one 
> HTTP/ per host, even outside of the Hadoop ecosystem, the keytab containing 
> key material for local HTTP/ principal is shared among a few applications.  
> With so many applications having access to the HTTP/ credentials, this 
> increases the chances of an attack on the proxy user capabilities of Hadoop.  
> This JIRA proposes that two different key tabs can be used to
> 1. Authenticate kerberized web requests
> 2. Communicate with the phoenix back end





Re: Phoenix JDBC client connection settings

2018-01-24 Thread Josh Elser

Hi Hidir,

In most (maybe all) cases, Phoenix (thick) driver configuration 
properties can also be picked up off of the classpath via hbase-site.xml.


For tools that allow you to specify additional classpath elements, you 
can add the directory containing hbase-site.xml with your configuration 
properties set. For tools that do not let you do this, you can add a 
copy of hbase-site.xml to the phoenix-client.jar itself.
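Concretely, the two options above might look like this (all paths are placeholders for your environment):

```shell
# Sketch: two possible ways to expose hbase-site.xml to the thick driver.
# 1. Put the config directory ahead of the client jar on the classpath:
export CLASSPATH=/etc/hbase/conf:phoenix-client.jar
# 2. Or embed a copy of hbase-site.xml into the client jar itself:
jar uf phoenix-client.jar -C /etc/hbase/conf hbase-site.xml
```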


I would agree that being able to specify properties like this in the URL 
would be nice. You would have my blessing to implement such an 
improvement :)


On 1/24/18 10:59 AM, Aras, Hidir wrote:

Dear phoenix developers,

I would like to hint at an issue related to the configuration of the 
phoenix JDBC driver for clients (not thin-client!). Currently, 
client-side connection properties can only be set in Java code, like this:


  Connection conn = null;
  try {
      Properties props = new Properties();
      props.setProperty("phoenix.functions.allowUserDefinedFunctions", "true");
      Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
      conn = DriverManager.getConnection(
              "jdbc:phoenix:tdm-p-nn01:2181:/hbase-unsecure", props);
  } catch (Exception e) {
      e.printStackTrace();
      LOGGER.error("error occurs " + e.getMessage());
  }

In order to be integrated into 3rd party client applications that just 
integrate the JDBC driver and establish a connection via the URI, it is 
to my knowledge not possible to set the client connection settings like:

jdbc:phoenix::2181:/hbase-unsecure; 
phoenix.functions.allowUserDefinedFunctions; 
phoenix.query.timeoutMs=180; …

Also, I don't see any other option to set the client connection 
settings, e.g. for enabling Phoenix user-defined functions in 3rd party 
client applications like logstash, etc.


I would much appreciate any workaround or hint for resolving this 
problem … in particular for enabling UDFs in 3rd party JDBC clients.


Thanks and best regards,

Hidir





FIZ Karlsruhe - Leibniz-Institut für Informationsinfrastruktur GmbH.
Sitz der Gesellschaft: Eggenstein-Leopoldshafen, Amtsgericht Mannheim 
HRB 101892.

Geschäftsführerin: Sabine Brünger-Weilandt.
Vorsitzender des Aufsichtsrats: MinDirig Dr. Stefan Luther.



[jira] [Assigned] (PHOENIX-4533) Phoenix Query Server should not use SPNEGO principal to proxy user requests

2018-01-24 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reassigned PHOENIX-4533:
---

Assignee: Lev Bronshtein

> Phoenix Query Server should not use SPNEGO principal to proxy user requests
> ---
>
> Key: PHOENIX-4533
> URL: https://issues.apache.org/jira/browse/PHOENIX-4533
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lev Bronshtein
>Assignee: Lev Bronshtein
>Priority: Minor
> Attachments: PHOENIX-4533.1.patch
>
>
> Currently the HTTP/ principal is used by various components in the Hadoop 
> ecosystem to perform SPNEGO authentication.  Since there can only be one 
> HTTP/ principal per host, even outside of the Hadoop ecosystem, the keytab 
> containing key material for the local HTTP/ principal is shared among a few 
> applications.  With so many applications having access to the HTTP/ 
> credentials, this increases the chances of an attack on the proxy-user 
> capabilities of Hadoop.  This JIRA proposes that two different keytabs be used to
> 1. Authenticate Kerberized web requests
> 2. Communicate with the Phoenix back end



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4550) Allow declaration of max columns on base physical table

2018-01-24 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338158#comment-16338158
 ] 

Thomas D'Silva commented on PHOENIX-4550:
-

Instead of serializing a map, could we just serialize the starting col 
qualifier of each contiguous range and the starting array index? For example

col qualifiers: 1-5, 100-105, 500, 545-546, 600
array indexes:  1-5, 6-10, 11, 12-13, 14

100,6
500,11
545,12
600,14

This structure could be created when the view is created and updated when 
columns are added. We would need to store it in the PTable of the view. 
We could also serialize only the col qualifiers and array indexes that are 
required by the scan, though that would be more difficult to implement.
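The range-start idea above can be sketched with a sorted map keyed by the first col qualifier of each contiguous range; a lookup finds the enclosing range with floorEntry and adds the offset. This is a hypothetical illustration using the numbers from the example, not actual PTable code, and it assumes the qualifier being looked up is valid:

```java
import java.util.Map;
import java.util.TreeMap;

public class QualifierIndexMap {
    // Maps the starting column qualifier of each contiguous range to the
    // array index of its first element.
    private final TreeMap<Integer, Integer> rangeStarts = new TreeMap<>();

    public void addRange(int startQualifier, int startIndex) {
        rangeStarts.put(startQualifier, startIndex);
    }

    // Find the range the qualifier falls into and add its offset from the
    // range start. Assumes the qualifier actually exists in some range.
    public int arrayIndexOf(int qualifier) {
        Map.Entry<Integer, Integer> e = rangeStarts.floorEntry(qualifier);
        if (e == null) {
            throw new IllegalArgumentException("qualifier below all ranges");
        }
        return e.getValue() + (qualifier - e.getKey());
    }

    public static void main(String[] args) {
        QualifierIndexMap m = new QualifierIndexMap();
        m.addRange(1, 1);    // qualifiers 1-5     -> indexes 1-5
        m.addRange(100, 6);  // qualifiers 100-105 -> indexes 6-10
        m.addRange(500, 11); // qualifier 500      -> index 11
        m.addRange(545, 12); // qualifiers 545-546 -> indexes 12-13
        m.addRange(600, 14); // qualifier 600      -> index 14
        System.out.println(m.arrayIndexOf(102)); // 8
        System.out.println(m.arrayIndexOf(545)); // 12
        System.out.println(m.arrayIndexOf(600)); // 14
    }
}
```

Only the range starts are serialized, so the structure stays small no matter how wide each contiguous run of qualifiers is.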

> Allow declaration of max columns on base physical table
> ---
>
> Key: PHOENIX-4550
> URL: https://issues.apache.org/jira/browse/PHOENIX-4550
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Priority: Major
>
> By declaring the max number of columns on a base table, we can optimize the 
> storage for SINGLE_CELL_ARRAY_WITH_OFFSETS by not storing null values for the 
> columns preceding the initial column of a view. This will make a huge 
> difference in storage when you have a base table with many views. For example:
> {code}
> -- Declare that the base table will have no more than 10 columns
> CREATE IMMUTABLE TABLE base (k1 VARCHAR, prefix CHAR(3), v1 DATE,
> CONSTRAINT pk PRIMARY KEY (k1, prefix))
> MULTI_TENANT = true,
> MAX_COLUMNS = 10;
> CREATE VIEW v1(k2 VARCHAR PRIMARY KEY, v2 VARCHAR, v3 VARCHAR)
> AS SELECT * FROM base WHERE prefix = 'A00';
> CREATE VIEW v2(k2 VARCHAR PRIMARY KEY, v2 VARCHAR, v3 VARCHAR)
> AS SELECT * FROM base WHERE prefix = 'A10';
> ...
> {code}
> As the number of views grows, the difference between the base table column 
> encoding (column #1) and the starting column number of the view (since the 
> starting offset is determined by an incrementing value on the base table) 
> will increase. This bloats the storage as we need to store null values for 
> column encodings between the base table column and the starting column of the 
> view.
> Instead, we'll pass through the MAX_COLUMNS value for queries; any column 
> encoding less than this we know will be at the start. For anything greater, 
> we'll start the search from  -  column encoding>.
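As a back-of-the-envelope illustration of the padding described above (a toy model with made-up numbers, not Phoenix's actual SINGLE_CELL_ARRAY_WITH_OFFSETS encoder):

```java
// Toy model of the storage cost described in the issue. Without MAX_COLUMNS,
// a view row must store a null placeholder for every column encoding between
// the base table's columns and the view's first column. With MAX_COLUMNS
// declared, the reader knows where base-table encodings end, so that padding
// need not be stored. Illustration only, not Phoenix code.
public class MaxColumnsToy {
    // Slots stored per row when padding is required: one slot for every
    // encoding up to and including the view's last column.
    static int slotsWithoutMaxColumns(int viewStartEncoding, int viewCols) {
        return (viewStartEncoding - 1) + viewCols;
    }

    // Slots stored per row when MAX_COLUMNS is declared: base slots plus
    // the view's own columns, with no padding in between.
    static int slotsWithMaxColumns(int maxColumns, int viewCols) {
        return maxColumns + viewCols;
    }

    public static void main(String[] args) {
        // Suppose a late-created view got starting encoding 501 (the
        // counter increments per view on the base table) and has 3 columns.
        System.out.println(slotsWithoutMaxColumns(501, 3)); // 503
        System.out.println(slotsWithMaxColumns(10, 3));     // 13
    }
}
```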





Phoenix JDBC client connection settings

2018-01-24 Thread Aras, Hidir
Dear phoenix developers,

I would like to point out an issue related to the configuration of the Phoenix 
JDBC driver for clients (not the thin client!). Currently, client-side connection 
properties can only be set in Java code, like this:

 Connection conn = null;
 try {
     Properties props = new Properties();
     props.setProperty("phoenix.functions.allowUserDefinedFunctions", "true");
     Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
     conn = DriverManager.getConnection(
             "jdbc:phoenix:tdm-p-nn01:2181:/hbase-unsecure", props);
 } catch (Exception e) {
     e.printStackTrace();
     LOGGER.error("error occurs " + e.getMessage());
 }
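For what it's worth, when you do control the client code, one workaround sketch is to load the connection properties from an external file rather than hard-coding them (the class, helper, and file layout here are hypothetical; this does not help for closed third-party applications like Logstash):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

public class PhoenixClientProps {
    // Load client settings, e.g. a file containing
    // phoenix.functions.allowUserDefinedFunctions=true
    // instead of hard-coding them in the application.
    static Properties loadProps(Path path) throws IOException {
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(path)) {
            props.load(in);
        }
        return props;
    }

    // Hand the loaded properties to the driver at connect time.
    static Connection connect(String url, Path propsPath)
            throws IOException, SQLException {
        return DriverManager.getConnection(url, loadProps(propsPath));
    }
}
```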

For integration into 3rd-party client applications that simply embed the JDBC 
driver and establish a connection via the URI, it is, to my knowledge, not 
possible to set client connection settings like:

jdbc:phoenix::2181:/hbase-unsecure; 
phoenix.functions.allowUserDefinedFunctions; phoenix.query.timeoutMs=180; 
...

Also, I don't see any other option to set the client connection settings, e.g. 
for enabling Phoenix user-defined functions in 3rd-party client applications 
like Logstash, etc.

I would much appreciate any workaround or hint for resolving this problem ... 
in particular for enabling UDFs in 3rd-party JDBC clients.

Thanks and best regards,
Hidir






--

FIZ Karlsruhe - Leibniz-Institut für Informationsinfrastruktur GmbH.
Sitz der Gesellschaft: Eggenstein-Leopoldshafen, Amtsgericht Mannheim HRB 
101892.
Geschäftsführerin: Sabine Brünger-Weilandt.
Vorsitzender des Aufsichtsrats: MinDirig Dr. Stefan Luther.

[jira] [Updated] (PHOENIX-4553) HBase Master could not start with activated APACHE_PHOENIX parcel

2018-01-24 Thread Ihor Krysenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ihor Krysenko updated PHOENIX-4553:
---
Summary: HBase Master could not start with activated APACHE_PHOENIX parcel  
(was: HBase Master could not start with enabled APACHE_PHOENIX parcel)

> HBase Master could not start with activated APACHE_PHOENIX parcel
> -
>
> Key: PHOENIX-4553
> URL: https://issues.apache.org/jira/browse/PHOENIX-4553
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2-cdh5.11.2
> Environment: CDH 5.11.2
> Apache phoenix 4.13.2-cdh5.11.2
>Reporter: Ihor Krysenko
>Priority: Major
>
> After activating the parcel, HBase Master and Region Server could not start. 
> There seem to be some problems with the shaded thin-client, because if it is 
> removed from the parcel, everything works great.
> Please help.
> I think this [GitHub 
> commit|https://github.com/apache/phoenix/commit/e2c06b06fa1800b532e5d1ffa6f6ef8796cef213#diff-97e88e321a719f4389a2aa0e26fd0c8f]
>  may have influenced this bug.
> Below is the startup log for the HBase Master:
> {code:java}
> SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation. SLF4J: Actual binding is of type 
> [org.slf4j.impl.Log4jLoggerFactory] Exception in thread 
> "RpcServer.reader=1,bindAddress=0.0.0.0,port=6" 
> java.util.ServiceConfigurationError: org.apache.hadoop.security.SecurityInfo: 
> Provider 
> org.apache.phoenix.shaded.org.apache.hadoop.security.AnnotatedSecurityInfo 
> not a subtype at java.util.ServiceLoader.fail(ServiceLoader.java:231) at 
> java.util.ServiceLoader.access$300(ServiceLoader.java:181) at 
> java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:369) at 
> java.util.ServiceLoader$1.next(ServiceLoader.java:445) at 
> org.apache.hadoop.security.SecurityUtil.getKerberosInfo(SecurityUtil.java:333)
>  at 
> org.apache.hadoop.security.authorize.ServiceAuthorizationManager.authorize(ServiceAuthorizationManager.java:101)
>  at org.apache.hadoop.hbase.ipc.RpcServer.authorize(RpcServer.java:2347) at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.authorizeConnection(RpcServer.java:1898)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.processOneRpc(RpcServer.java:1772)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.saslReadAndProcess(RpcServer.java:1335)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.process(RpcServer.java:1614) 
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1596)
>  at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:854) 
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:635)
>  at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:611) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  at java.lang.Thread.run(Thread.java:745){code}





[jira] [Updated] (PHOENIX-4553) HBase Master could not start with enabled APACHE_PHOENIX parcel

2018-01-24 Thread Ihor Krysenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ihor Krysenko updated PHOENIX-4553:
---
Description: 
After activating the parcel, HBase Master and Region Server could not start. 
There seem to be some problems with the shaded thin-client, because if it is 
removed from the parcel, everything works great.

Please help.

I think this [GitHub 
commit|https://github.com/apache/phoenix/commit/e2c06b06fa1800b532e5d1ffa6f6ef8796cef213#diff-97e88e321a719f4389a2aa0e26fd0c8f]
 may have influenced this bug.

Below is the startup log for the HBase Master:
{code:java}
SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in 
[jar:file:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
[jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
[jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
[jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
explanation. SLF4J: Actual binding is of type 
[org.slf4j.impl.Log4jLoggerFactory] Exception in thread 
"RpcServer.reader=1,bindAddress=0.0.0.0,port=6" 
java.util.ServiceConfigurationError: org.apache.hadoop.security.SecurityInfo: 
Provider 
org.apache.phoenix.shaded.org.apache.hadoop.security.AnnotatedSecurityInfo not 
a subtype at java.util.ServiceLoader.fail(ServiceLoader.java:231) at 
java.util.ServiceLoader.access$300(ServiceLoader.java:181) at 
java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:369) at 
java.util.ServiceLoader$1.next(ServiceLoader.java:445) at 
org.apache.hadoop.security.SecurityUtil.getKerberosInfo(SecurityUtil.java:333) 
at 
org.apache.hadoop.security.authorize.ServiceAuthorizationManager.authorize(ServiceAuthorizationManager.java:101)
 at org.apache.hadoop.hbase.ipc.RpcServer.authorize(RpcServer.java:2347) at 
org.apache.hadoop.hbase.ipc.RpcServer$Connection.authorizeConnection(RpcServer.java:1898)
 at 
org.apache.hadoop.hbase.ipc.RpcServer$Connection.processOneRpc(RpcServer.java:1772)
 at 
org.apache.hadoop.hbase.ipc.RpcServer$Connection.saslReadAndProcess(RpcServer.java:1335)
 at 
org.apache.hadoop.hbase.ipc.RpcServer$Connection.process(RpcServer.java:1614) 
at 
org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1596)
 at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:854) 
at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:635)
 at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:611) 
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:745){code}

  was:
After activating the parcel, HBase Master and Region Server could not start. 
There seem to be some problems with the shaded thin-client, because if it is 
removed from the parcel, everything works great.

Please help.

Below is the startup log for the HBase Master:
{code:java}
SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in 
[jar:file:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
[jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
[jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
[jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
explanation. SLF4J: Actual binding is of type 
[org.slf4j.impl.Log4jLoggerFactory] Exception in thread 
"RpcServer.reader=1,bindAddress=0.0.0.0,port=6" 
java.util.ServiceConfigurationError: org.apache.hadoop.security.SecurityInfo: 
Provider 
org.apache.phoenix.shaded.org.apache.hadoop.security.AnnotatedSecurityInfo not 
a subtype at java.util.ServiceLoader.fail(ServiceLoader.java:231) at 
java.util.ServiceLoader.access$300(ServiceLoader.java:181) at 
java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:369) at 
java.util.ServiceLoader$1.next(ServiceLoader.java:445) at 
org.apache.hadoop.security.SecurityUtil.getKerberosInfo(SecurityUtil.java:333) 
at 

[jira] [Updated] (PHOENIX-4553) HBase Master could not start with enabled APACHE_PHOENIX parcel

2018-01-24 Thread Ihor Krysenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ihor Krysenko updated PHOENIX-4553:
---
Description: 
After activating the parcel, HBase Master and Region Server could not start. 
There seem to be some problems with the shaded thin-client, because if it is 
removed from the parcel, everything works great.

Please help.

Below is the startup log for the HBase Master:
{code:java}
SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in 
[jar:file:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
[jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
[jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
[jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
explanation. SLF4J: Actual binding is of type 
[org.slf4j.impl.Log4jLoggerFactory] Exception in thread 
"RpcServer.reader=1,bindAddress=0.0.0.0,port=6" 
java.util.ServiceConfigurationError: org.apache.hadoop.security.SecurityInfo: 
Provider 
org.apache.phoenix.shaded.org.apache.hadoop.security.AnnotatedSecurityInfo not 
a subtype at java.util.ServiceLoader.fail(ServiceLoader.java:231) at 
java.util.ServiceLoader.access$300(ServiceLoader.java:181) at 
java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:369) at 
java.util.ServiceLoader$1.next(ServiceLoader.java:445) at 
org.apache.hadoop.security.SecurityUtil.getKerberosInfo(SecurityUtil.java:333) 
at 
org.apache.hadoop.security.authorize.ServiceAuthorizationManager.authorize(ServiceAuthorizationManager.java:101)
 at org.apache.hadoop.hbase.ipc.RpcServer.authorize(RpcServer.java:2347) at 
org.apache.hadoop.hbase.ipc.RpcServer$Connection.authorizeConnection(RpcServer.java:1898)
 at 
org.apache.hadoop.hbase.ipc.RpcServer$Connection.processOneRpc(RpcServer.java:1772)
 at 
org.apache.hadoop.hbase.ipc.RpcServer$Connection.saslReadAndProcess(RpcServer.java:1335)
 at 
org.apache.hadoop.hbase.ipc.RpcServer$Connection.process(RpcServer.java:1614) 
at 
org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1596)
 at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:854) 
at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:635)
 at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:611) 
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:745){code}

  was:
After activating the parcel, HBase Master and Region Server could not start. 
There seem to be some problems with the shaded thin-client, because if it is 
removed from the parcel, everything works great.

Please help

SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in 
[jar:file:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
[jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
[jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
[jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
explanation. SLF4J: Actual binding is of type 
[org.slf4j.impl.Log4jLoggerFactory] Exception in thread 
"RpcServer.reader=1,bindAddress=0.0.0.0,port=6" 
java.util.ServiceConfigurationError: org.apache.hadoop.security.SecurityInfo: 
Provider 
org.apache.phoenix.shaded.org.apache.hadoop.security.AnnotatedSecurityInfo not 
a subtype at java.util.ServiceLoader.fail(ServiceLoader.java:231) at 
java.util.ServiceLoader.access$300(ServiceLoader.java:181) at 
java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:369) at 
java.util.ServiceLoader$1.next(ServiceLoader.java:445) at 
org.apache.hadoop.security.SecurityUtil.getKerberosInfo(SecurityUtil.java:333) 
at 
org.apache.hadoop.security.authorize.ServiceAuthorizationManager.authorize(ServiceAuthorizationManager.java:101)
 at org.apache.hadoop.hbase.ipc.RpcServer.authorize(RpcServer.java:2347) at 

[jira] [Created] (PHOENIX-4553) HBase Master could not start with enabled APACHE_PHOENIX parcel

2018-01-24 Thread Ihor Krysenko (JIRA)
Ihor Krysenko created PHOENIX-4553:
--

 Summary: HBase Master could not start with enabled APACHE_PHOENIX 
parcel
 Key: PHOENIX-4553
 URL: https://issues.apache.org/jira/browse/PHOENIX-4553
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.13.2-cdh5.11.2
 Environment: CDH 5.11.2

Apache phoenix 4.13.2-cdh5.11.2
Reporter: Ihor Krysenko


After activating the parcel, HBase Master and Region Server could not start. 
There seem to be some problems with the shaded thin-client, because if it is 
removed from the parcel, everything works great.

Please help

SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in 
[jar:file:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
[jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
[jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: Found binding in 
[jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
 SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
explanation. SLF4J: Actual binding is of type 
[org.slf4j.impl.Log4jLoggerFactory] Exception in thread 
"RpcServer.reader=1,bindAddress=0.0.0.0,port=6" 
java.util.ServiceConfigurationError: org.apache.hadoop.security.SecurityInfo: 
Provider 
org.apache.phoenix.shaded.org.apache.hadoop.security.AnnotatedSecurityInfo not 
a subtype at java.util.ServiceLoader.fail(ServiceLoader.java:231) at 
java.util.ServiceLoader.access$300(ServiceLoader.java:181) at 
java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:369) at 
java.util.ServiceLoader$1.next(ServiceLoader.java:445) at 
org.apache.hadoop.security.SecurityUtil.getKerberosInfo(SecurityUtil.java:333) 
at 
org.apache.hadoop.security.authorize.ServiceAuthorizationManager.authorize(ServiceAuthorizationManager.java:101)
 at org.apache.hadoop.hbase.ipc.RpcServer.authorize(RpcServer.java:2347) at 
org.apache.hadoop.hbase.ipc.RpcServer$Connection.authorizeConnection(RpcServer.java:1898)
 at 
org.apache.hadoop.hbase.ipc.RpcServer$Connection.processOneRpc(RpcServer.java:1772)
 at 
org.apache.hadoop.hbase.ipc.RpcServer$Connection.saslReadAndProcess(RpcServer.java:1335)
 at 
org.apache.hadoop.hbase.ipc.RpcServer$Connection.process(RpcServer.java:1614) 
at 
org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1596)
 at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:854) 
at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:635)
 at 
org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:611) 
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:745)





[jira] [Commented] (PHOENIX-4534) upsert/delete/upsert for the same row corrupts the indexes

2018-01-24 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16337315#comment-16337315
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-4534:
--

Pushed to 5.x branch.

> upsert/delete/upsert for the same row corrupts the indexes
> --
>
> Key: PHOENIX-4534
> URL: https://issues.apache.org/jira/browse/PHOENIX-4534
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0
>Reporter: Romil Choksi
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
>  Labels: HBase-2.0
> Fix For: 5.0
>
> Attachments: PHOENIX-4534.patch, PHOENIX-4534_v2.patch, 
> PHOENIX-4534_v3.patch
>
>
> If we delete and then upsert the same row again, the corresponding index row 
> has a null value. 
> {noformat}
> 0: jdbc:phoenix:> create table a (id integer primary key, f float);
> No rows affected (2.272 seconds)
> 0: jdbc:phoenix:> create index i1 on a (f);
> No rows affected (5.769 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.021 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> | 0.5  | 1|
> +--+--+
> 1 row selected (0.016 seconds)
> 0: jdbc:phoenix:> delete from a where id = 1;
> 1 row affected (0.009 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> +--+--+
> No rows selected (0.015 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +---+--+
> |  0:F  | :ID  |
> +---+--+
> | null  | 1|
> +---+--+
> 1 row selected (0.013 seconds)
> {noformat}





[jira] [Commented] (PHOENIX-4548) UpgradeUtil.mapChildViewsToNamespace does not handle multi-tenant views that have the same name.

2018-01-24 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16337115#comment-16337115
 ] 

Ankit Singhal commented on PHOENIX-4548:


[~tdsilva] Looks good, +1 

> UpgradeUtil.mapChildViewsToNamespace does not handle multi-tenant views that 
> have the same name.
> 
>
> Key: PHOENIX-4548
> URL: https://issues.apache.org/jira/browse/PHOENIX-4548
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4548.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)