[jira] [Comment Edited] (PHOENIX-4553) HBase Master could not start with activated APACHE_PHOENIX parcel

2018-03-09 Thread Ihor Krysenko (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394062#comment-16394062
 ] 

Ihor Krysenko edited comment on PHOENIX-4553 at 3/10/18 6:25 AM:
-

[~pboado] I can check only on Monday. By the way, I see it will work, thanks.


was (Author: neospyk):
[~pboado] I can check only in Monday, btw, I see, it will work thanks.

> HBase Master could not start with activated APACHE_PHOENIX parcel
> -
>
> Key: PHOENIX-4553
> URL: https://issues.apache.org/jira/browse/PHOENIX-4553
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2-cdh5.11.2
> Environment: CDH 5.11.2
> Apache phoenix 4.13.2-cdh5.11.2
>Reporter: Ihor Krysenko
>Assignee: Pedro Boado
>Priority: Minor
> Attachments: PHOENIX-4553-00.patch, hbase-master.log, 
> hbase-region.log, master-stderr.log, master-stdout.log, region-stderr.log, 
> region-stdout.log
>
>
> After activating the parcel, the HBase Master and RegionServer could not start.
> There is some problem with the shaded thin-client, because if it is removed
> from the parcel, everything works fine.
> Please help.
> I think this [GitHub 
> commit|https://github.com/apache/phoenix/commit/e2c06b06fa1800b532e5d1ffa6f6ef8796cef213#diff-97e88e321a719f4389a2aa0e26fd0c8f]
>  has an influence on this bug.
> Below is the startup log for the HBase Master:
> {code:java}
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Exception in thread "RpcServer.reader=1,bindAddress=0.0.0.0,port=6" java.util.ServiceConfigurationError: org.apache.hadoop.security.SecurityInfo: Provider org.apache.phoenix.shaded.org.apache.hadoop.security.AnnotatedSecurityInfo not a subtype
> 	at java.util.ServiceLoader.fail(ServiceLoader.java:231)
> 	at java.util.ServiceLoader.access$300(ServiceLoader.java:181)
> 	at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:369)
> 	at java.util.ServiceLoader$1.next(ServiceLoader.java:445)
> 	at org.apache.hadoop.security.SecurityUtil.getKerberosInfo(SecurityUtil.java:333)
> 	at org.apache.hadoop.security.authorize.ServiceAuthorizationManager.authorize(ServiceAuthorizationManager.java:101)
> 	at org.apache.hadoop.hbase.ipc.RpcServer.authorize(RpcServer.java:2347)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Connection.authorizeConnection(RpcServer.java:1898)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Connection.processOneRpc(RpcServer.java:1772)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Connection.saslReadAndProcess(RpcServer.java:1335)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Connection.process(RpcServer.java:1614)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1596)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:854)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:635)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:611)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745)
> {code}
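[Editor's note] The "not a subtype" error in the trace comes from java.util.ServiceLoader: the shaded jar's META-INF/services file registers AnnotatedSecurityInfo as a provider of org.apache.hadoop.security.SecurityInfo, but that provider was relocated by shading and actually implements the copy of the interface under org.apache.phoenix.shaded, so ServiceLoader's subtype check fails. A minimal, self-contained analogue of that check (every class name here is a stand-in, not the real Hadoop or Phoenix class):

```java
// Stand-in for org.apache.hadoop.security.SecurityInfo: the service interface
// that ServiceLoader resolves on the application classpath.
interface SecurityInfo {}

// Stand-in for the shaded (relocated) copy of the same interface that ends up
// under org.apache.phoenix.shaded.* inside the fat jar.
interface ShadedSecurityInfo {}

// Stand-in for AnnotatedSecurityInfo: compiled against the shaded interface,
// so it is NOT a subtype of the classpath SecurityInfo.
class AnnotatedSecurityInfo implements ShadedSecurityInfo {}

public class SubtypeCheckDemo {
    public static void main(String[] args) {
        // ServiceLoader performs this check before casting a loaded provider;
        // when it is false, it throws ServiceConfigurationError("... not a subtype").
        boolean isSubtype = SecurityInfo.class.isAssignableFrom(AnnotatedSecurityInfo.class);
        System.out.println(isSubtype); // false
    }
}
```

This is why removing the shaded thin-client jar from the parcel's classpath makes the servers start: only the unshaded provider is then visible to ServiceLoader.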



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4553) HBase Master could not start with activated APACHE_PHOENIX parcel

2018-03-09 Thread Ihor Krysenko (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394062#comment-16394062
 ] 

Ihor Krysenko commented on PHOENIX-4553:


[~pboado] I can check only on Monday. By the way, I see it will work, thanks.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4418) UPPER() and LOWER() functions should be locale-aware

2018-03-09 Thread Shehzaad Nakhoda (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394059#comment-16394059
 ] 

Shehzaad Nakhoda commented on PHOENIX-4418:
---

[~tdsilva] I've rebased the patch (PHOENIX-4418_v3.patch) - in case _v2.patch 
doesn't work.

thanks

> UPPER() and LOWER() functions should be locale-aware
> 
>
> Key: PHOENIX-4418
> URL: https://issues.apache.org/jira/browse/PHOENIX-4418
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.13.0
>Reporter: Shehzaad Nakhoda
>Assignee: Shehzaad Nakhoda
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4418_v1.patch, PHOENIX-4418_v2.patch, 
> PHOENIX-4418_v3.patch, upper_lower_locale_doc.patch
>
>
> Correct conversion of a string to upper or lower case depends on the locale.
> Java's upper-case and lower-case conversion routines allow passing in a 
> locale.
> It should be possible to pass a locale to UPPER() and LOWER() in Phoenix 
> so that locale-specific case conversion can be supported.
> See java.lang.String#toUpperCase()
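[Editor's note] The classic motivating case is Turkish dotted/dotless "i", where the default-locale result differs from the locale-aware one. A small illustration of the java.lang.String overloads the issue refers to (the SQL-level syntax, e.g. an extra locale argument to UPPER()/LOWER(), is whatever the attached patches define; this only shows the Java behavior):

```java
import java.util.Locale;

public class LocaleCaseDemo {
    public static void main(String[] args) {
        Locale turkish = new Locale("tr", "TR");

        // English upper-casing: 'i' -> 'I'
        System.out.println("i".toUpperCase(Locale.ENGLISH)); // I

        // Turkish upper-casing: 'i' -> dotted capital I (U+0130)
        System.out.println("i".toUpperCase(turkish));        // İ

        // Turkish lower-casing: 'I' -> dotless i (U+0131)
        System.out.println("TITLE".toLowerCase(turkish));    // tıtle
    }
}
```

So a locale-unaware UPPER()/LOWER() silently produces wrong results for Turkish (and similar) text, which is what the patch addresses.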



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4418) UPPER() and LOWER() functions should be locale-aware

2018-03-09 Thread Shehzaad Nakhoda (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shehzaad Nakhoda updated PHOENIX-4418:
--
Attachment: PHOENIX-4418_v3.patch




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4643) Implement ARRAY_REMOVE built in function

2018-03-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394040#comment-16394040
 ] 

ASF GitHub Bot commented on PHOENIX-4643:
-

Github user JamesRTaylor commented on the issue:

https://github.com/apache/phoenix/pull/294
  
Changes look good. I've committed PHOENIX-4644, so can you make sure 
everything works without specifying a default value for the second argument of 
ARRAY_REMOVE?


>  Implement ARRAY_REMOVE built in function
> -
>
> Key: PHOENIX-4643
> URL: https://issues.apache.org/jira/browse/PHOENIX-4643
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Xavier Jodoin
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] phoenix issue #294: PHOENIX-4643 Implement ARRAY_REMOVE built in function

2018-03-09 Thread JamesRTaylor
Github user JamesRTaylor commented on the issue:

https://github.com/apache/phoenix/pull/294
  
Changes look good. I've committed PHOENIX-4644, so can you make sure 
everything works without specifying a default value for the second argument of 
ARRAY_REMOVE?


---


[GitHub] phoenix pull request #294: PHOENIX-4643 Implement ARRAY_REMOVE built in func...

2018-03-09 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/294#discussion_r173612840
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/ArrayRemoveFunction.java
 ---
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.expression.function;
+
+import java.util.List;
+
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.phoenix.expression.Expression;
+import org.apache.phoenix.parse.FunctionParseNode;
+import org.apache.phoenix.schema.SortOrder;
+import org.apache.phoenix.schema.TypeMismatchException;
+import org.apache.phoenix.schema.types.PArrayDataTypeDecoder;
+import org.apache.phoenix.schema.types.PArrayDataTypeEncoder;
+import org.apache.phoenix.schema.types.PBinaryArray;
+import org.apache.phoenix.schema.types.PDataType;
+import org.apache.phoenix.schema.types.PVarbinary;
+import org.apache.phoenix.schema.types.PVarbinaryArray;
+
+@FunctionParseNode.BuiltInFunction(name = ArrayRemoveFunction.NAME, args = {
+		@FunctionParseNode.Argument(allowedTypes = { PBinaryArray.class, PVarbinaryArray.class }),
+		@FunctionParseNode.Argument(allowedTypes = { PVarbinary.class }, defaultValue = "null") })
+public class ArrayRemoveFunction extends ArrayModifierFunction {
+
+	public static final String NAME = "ARRAY_REMOVE";
+
+	public ArrayRemoveFunction() {
+	}
+
+	public ArrayRemoveFunction(List<Expression> children) throws TypeMismatchException {
+		super(children);
+	}
+
+	@Override
+	protected boolean modifierFunction(ImmutableBytesWritable ptr, int length, int offset, byte[] arrayBytes,
+			PDataType baseType, int arrayLength, Integer maxLength, Expression arrayExp) {
+		SortOrder sortOrder = arrayExp.getSortOrder();
+
+		if (ptr.getLength() == 0 || arrayBytes.length == 0) {
+			ptr.set(arrayBytes, offset, length);
+			return true;
+		}
+
+		PArrayDataTypeEncoder arrayDataTypeEncoder = new PArrayDataTypeEncoder(baseType, sortOrder);
+
+		for (int arrayIndex = 0; arrayIndex < arrayLength; arrayIndex++) {
+			ImmutableBytesWritable ptr2 = new ImmutableBytesWritable(arrayBytes, offset, length);
+			PArrayDataTypeDecoder.positionAtArrayElement(ptr2, arrayIndex, baseType, maxLength);
+			if (baseType.compareTo(ptr2, sortOrder, ptr, sortOrder, baseType) != 0) {
--- End diff --

This looks good. Can you make sure you have tests around removing an 
element from an array where the element type is slightly different than the 
array element type? For example, removing an int from a long array or removing 
a long from a decimal array?
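[Editor's note] The loop in the diff keeps every element whose comparison with the value being removed is non-zero. A simplified, self-contained sketch of that same pattern in plain Java (not Phoenix's byte-level implementation; names here are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ArrayRemoveSketch {
    // Simplified analogue of ArrayRemoveFunction.modifierFunction: build a new
    // array containing every element that does not compare equal to `toRemove`.
    static List<Long> arrayRemove(List<Long> array, Long toRemove) {
        List<Long> result = new ArrayList<>();
        for (Long element : array) {
            if (element.compareTo(toRemove) != 0) { // keep non-matching elements
                result.add(element);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Removes every occurrence of the value, as ARRAY_REMOVE is expected to.
        System.out.println(arrayRemove(Arrays.asList(1L, 2L, 3L, 2L), 2L)); // [1, 3]
    }
}
```

In the real function the comparison runs on encoded bytes via baseType.compareTo(...), which is why the reviewer asks for tests mixing element types (int vs. long, long vs. decimal): the compare must coerce correctly across numeric types.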


---


[jira] [Commented] (PHOENIX-4643) Implement ARRAY_REMOVE built in function

2018-03-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394039#comment-16394039
 ] 

ASF GitHub Bot commented on PHOENIX-4643:
-

Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/294#discussion_r173612840
  

This looks good. Can you make sure you have tests around removing an 
element from an array where the element type is slightly different than the 
array element type? For example, removing an int from a long array or removing 
a long from a decimal array?


>  Implement ARRAY_REMOVE built in function
> -
>
> Key: PHOENIX-4643
> URL: https://issues.apache.org/jira/browse/PHOENIX-4643
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Xavier Jodoin
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Expanding Phoenix support to additional versions of CDH

2018-03-09 Thread James Taylor
If you're willing to sign up to keep the CDH branches in sync with the
4.x-HBase-1.2 branches, then that seems fine. Should we look into some kind
of automation for syncing all the branches?

Another approach is a shim layer, but we've been reluctant to set that up
as there's a fair amount of initial overhead to implement.

On Thu, Mar 8, 2018 at 4:24 PM, Pedro Boado  wrote:

> Hi all,
>
> After our first release of Phoenix for CDH 5.11.2, it looks like a good idea
> to expand support to other, newer versions. I'd like to discuss with you
> what the best approach would be, as we have several options:
>
> - Releasing one single generated parcel with wider compatibility (
> cloudera-labs phoenix style ) . The issue here is that all transitive
> dependencies packaged in the Phoenix fat jars would be specific to one cdh
> version (let's say, cdh5.11.2) but would be running against a different cdh
> version (maybe cdh5.14.0) . There is a small chance of incompatibility across
> versions ( even when all of them are HBase 1.2 based ) . Also, we wouldn't
> be running our ITs against all these cdh versions.
>
> - Maybe work further on the packaging, removing any dependency that is
> already shipped in cdh. That would improve compatibility of this single
> parcel version across cdh releases. But the fat-client jars would still be a
> problem for use from outside the Hadoop cluster.
>
> - Release several parcels, each specific to a different cdh version ( my
> favourite option ) . That is the safest option for compatibility, as we would
> be shipping in the parcel the exact same libraries used in that version
> of cdh. And we'd also be running ITs for several cdh versions. The downside is
> a little more release effort ( not a big deal ) , more branches in git,
> and more web server space needed for keeping several parcels ( I'm not
> sure if that is an issue )
>
> All ideas are welcome.
>
> Thanks!
>


[jira] [Updated] (PHOENIX-4139) Incorrect query result for trailing duplicate GROUP BY expression

2018-03-09 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4139:
--
Summary: Incorrect query result for trailing duplicate GROUP BY expression  
 (was: select distinct with identical aggregations return weird values )

> Incorrect query result for trailing duplicate GROUP BY expression 
> --
>
> Key: PHOENIX-4139
> URL: https://issues.apache.org/jira/browse/PHOENIX-4139
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: minicluster
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4139.patch, PHOENIX-4139_v2.patch, 
> PHOENIX-4139_v3.patch
>
>
> From sme-hbase hipchat room:
> Pulkit Bhardwaj·10:31
> i'm seeing a weird issue with phoenix, appreciate some thoughts
> Created a simple table in phoenix
> {noformat}
> 0: jdbc:phoenix:> create table test_select(nam VARCHAR(20), address 
> VARCHAR(20), id BIGINT
> . . . . . . . . > constraint my_pk primary key (id));
> 0: jdbc:phoenix:> upsert into test_select (nam, address,id) 
> values('pulkit','badaun',1);
> 0: jdbc:phoenix:> select * from test_select;
> +-+--+-+
> |   NAM   | ADDRESS  | ID  |
> +-+--+-+
> | pulkit  | badaun   | 1   |
> +-+--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", nam from 
> test_select;
> +--+-+
> | test_column  |   NAM   |
> +--+-+
> | harshit  | pulkit  |
> +--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam) from test_select;
> +--+++
> | test_column  |   TRIM(NAM)|   TRIM(NAM)|
> +--+++
> | harshit  | pulkitpulkit  | pulkitpulkit  |
> +--+++
> {noformat}
> When I apply a trim on the nam column and use it multiple times, the output 
> has the cell data duplicated!
> {noformat}
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam), trim(nam) from test_select;
> +--+---+---+---+
> | test_column  |   TRIM(NAM)   |   TRIM(NAM)   |   
> TRIM(NAM)   |
> +--+---+---+---+
> | harshit  | pulkitpulkitpulkit  | pulkitpulkitpulkit  | 
> pulkitpulkitpulkit  |
> +--+---+---+---+
> {noformat}
> Wondering if someone has seen this before??
> One thing to note: if I remove the distinct 'harshit' as "test_column" 
> part, the issue is not seen.
> {noformat}
> 0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
> ++++
> | TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
> ++++
> | pulkit | pulkit | pulkit |
> ++++
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4553) HBase Master could not start with activated APACHE_PHOENIX parcel

2018-03-09 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado resolved PHOENIX-4553.
--
Resolution: Fixed

Solved in commit #0aba3a9a




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4553) HBase Master could not start with activated APACHE_PHOENIX parcel

2018-03-09 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4553:
-
Attachment: PHOENIX-4553-00.patch




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-1267) Set scan.setSmall(true) when appropriate

2018-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393897#comment-16393897
 ] 

Hudson commented on PHOENIX-1267:
-

SUCCESS: Integrated in Jenkins build PreCommit-PHOENIX-Build #1797 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1797/])
PHOENIX-1267 Set scan.setSmall(true) when appropriate (Abhishek Singh Chouhan) 
(jtaylor: rev abfc1ff4d37d67eb4666c4c7db24cb22f041768c)
* (edit) phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/BaseQueryPlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/QueryServicesOptions.java
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java


> Set scan.setSmall(true) when appropriate
> 
>
> Key: PHOENIX-1267
> URL: https://issues.apache.org/jira/browse/PHOENIX-1267
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Abhishek Singh Chouhan
>Priority: Major
>  Labels: SFDC, newbie
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-1267.master.patch, PHOENIX-1267.master.patch, 
> smallscan.patch, smallscan2.patch, smallscan3.patch
>
>
> There's a nice optimization that has been in HBase for a while now to set a 
> scan as "small". This prevents extra RPC calls, I believe. We should add a 
> hint for queries that forces it to be set/not set, and make our best guess on 
> when it should default to true.





[jira] [Commented] (PHOENIX-4148) COUNT(DISTINCT(...)) should have a memory size limit

2018-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393896#comment-16393896
 ] 

Hudson commented on PHOENIX-4148:
-

SUCCESS: Integrated in Jenkins build PreCommit-PHOENIX-Build #1797 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1797/])
PHOENIX-4148 COUNT(DISTINCT(...)) should have a memory size limit (Lars Hofhansl) 
(jtaylor: rev 45f856af776a29eaf6ae503d6d3062b277893396)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/aggregator/DistinctValueWithCountServerAggregator.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/memory/GlobalMemoryManager.java
* (edit) phoenix-core/src/test/java/org/apache/phoenix/util/TestUtil.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/aggregator/ClientAggregators.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/aggregator/ServerAggregators.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/memory/ChildMemoryManager.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/aggregator/Aggregators.java


> COUNT(DISTINCT(...)) should have a memory size limit
> 
>
> Key: PHOENIX-4148
> URL: https://issues.apache.org/jira/browse/PHOENIX-4148
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: 4148.txt, PHOENIX-4148_v2.patch
>
>
> I just managed to kill (hang) a region server by issuing a 
> COUNT(DISTINCT(...)) query over a column with very high cardinality (20m in 
> this case).
> This is perhaps not a useful thing to do, but Phoenix should nonetheless not 
> allow a server to fail because of a query.
> [~jamestaylor], I see there is a GlobalMemoryManager, but I do not quite see how 
> I'd get a reference to one; one needs a tenant id, etc.
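The kind of guard described above can be sketched as follows. This is a hypothetical illustration of a memory-budgeted distinct aggregation, not Phoenix's actual `DistinctValueWithCountServerAggregator` or `GlobalMemoryManager` API; the class name and per-entry byte estimate are invented:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: a distinct-count aggregator that tracks a rough
// estimate of its heap usage and fails fast once a configured budget is
// exceeded, instead of letting a high-cardinality column exhaust the server.
class BoundedDistinctCount {
    private final Set<String> seen = new HashSet<>();
    private final long maxBytes;   // memory budget for this aggregation
    private long usedBytes;        // rough running estimate

    BoundedDistinctCount(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    void aggregate(String value) {
        if (seen.add(value)) {
            // Crude per-entry estimate: 2 bytes per char plus ~64 bytes of
            // HashSet/String overhead.
            usedBytes += 2L * value.length() + 64;
            if (usedBytes > maxBytes) {
                throw new IllegalStateException(
                    "COUNT(DISTINCT ...) exceeded memory budget of " + maxBytes + " bytes");
            }
        }
    }

    long count() {
        return seen.size();
    }
}
```

With such a cap, a query over a 20m-cardinality column would fail with a catchable exception rather than hanging the region server.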





[jira] [Commented] (PHOENIX-4644) Array modification functions should require two arguments

2018-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393898#comment-16393898
 ] 

Hudson commented on PHOENIX-4644:
-

SUCCESS: Integrated in Jenkins build PreCommit-PHOENIX-Build #1797 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1797/])
PHOENIX-4644 Array modification functions should require two arguments 
(jtaylor: rev d2715101e10ce2a5b3dfbaf8a8f23997fbc6ec58)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ArrayAppendFunctionIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/ArrayConcatFunction.java
* (add) 
phoenix-core/src/main/java/org/apache/phoenix/parse/ArrayModifierParseNode.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/parse/FunctionParseNode.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/ArrayModifierFunction.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/util/ExpressionUtil.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/ArrayAppendFunction.java
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/ArrayPrependFunction.java


> Array modification functions should require two arguments
> -
>
> Key: PHOENIX-4644
> URL: https://issues.apache.org/jira/browse/PHOENIX-4644
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4644_v1.patch, PHOENIX-4644_v2.patch
>
>
> ARRAY_APPEND, ARRAY_PREPEND, and ARRAY_CAT should require two arguments. 
> Also, if the second argument is null, we need to make sure not to have the 
> entire expression return null (but instead it should return the first 
> argument). To accomplish this, we need to have a ParseNode that overrides the 
> method controlling this and ensure it's used for these functions.
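The null semantics described above can be illustrated with a small stand-alone sketch. This is not Phoenix's `ArrayModifierFunction` implementation; the helper class and method names are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of the desired ARRAY_APPEND null semantics: a
// null second argument returns the first argument unchanged, instead of
// making the whole expression evaluate to null.
class ArrayAppendSketch {
    static <T> List<T> arrayAppend(List<T> array, T element) {
        if (array == null) {
            return null;               // a null array still yields null
        }
        if (element == null) {
            return array;              // null element: return the first argument as-is
        }
        List<T> result = new ArrayList<>(array);
        result.add(element);
        return result;
    }
}
```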





[jira] [Commented] (PHOENIX-4640) Don't consider STATS_UPDATE_FREQ_MS_ATTRIB in TTL for server side cache

2018-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393895#comment-16393895
 ] 

Hudson commented on PHOENIX-4640:
-

SUCCESS: Integrated in Jenkins build PreCommit-PHOENIX-Build #1797 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1797/])
PHOENIX-4640 Don't consider STATS_UPDATE_FREQ_MS_ATTRIB in TTL for (jtaylor: 
rev 32a8f3b290d29a1203485cf8aa6649f338277894)
* (edit) phoenix-core/src/main/java/org/apache/phoenix/cache/GlobalCache.java


> Don't consider STATS_UPDATE_FREQ_MS_ATTRIB in TTL for server side cache
> ---
>
> Key: PHOENIX-4640
> URL: https://issues.apache.org/jira/browse/PHOENIX-4640
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4640_v1.patch
>
>
> Since stats have their own client-side cache, there's no need to consider 
> STATS_UPDATE_FREQ_MS_ATTRIB for the server-side TTL cache.





[jira] [Commented] (PHOENIX-4644) Array modification functions should require two arguments

2018-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393886#comment-16393886
 ] 

Hudson commented on PHOENIX-4644:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #55 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/55/])
PHOENIX-4644 Array modification functions should require two arguments 
(jtaylor: rev c1391ba03635d392f9ff230d3e6217e2d0c1ff2f)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/ArrayModifierFunction.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/ArrayConcatFunction.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ArrayAppendFunctionIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/ArrayAppendFunction.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/parse/FunctionParseNode.java
* (add) 
phoenix-core/src/main/java/org/apache/phoenix/parse/ArrayModifierParseNode.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/ArrayPrependFunction.java
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/util/ExpressionUtil.java
PHOENIX-4644 Array modification functions should require two arguments 
(jtaylor: rev c78f58a6f454966b5e0e71fcbda8ae8b8c9f09c2)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/ArrayModifierFunction.java


> Array modification functions should require two arguments
> -
>
> Key: PHOENIX-4644
> URL: https://issues.apache.org/jira/browse/PHOENIX-4644
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4644_v1.patch, PHOENIX-4644_v2.patch
>
>
> ARRAY_APPEND, ARRAY_PREPEND, and ARRAY_CAT should require two arguments. 
> Also, if the second argument is null, we need to make sure not to have the 
> entire expression return null (but instead it should return the first 
> argument). To accomplish this, we need to have a ParseNode that overrides the 
> method controlling this and ensure it's used for these functions.





[jira] [Commented] (PHOENIX-1267) Set scan.setSmall(true) when appropriate

2018-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393885#comment-16393885
 ] 

Hudson commented on PHOENIX-1267:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #55 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/55/])
PHOENIX-1267 Set scan.setSmall(true) when appropriate (Abhishek Singh Chouhan) 
(jtaylor: rev 1cf0744024056828f42336a0fb92f0f3bff56961)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/QueryServicesOptions.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/BaseQueryPlan.java


> Set scan.setSmall(true) when appropriate
> 
>
> Key: PHOENIX-1267
> URL: https://issues.apache.org/jira/browse/PHOENIX-1267
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Abhishek Singh Chouhan
>Priority: Major
>  Labels: SFDC, newbie
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-1267.master.patch, PHOENIX-1267.master.patch, 
> smallscan.patch, smallscan2.patch, smallscan3.patch
>
>
> There's a nice optimization that has been in HBase for a while now to set a 
> scan as "small". This prevents extra RPC calls, I believe. We should add a 
> hint for queries that forces it to be set/not set, and make our best guess on 
> when it should default to true.





[jira] [Commented] (PHOENIX-4148) COUNT(DISTINCT(...)) should have a memory size limit

2018-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393884#comment-16393884
 ] 

Hudson commented on PHOENIX-4148:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #55 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/55/])
PHOENIX-4148 COUNT(DISTINCT(...)) should have a memory size limit (Lars Hofhansl) 
(jtaylor: rev e24b29d282266c0146e1e66dee274416e1921dae)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/aggregator/ClientAggregators.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/memory/GlobalMemoryManager.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/aggregator/Aggregators.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/aggregator/DistinctValueWithCountServerAggregator.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/memory/ChildMemoryManager.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java
* (edit) phoenix-core/src/test/java/org/apache/phoenix/util/TestUtil.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/expression/aggregator/ServerAggregators.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java


> COUNT(DISTINCT(...)) should have a memory size limit
> 
>
> Key: PHOENIX-4148
> URL: https://issues.apache.org/jira/browse/PHOENIX-4148
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: 4148.txt, PHOENIX-4148_v2.patch
>
>
> I just managed to kill (hang) a region server by issuing a 
> COUNT(DISTINCT(...)) query over a column with very high cardinality (20m in 
> this case).
> This is perhaps not a useful thing to do, but Phoenix should nonetheless not 
> allow a server to fail because of a query.
> [~jamestaylor], I see there is a GlobalMemoryManager, but I do not quite see how 
> I'd get a reference to one; one needs a tenant id, etc.





[jira] [Comment Edited] (PHOENIX-4553) HBase Master could not start with activated APACHE_PHOENIX parcel

2018-03-09 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393874#comment-16393874
 ] 

Pedro Boado edited comment on PHOENIX-4553 at 3/10/18 12:43 AM:


[~neospyk]  I've come across this issue again and noticed that the configured 
classpath is actually wrong. Only 
{code:java}
/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-server.jar
{code}
should be in the HBase classpath. After amending it, the logs are clean again.

Can you please confirm that amending the file 
/opt/cloudera/parcels/APACHE_PHOENIX/meta/phoenix_env.sh and replacing 

{code}
APPENDSTRING=`echo ${MYLIBDIR}/*.jar | sed 's/ /:/g'`
{code}

by

{code}
APPENDSTRING=`echo ${MYLIBDIR}/phoenix-*-server.jar | sed 's/ /:/g'`
{code}

solves the issue with the warnings appearing during RS startup?


was (Author: pboado):
[~neospyk]  I've come across this issue again and noticed that the classpath 
configured is actually wrong. Only 
{code:java}
/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-server.jar
{code}
should be in HBase classpath. After amending it logs are clear again.

Can you please confirm that by amending file 
/opt/cloudera/parcels/APACHE_PHOENIX/meta/phoenix_env.sh and replacing 

{code}
APPENDSTRING=`echo ${MYLIBDIR}/phoenix-*-server.jar | sed 's/ /:/g'`
{code}

by



> HBase Master could not start with activated APACHE_PHOENIX parcel
> -
>
> Key: PHOENIX-4553
> URL: https://issues.apache.org/jira/browse/PHOENIX-4553
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2-cdh5.11.2
> Environment: CDH 5.11.2
> Apache phoenix 4.13.2-cdh5.11.2
>Reporter: Ihor Krysenko
>Assignee: Pedro Boado
>Priority: Minor
> Attachments: hbase-master.log, hbase-region.log, master-stderr.log, 
> master-stdout.log, region-stderr.log, region-stdout.log
>
>
> After activating the parcel, the HBase Master and RegionServer could not 
> start. There seems to be a problem with the shaded thin-client: if it is 
> removed from the parcel, everything works fine.
> Please help.
> I think this [GitHub 
> commit|https://github.com/apache/phoenix/commit/e2c06b06fa1800b532e5d1ffa6f6ef8796cef213#diff-97e88e321a719f4389a2aa0e26fd0c8f]
>  may have introduced this bug.
> Below is the startup log for the HBase Master:
> {code:java}
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Exception in thread "RpcServer.reader=1,bindAddress=0.0.0.0,port=6" java.util.ServiceConfigurationError: org.apache.hadoop.security.SecurityInfo: Provider org.apache.phoenix.shaded.org.apache.hadoop.security.AnnotatedSecurityInfo not a subtype
> 	at java.util.ServiceLoader.fail(ServiceLoader.java:231)
> 	at java.util.ServiceLoader.access$300(ServiceLoader.java:181)
> 	at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:369)
> 	at java.util.ServiceLoader$1.next(ServiceLoader.java:445)
> 	at org.apache.hadoop.security.SecurityUtil.getKerberosInfo(SecurityUtil.java:333)
> 	at org.apache.hadoop.security.authorize.ServiceAuthorizationManager.authorize(ServiceAuthorizationManager.java:101)
> 	at org.apache.hadoop.hbase.ipc.RpcServer.authorize(RpcServer.java:2347)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Connection.authorizeConnection(RpcServer.java:1898)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Connection.processOneRpc(RpcServer.java:1772)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Connection.saslReadAndProcess(RpcServer.java:1335)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Connection.process(RpcServer.java:1614)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1596)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:854)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:635)
> 	at 
> 

[jira] [Comment Edited] (PHOENIX-4553) HBase Master could not start with activated APACHE_PHOENIX parcel

2018-03-09 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393874#comment-16393874
 ] 

Pedro Boado edited comment on PHOENIX-4553 at 3/10/18 12:42 AM:


[~neospyk]  I've come across this issue again and noticed that the configured 
classpath is actually wrong. Only 
{code:java}
/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-server.jar
{code}
should be in the HBase classpath. After amending it, the logs are clean again.

Can you please confirm that amending the file 
/opt/cloudera/parcels/APACHE_PHOENIX/meta/phoenix_env.sh and replacing 

{code}
APPENDSTRING=`echo ${MYLIBDIR}/phoenix-*-server.jar | sed 's/ /:/g'`
{code}

by




was (Author: pboado):
[~neospyk]  I've come across this issue again and noticed that the classpath 
configured is actually wrong. Only 
{code:java}
/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-server.jar
{code}
should be in HBase classpath. After amending it logs are clear again.

> HBase Master could not start with activated APACHE_PHOENIX parcel
> -
>
> Key: PHOENIX-4553
> URL: https://issues.apache.org/jira/browse/PHOENIX-4553
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2-cdh5.11.2
> Environment: CDH 5.11.2
> Apache phoenix 4.13.2-cdh5.11.2
>Reporter: Ihor Krysenko
>Assignee: Pedro Boado
>Priority: Minor
> Attachments: hbase-master.log, hbase-region.log, master-stderr.log, 
> master-stdout.log, region-stderr.log, region-stdout.log
>
>
> After activating the parcel, the HBase Master and RegionServer could not 
> start. There seems to be a problem with the shaded thin-client: if it is 
> removed from the parcel, everything works fine.
> Please help.
> I think this [GitHub 
> commit|https://github.com/apache/phoenix/commit/e2c06b06fa1800b532e5d1ffa6f6ef8796cef213#diff-97e88e321a719f4389a2aa0e26fd0c8f]
>  may have introduced this bug.
> Below is the startup log for the HBase Master:
> {code:java}
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Exception in thread "RpcServer.reader=1,bindAddress=0.0.0.0,port=6" java.util.ServiceConfigurationError: org.apache.hadoop.security.SecurityInfo: Provider org.apache.phoenix.shaded.org.apache.hadoop.security.AnnotatedSecurityInfo not a subtype
> 	at java.util.ServiceLoader.fail(ServiceLoader.java:231)
> 	at java.util.ServiceLoader.access$300(ServiceLoader.java:181)
> 	at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:369)
> 	at java.util.ServiceLoader$1.next(ServiceLoader.java:445)
> 	at org.apache.hadoop.security.SecurityUtil.getKerberosInfo(SecurityUtil.java:333)
> 	at org.apache.hadoop.security.authorize.ServiceAuthorizationManager.authorize(ServiceAuthorizationManager.java:101)
> 	at org.apache.hadoop.hbase.ipc.RpcServer.authorize(RpcServer.java:2347)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Connection.authorizeConnection(RpcServer.java:1898)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Connection.processOneRpc(RpcServer.java:1772)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Connection.saslReadAndProcess(RpcServer.java:1335)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Connection.process(RpcServer.java:1614)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1596)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:854)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:635)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:611)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745){code}





[jira] [Commented] (PHOENIX-4553) HBase Master could not start with activated APACHE_PHOENIX parcel

2018-03-09 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393874#comment-16393874
 ] 

Pedro Boado commented on PHOENIX-4553:
--

[~neospyk]  I've come across this issue again and noticed that the configured 
classpath is actually wrong. Only 
{code:java}
/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-server.jar
{code}
should be in the HBase classpath. After amending it, the logs are clean again.

> HBase Master could not start with activated APACHE_PHOENIX parcel
> -
>
> Key: PHOENIX-4553
> URL: https://issues.apache.org/jira/browse/PHOENIX-4553
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2-cdh5.11.2
> Environment: CDH 5.11.2
> Apache phoenix 4.13.2-cdh5.11.2
>Reporter: Ihor Krysenko
>Priority: Minor
> Attachments: hbase-master.log, hbase-region.log, master-stderr.log, 
> master-stdout.log, region-stderr.log, region-stdout.log
>
>
> After activating the parcel, the HBase Master and RegionServer could not 
> start. There seems to be a problem with the shaded thin-client: if it is 
> removed from the parcel, everything works fine.
> Please help.
> I think this [GitHub 
> commit|https://github.com/apache/phoenix/commit/e2c06b06fa1800b532e5d1ffa6f6ef8796cef213#diff-97e88e321a719f4389a2aa0e26fd0c8f]
>  may have introduced this bug.
> Below is the startup log for the HBase Master:
> {code:java}
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Exception in thread "RpcServer.reader=1,bindAddress=0.0.0.0,port=6" java.util.ServiceConfigurationError: org.apache.hadoop.security.SecurityInfo: Provider org.apache.phoenix.shaded.org.apache.hadoop.security.AnnotatedSecurityInfo not a subtype
> 	at java.util.ServiceLoader.fail(ServiceLoader.java:231)
> 	at java.util.ServiceLoader.access$300(ServiceLoader.java:181)
> 	at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:369)
> 	at java.util.ServiceLoader$1.next(ServiceLoader.java:445)
> 	at org.apache.hadoop.security.SecurityUtil.getKerberosInfo(SecurityUtil.java:333)
> 	at org.apache.hadoop.security.authorize.ServiceAuthorizationManager.authorize(ServiceAuthorizationManager.java:101)
> 	at org.apache.hadoop.hbase.ipc.RpcServer.authorize(RpcServer.java:2347)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Connection.authorizeConnection(RpcServer.java:1898)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Connection.processOneRpc(RpcServer.java:1772)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Connection.saslReadAndProcess(RpcServer.java:1335)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Connection.process(RpcServer.java:1614)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1596)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:854)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:635)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:611)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745){code}





[jira] [Assigned] (PHOENIX-4553) HBase Master could not start with activated APACHE_PHOENIX parcel

2018-03-09 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado reassigned PHOENIX-4553:


Assignee: Pedro Boado

> HBase Master could not start with activated APACHE_PHOENIX parcel
> -
>
> Key: PHOENIX-4553
> URL: https://issues.apache.org/jira/browse/PHOENIX-4553
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2-cdh5.11.2
> Environment: CDH 5.11.2
> Apache phoenix 4.13.2-cdh5.11.2
>Reporter: Ihor Krysenko
>Assignee: Pedro Boado
>Priority: Minor
> Attachments: hbase-master.log, hbase-region.log, master-stderr.log, 
> master-stdout.log, region-stderr.log, region-stdout.log
>
>
> After activating the parcel, the HBase Master and RegionServer could not 
> start. There seems to be a problem with the shaded thin-client: if it is 
> removed from the parcel, everything works fine.
> Please help.
> I think this [GitHub 
> commit|https://github.com/apache/phoenix/commit/e2c06b06fa1800b532e5d1ffa6f6ef8796cef213#diff-97e88e321a719f4389a2aa0e26fd0c8f]
>  may have introduced this bug.
> Below is the startup log for the HBase Master:
> {code:java}
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.2-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.2-cdh5.11.2-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Exception in thread "RpcServer.reader=1,bindAddress=0.0.0.0,port=6" java.util.ServiceConfigurationError: org.apache.hadoop.security.SecurityInfo: Provider org.apache.phoenix.shaded.org.apache.hadoop.security.AnnotatedSecurityInfo not a subtype
> 	at java.util.ServiceLoader.fail(ServiceLoader.java:231)
> 	at java.util.ServiceLoader.access$300(ServiceLoader.java:181)
> 	at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:369)
> 	at java.util.ServiceLoader$1.next(ServiceLoader.java:445)
> 	at org.apache.hadoop.security.SecurityUtil.getKerberosInfo(SecurityUtil.java:333)
> 	at org.apache.hadoop.security.authorize.ServiceAuthorizationManager.authorize(ServiceAuthorizationManager.java:101)
> 	at org.apache.hadoop.hbase.ipc.RpcServer.authorize(RpcServer.java:2347)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Connection.authorizeConnection(RpcServer.java:1898)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Connection.processOneRpc(RpcServer.java:1772)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Connection.saslReadAndProcess(RpcServer.java:1335)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Connection.process(RpcServer.java:1614)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1596)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:854)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:635)
> 	at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:611)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4640) Don't consider STATS_UPDATE_FREQ_MS_ATTRIB in TTL for server side cache

2018-03-09 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4640.
---
Resolution: Fixed

> Don't consider STATS_UPDATE_FREQ_MS_ATTRIB in TTL for server side cache
> ---
>
> Key: PHOENIX-4640
> URL: https://issues.apache.org/jira/browse/PHOENIX-4640
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4640_v1.patch
>
>
> Since stats have their own client-side cache, there's no need to consider 
> STATS_UPDATE_FREQ_MS_ATTRIB for the server-side TTL cache.





[jira] [Updated] (PHOENIX-4148) COUNT(DISTINCT(...)) should have a memory size limit

2018-03-09 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4148:
--
Fix Version/s: 5.0.0

> COUNT(DISTINCT(...)) should have a memory size limit
> 
>
> Key: PHOENIX-4148
> URL: https://issues.apache.org/jira/browse/PHOENIX-4148
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: 4148.txt, PHOENIX-4148_v2.patch
>
>
> I just managed to kill (hang) a region server by issuing a 
> COUNT(DISTINCT(...)) query over a column with very high cardinality (20m in 
> this case).
> This is perhaps not a useful thing to do, but Phoenix should nonetheless not 
> allow a server to fail because of a query.
> [~jamestaylor], I see there is a GlobalMemoryManager, but I do not quite see 
> how I'd get a reference to one; one needs a tenant id, etc.
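The safeguard being asked for amounts to metering the distinct-value set and failing the query once a budget is exceeded, instead of letting the region server hang. A rough, self-contained sketch of that idea — illustrative only; BoundedDistinctCounter and its size accounting are made up here and are not Phoenix's GlobalMemoryManager API:

```java
import java.util.HashSet;
import java.util.Set;

public class BoundedDistinctCounter {
    static class MemoryLimitExceededException extends RuntimeException {
        MemoryLimitExceededException(String msg) { super(msg); }
    }

    private final Set<String> seen = new HashSet<>();
    private final long maxBytes;   // memory budget for this aggregation
    private long usedBytes;

    public BoundedDistinctCounter(long maxBytes) { this.maxBytes = maxBytes; }

    public void add(String value) {
        if (seen.add(value)) {
            // crude per-entry estimate: object header + 2 bytes per char
            usedBytes += 16 + 2L * value.length();
            if (usedBytes > maxBytes) {
                throw new MemoryLimitExceededException(
                        "DISTINCT aggregation exceeded " + maxBytes + " bytes");
            }
        }
    }

    public long count() { return seen.size(); }
}
```

With a 20M-cardinality column, a counter like this fails fast with a query error instead of exhausting the server's heap.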





[jira] [Resolved] (PHOENIX-4148) COUNT(DISTINCT(...)) should have a memory size limit

2018-03-09 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4148.
---
Resolution: Fixed

> COUNT(DISTINCT(...)) should have a memory size limit
> 
>
> Key: PHOENIX-4148
> URL: https://issues.apache.org/jira/browse/PHOENIX-4148
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: 4148.txt, PHOENIX-4148_v2.patch
>
>
> I just managed to kill (hang) a region server by issuing a 
> COUNT(DISTINCT(...)) query over a column with very high cardinality (20m in 
> this case).
> This is perhaps not a useful thing to do, but Phoenix should nonetheless not 
> allow a server to fail because of a query.
> [~jamestaylor], I see there is a GlobalMemoryManager, but I do not quite see 
> how I'd get a reference to one; one needs a tenant id, etc.





[jira] [Updated] (PHOENIX-1267) Set scan.setSmall(true) when appropriate

2018-03-09 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1267:
--
Fix Version/s: 5.0.0

> Set scan.setSmall(true) when appropriate
> 
>
> Key: PHOENIX-1267
> URL: https://issues.apache.org/jira/browse/PHOENIX-1267
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Abhishek Singh Chouhan
>Priority: Major
>  Labels: SFDC, newbie
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-1267.master.patch, PHOENIX-1267.master.patch, 
> smallscan.patch, smallscan2.patch, smallscan3.patch
>
>
> There's a nice optimization that has been in HBase for a while now to set a 
> scan as "small". This prevents extra RPC calls, I believe. We should add a 
> query hint that forces it to be set or not set, and make our best guess about 
> when it should default to true.
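For reference, the HBase-side switch is Scan.setSmall(boolean) (available since the 0.96 line, deprecated in HBase 2.0 in favor of scan read types). A sketch of flagging a tightly bounded scan as small — assuming the hbase-client dependency on the classpath; this is illustrative, not Phoenix's planner code:

```java
import org.apache.hadoop.hbase.client.Scan;

public class SmallScanSketch {
    // A scan whose result is expected to fit in one data block (~64 KB by default)
    // can be marked "small"; HBase then serves it in a single RPC instead of the
    // usual open/next/close sequence.
    public static Scan buildSmallScan(byte[] startRow, byte[] stopRow) {
        Scan scan = new Scan(startRow, stopRow);
        scan.setSmall(true);
        return scan;
    }
}
```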





[jira] [Updated] (PHOENIX-4640) Don't consider STATS_UPDATE_FREQ_MS_ATTRIB in TTL for server side cache

2018-03-09 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4640:
--
Fix Version/s: 5.0.0

> Don't consider STATS_UPDATE_FREQ_MS_ATTRIB in TTL for server side cache
> ---
>
> Key: PHOENIX-4640
> URL: https://issues.apache.org/jira/browse/PHOENIX-4640
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4640_v1.patch
>
>
> Since stats have their own client-side cache, there's no need to consider 
> STATS_UPDATE_FREQ_MS_ATTRIB for the server-side TTL cache.





[jira] [Updated] (PHOENIX-4644) Array modification functions should require two arguments

2018-03-09 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4644:
--
Fix Version/s: 5.0.0

> Array modification functions should require two arguments
> -
>
> Key: PHOENIX-4644
> URL: https://issues.apache.org/jira/browse/PHOENIX-4644
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4644_v1.patch, PHOENIX-4644_v2.patch
>
>
> ARRAY_APPEND, ARRAY_PREPEND, and ARRAY_CAT should require two arguments. 
> Also, if the second argument is null, we need to make sure not to have the 
> entire expression return null (but instead it should return the first 
> argument). To accomplish this, we need to have a ParseNode that overrides the 
> method controlling this and ensure it's used for these functions.
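The intended null semantics can be illustrated with plain Java — a hedged sketch of the behavior described above (arrayAppend here is a made-up stand-in, not Phoenix's actual ARRAY_APPEND implementation):

```java
import java.util.Arrays;

public class ArrayAppendSemantics {
    // ARRAY_APPEND(arr, elem): a null second argument should yield the first
    // argument unchanged, not make the whole expression null.
    public static Integer[] arrayAppend(Integer[] arr, Integer elem) {
        if (arr == null) return null;
        if (elem == null) return arr;              // return first argument as-is
        Integer[] out = Arrays.copyOf(arr, arr.length + 1);
        out[arr.length] = elem;
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(arrayAppend(new Integer[]{1, 2}, null))); // [1, 2]
        System.out.println(Arrays.toString(arrayAppend(new Integer[]{1, 2}, 3)));    // [1, 2, 3]
    }
}
```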





[jira] [Commented] (PHOENIX-1556) Base hash versus sort merge join decision on cost

2018-03-09 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393819#comment-16393819
 ] 

James Taylor commented on PHOENIX-1556:
---

Looks like this wasn't committed to some of the active branches: 4.x-HBase-0.98 
and 4.x-HBase-1.1. Our active branches are 4.x-HBase-0.98, 4.x-HBase-1.1, 
4.x-HBase-1.2, 4.x-cdh5.11.2, 4.x-HBase-1.3, master, and 5.x-HBase-2.0. Would 
you mind committing this to any of those branches you missed, please, 
[~maryannxue]?

> Base hash versus sort merge join decision on cost
> -
>
> Key: PHOENIX-1556
> URL: https://issues.apache.org/jira/browse/PHOENIX-1556
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Maryann Xue
>Priority: Major
>  Labels: CostBasedOptimization
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-1556.patch
>
>
> At compile time, we know how many guideposts (i.e. how many bytes) will be 
> scanned for the RHS table. We should, by default, base the decision of using 
> the hash join versus many-to-many join on this information.
> Another criteria (as we've seen in PHOENIX-4508) is whether or not the tables 
> being joined are already ordered by the join key. In that case, it's better 
> to always use the sort merge join.
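The decision rule described above can be sketched as follows — an illustrative model, not Phoenix's actual optimizer code (the class name and the threshold parameter are assumptions):

```java
public class JoinStrategyChooser {
    public enum JoinStrategy { HASH, SORT_MERGE }

    /**
     * rhsBytes: bytes to be scanned for the RHS, estimated from guideposts at compile time.
     * maxHashCacheBytes: budget for building/broadcasting the RHS hash cache.
     * bothSidesOrderedOnJoinKey: per PHOENIX-4508, prefer sort-merge when the inputs
     * are already ordered by the join key.
     */
    public static JoinStrategy choose(long rhsBytes, long maxHashCacheBytes,
                                      boolean bothSidesOrderedOnJoinKey) {
        if (bothSidesOrderedOnJoinKey) return JoinStrategy.SORT_MERGE;
        return rhsBytes <= maxHashCacheBytes ? JoinStrategy.HASH : JoinStrategy.SORT_MERGE;
    }
}
```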





[jira] [Commented] (PHOENIX-4585) Prune local index regions used for join queries

2018-03-09 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393817#comment-16393817
 ] 

James Taylor commented on PHOENIX-4585:
---

Looks like this wasn't committed to some of the active branches: 4.x-HBase-0.98 
and 4.x-HBase-1.1. Our active branches are 4.x-HBase-0.98, 4.x-HBase-1.1, 
4.x-HBase-1.2, 4.x-cdh5.11.2, 4.x-HBase-1.3, master, and 5.x-HBase-2.0. Would 
you mind committing this to any of those branches you missed, please, 
[~maryannxue]?

> Prune local index regions used for join queries
> ---
>
> Key: PHOENIX-4585
> URL: https://issues.apache.org/jira/browse/PHOENIX-4585
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: Maryann Xue
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4585.patch
>
>
> Some remaining work from PHOENIX-3941: we currently do not capture the data 
> plan as part of the index plan due to the way in which we rewrite the 
> statement during join processing. See comment here for more detail: 
> https://issues.apache.org/jira/browse/PHOENIX-3941?focusedCommentId=16351017=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16351017





[jira] [Commented] (PHOENIX-4611) Not nullable column impact on join query plans

2018-03-09 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393813#comment-16393813
 ] 

James Taylor commented on PHOENIX-4611:
---

Looks like this wasn't committed to some of the active branches: 4.x-HBase-0.98 
and 4.x-HBase-1.1. Would you mind committing this to those branches too, please, 
[~maryannxue]?

> Not nullable column impact on join query plans
> --
>
> Key: PHOENIX-4611
> URL: https://issues.apache.org/jira/browse/PHOENIX-4611
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Priority: Major
>
> With PHOENIX-2566, there's a subtle change in projected tables in that a 
> column may end up being not nullable whereas before it was nullable when the 
> family name is not null. I've kept the old behavior with 
> [this|https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=blobdiff;f=phoenix-core/src/main/java/org/apache/phoenix/compile/TupleProjectionCompiler.java;h=fccded2a896855a2a01d727b992f954a1d3fa8ab;hp=d0b900c1a9c21609b89065307433a0d37b12b72a;hb=82ba1417fdd69a0ac57cbcf2f2327d4aa371bcd9;hpb=e126dd1dda5aa80e8296d3b0c84736b22b658999]
>  commit, but would you mind confirming what the right thing to do is, 
> [~maryannxue]?
> Without this change, the explain plan changes in 
> SortMergeJoinMoreIT.testBug2894() and the assert fails. Looks like the 
> compiler ends up changing the row ordering.





[jira] [Commented] (PHOENIX-4616) Move join query optimization out from QueryCompiler into QueryOptimizer

2018-03-09 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393747#comment-16393747
 ] 

James Taylor commented on PHOENIX-4616:
---

bq. I thought it would be better to keep the optimization logic all in one 
place so it would be easier to maintain and extend later on. One way is to use 
the QueryPlanVisitor here to get rid of "instanceof" check while still keeping 
everything in QueryOptimizer. What do you think?
The visitor would be an improvement over an instanceof check. I suppose having 
a method to optimize a query plan becomes more necessary if the logic is very 
different across the different query plans. I'm fine with whatever you think is 
best.

> Move join query optimization out from QueryCompiler into QueryOptimizer
> ---
>
> Key: PHOENIX-4616
> URL: https://issues.apache.org/jira/browse/PHOENIX-4616
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Maryann Xue
>Assignee: Maryann Xue
>Priority: Major
> Attachments: PHOENIX-4616.patch
>
>
> Currently we do optimization for join queries inside QueryCompiler, which 
> makes the APIs and code logic confusing, so we need to move join optimization 
> logic into QueryOptimizer.
>  Similarly, but probably with a different approach, we need to optimize UNION 
> ALL queries and derived table sub-queries in QueryOptimizer.optimize().
> Please also refer to this comment:
> https://issues.apache.org/jira/browse/PHOENIX-4585?focusedCommentId=16367616=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16367616





[jira] [Commented] (PHOENIX-4644) Array modification functions should require two arguments

2018-03-09 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393732#comment-16393732
 ] 

James Taylor commented on PHOENIX-4644:
---

Thanks for the review, [~tdsilva]. I attached a v2 patch that adds tests and 
fixes the NPE (considerably harder than I expected...)

> Array modification functions should require two arguments
> -
>
> Key: PHOENIX-4644
> URL: https://issues.apache.org/jira/browse/PHOENIX-4644
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4644_v1.patch, PHOENIX-4644_v2.patch
>
>
> ARRAY_APPEND, ARRAY_PREPEND, and ARRAY_CAT should require two arguments. 
> Also, if the second argument is null, we need to make sure not to have the 
> entire expression return null (but instead it should return the first 
> argument). To accomplish this, we need to have a ParseNode that overrides the 
> method controlling this and ensure it's used for these functions.





[jira] [Updated] (PHOENIX-4644) Array modification functions should require two arguments

2018-03-09 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4644:
--
Attachment: PHOENIX-4644_v2.patch

> Array modification functions should require two arguments
> -
>
> Key: PHOENIX-4644
> URL: https://issues.apache.org/jira/browse/PHOENIX-4644
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4644_v1.patch, PHOENIX-4644_v2.patch
>
>
> ARRAY_APPEND, ARRAY_PREPEND, and ARRAY_CAT should require two arguments. 
> Also, if the second argument is null, we need to make sure not to have the 
> entire expression return null (but instead it should return the first 
> argument). To accomplish this, we need to have a ParseNode that overrides the 
> method controlling this and ensure it's used for these functions.





[jira] [Assigned] (PHOENIX-4648) Allow Add jar "path" support hdfs

2018-03-09 Thread Chinmay Kulkarni (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni reassigned PHOENIX-4648:
-

Assignee: Chinmay Kulkarni

> Allow Add jar "path" support hdfs
> -
>
> Key: PHOENIX-4648
> URL: https://issues.apache.org/jira/browse/PHOENIX-4648
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Ethan Wang
>Assignee: Chinmay Kulkarni
>Priority: Major
>
> Allow Add jar "path" support hdfs
> Currently, add jar "path" only supports the local Linux file system. Ideally, 
> we want to support
>  
> add jar "hdfs://ethanwang.me:9000/dir/a.jar"
> It will copy the a.jar to the local dynamic.jar.dir.





[jira] [Created] (PHOENIX-4648) Allow Add jar "path" support hdfs

2018-03-09 Thread Ethan Wang (JIRA)
Ethan Wang created PHOENIX-4648:
---

 Summary: Allow Add jar "path" support hdfs
 Key: PHOENIX-4648
 URL: https://issues.apache.org/jira/browse/PHOENIX-4648
 Project: Phoenix
  Issue Type: New Feature
Reporter: Ethan Wang


Allow Add jar "path" support hdfs

Currently, add jar "path" only supports the local Linux file system. Ideally, we 
want to support

 

add jar "hdfs://ethanwang.me:9000/dir/a.jar"

It will copy the a.jar to the local dynamic.jar.dir.
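The described behavior maps onto the standard Hadoop FileSystem API; a hypothetical helper, assuming hadoop-common on the classpath (this is a sketch, not the actual patch for this issue):

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsJarFetcher {
    // Copy a jar from an hdfs:// URI into the local dynamic-jars directory so the
    // existing local-filesystem loading path can pick it up.
    public static Path fetchToLocal(String hdfsJarUri, String localJarDir) throws Exception {
        FileSystem fs = FileSystem.get(URI.create(hdfsJarUri), new Configuration());
        Path src = new Path(hdfsJarUri);
        Path dst = new Path(localJarDir, src.getName());
        fs.copyToLocalFile(src, dst);   // source stays in place; the local copy gets loaded
        return dst;
    }
}
```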





[jira] [Commented] (PHOENIX-4648) Allow Add jar "path" support hdfs

2018-03-09 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393527#comment-16393527
 ] 

Ethan Wang commented on PHOENIX-4648:
-

[~ckulkarni]

[~chrajeshbab...@gmail.com]

> Allow Add jar "path" support hdfs
> -
>
> Key: PHOENIX-4648
> URL: https://issues.apache.org/jira/browse/PHOENIX-4648
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Ethan Wang
>Priority: Major
>
> Allow Add jar "path" support hdfs
> Currently, add jar "path" only supports the local Linux file system. Ideally, 
> we want to support
>  
> add jar "hdfs://ethanwang.me:9000/dir/a.jar"
> It will copy the a.jar to the local dynamic.jar.dir.





[jira] [Commented] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-03-09 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393524#comment-16393524
 ] 

Ethan Wang commented on PHOENIX-4231:
-

[~ckulkarni] thanks. done with Phoenix-4231-v2.patch.

> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
> Attachments: PHOENIX-4231-v2.patch, PHOENIX-4231.patch
>
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for similar restriction via configuration for jars loaded 
> via hdfs:// URIs.





[jira] [Updated] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-03-09 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang updated PHOENIX-4231:

Attachment: PHOENIX-4231-v2.patch

> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
> Attachments: PHOENIX-4231-v2.patch, PHOENIX-4231.patch
>
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for similar restriction via configuration for jars loaded 
> via hdfs:// URIs.





[jira] [Commented] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-03-09 Thread Chinmay Kulkarni (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393489#comment-16393489
 ] 

Chinmay Kulkarni commented on PHOENIX-4231:
---

[~aertoria] please remove unnecessary changes to files MutationState.java and 
PhoenixStatement.java. Everything else lgtm.

 

> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
> Attachments: PHOENIX-4231.patch
>
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for similar restriction via configuration for jars loaded 
> via hdfs:// URIs.





[jira] [Commented] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-03-09 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393477#comment-16393477
 ] 

Ethan Wang commented on PHOENIX-4231:
-

Thanks [~apurtell]

> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
> Attachments: PHOENIX-4231.patch
>
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for similar restriction via configuration for jars loaded 
> via hdfs:// URIs.





[jira] [Commented] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-03-09 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393458#comment-16393458
 ] 

Andrew Purtell commented on PHOENIX-4231:
-

+1

Any concerns about commit? If not I will do so shortly.

> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
> Attachments: PHOENIX-4231.patch
>
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for similar restriction via configuration for jars loaded 
> via hdfs:// URIs.





[jira] [Commented] (PHOENIX-4634) Looking up a parent index table of a tenant child view fails in BaseColumnResolver createTableRef()

2018-03-09 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393285#comment-16393285
 ] 

James Taylor commented on PHOENIX-4634:
---

bq. I didn't reset it at the start of updateCache, because I thought it's 
possible we might return the parent view instead of the index because of the 
following code that tries to avoid the rpc:
Good point, but I think we'd want to honor the updateCacheFrequency. We should 
perhaps rethink the value we use for it when we create that PTable here:
{code}
// add the index table with a new name so that it does not conflict with the existing index table
// also set update cache frequency to never since the renamed index is not present on the server
indexesToAdd.add(PTableImpl.makePTable(index,
        modifiedIndexName, viewStatement, Long.MAX_VALUE, view.getTenantId()));
{code}
If we always want to re-resolve it, then instead of Long.MAX_VALUE how about we 
pass in 0? Alternatively, we could use the updateCacheFrequency from the 
parentTable.

> Looking up a parent index table of a tenant child view fails in 
> BaseColumnResolver createTableRef()
> ---
>
> Key: PHOENIX-4634
> URL: https://issues.apache.org/jira/browse/PHOENIX-4634
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4634-4.x-HBase-0.98.patch, 
> PHOENIX-4634-v2.patch, PHOENIX-4634-v3.patch, PHOENIX-4634-v4.patch
>
>
> If we are looking up a parent table index of a child view, we need to 
> resolve the view, which will load the parent table indexes (instead of trying 
> to resolve the parent table index directly). 
>  
> {code:java}
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=Schema.Schema.Index#Schma.View
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:577)
> at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:391)
> at 
> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:228)
> at 
> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:206)
> at org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:226)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:146)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:103)
> at org.apache.phoenix.compile.DeleteCompiler.compile(DeleteCompiler.java:501)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeleteStatement.compilePlan(PhoenixStatement.java:770)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeleteStatement.compilePlan(PhoenixStatement.java:758)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:386)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:376)
> {code}





[jira] [Commented] (PHOENIX-4634) Looking up a parent index table of a tenant child view fails in BaseColumnResolver createTableRef()

2018-03-09 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393238#comment-16393238
 ] 

Thomas D'Silva commented on PHOENIX-4634:
-

[~jamestaylor]
Thanks for the review. I didn't reset it at the start of updateCache, because I 
thought it's possible we might return the parent view instead of the index 
because of the following code that tries to avoid the rpc:

{code}
// Do not make rpc to getTable if
// 1. table is a system table
// 2. table was already resolved as of that timestamp
// 3. table does not have a ROW_TIMESTAMP column and age is less than UPDATE_CACHE_FREQUENCY
if (table != null && !alwaysHitServer
        && (systemTable || resolvedTimestamp == tableResolvedTimestamp
            || (table.getRowTimestampColPos() == -1
                && connection.getMetaDataCache().getAge(tableRef) < table.getUpdateCacheFrequency()))) {
    return new MetaDataMutationResult(MutationCode.TABLE_ALREADY_EXISTS,
            QueryConstants.UNSET_TIMESTAMP, table);
}
{code}

You're right, I should have reset the result in the catch block; I will make 
that change.

I think it's happening more broadly. This is the exception we got (even with the 
present fix in PhoenixRuntime). It looks like updateCache is being called for 
the inherited index.

{code}
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
undefined. tableName=Schema.Schema.Index#Schma.View

org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:577)
at 
org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:391)
at 
org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:228)
at 
org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:206)
at org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:226)
at 
org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:146)
{code}

I couldn't think of a better way to fix this. Should we modify getTable on the 
server to check if the table is an inherited index and handle it there 
(similar to what we are currently doing on the client)?
{code}
// Tack on view statement to index to get proper filtering for view
String viewStatement = IndexUtil.rewriteViewStatement(connection, index,
        parentTable, view.getViewStatement());
PName modifiedIndexName = PNameFactory.newName(index.getName().getString()
        + QueryConstants.CHILD_VIEW_INDEX_NAME_SEPARATOR + view.getName().getString());
// add the index table with a new name so that it does not conflict with the existing index table
// also set update cache frequency to never since the renamed index is not present on the server
indexesToAdd.add(PTableImpl.makePTable(index,
        modifiedIndexName, viewStatement, Long.MAX_VALUE, view.getTenantId()));
{code}


> Looking up a parent index table of a tenant child view fails in 
> BaseColumnResolver createTableRef()
> ---
>
> Key: PHOENIX-4634
> URL: https://issues.apache.org/jira/browse/PHOENIX-4634
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4634-4.x-HBase-0.98.patch, 
> PHOENIX-4634-v2.patch, PHOENIX-4634-v3.patch, PHOENIX-4634-v4.patch
>
>
> If we are looking up a parent table index of a child view, we need to 
> resolve the view, which will load the parent table indexes (instead of trying 
> to resolve the parent table index directly). 
>  
> {code:java}
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=Schema.Schema.Index#Schma.View
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:577)
> at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:391)
> at 
> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:228)
> at 
> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:206)
> at org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:226)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:146)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:103)
> at org.apache.phoenix.compile.DeleteCompiler.compile(DeleteCompiler.java:501)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeleteStatement.compilePlan(PhoenixStatement.java:770)
> at 
> 

Re: [VOTE] Include python-phoenixdb into Phoenix

2018-03-09 Thread ethan wang
+1


On March 9, 2018 at 4:42:50 AM, rajeshb...@apache.org 
(chrajeshbab...@gmail.com) wrote:

+1 

On Thu, Mar 8, 2018 at 9:36 PM, Josh Elser  wrote: 

> +1 
> 
> 
> On 3/8/18 7:02 AM, Ankit Singhal wrote: 
> 
>> Lukas Lalinsky has graciously agreed to contribute his work on python 
>> library for accessing Phoenix to Apache Phoenix project. 
>> 
>> 
>> The details of the project can be viewed at:- 
>> 
>> http://python-phoenixdb.readthedocs.io/en/latest/ 
>> 
>> 
>> Code of the project:- 
>> 
>> https://github.com/lalinsky/python-phoenixdb 
>> 
>> 
>> Our last discussion on the same:- 
>> 
>> https://www.mail-archive.com/dev@phoenix.apache.org/msg45424.html 
>> 
>> 
>> The IP clearance steps carried out for this code can be viewed at:- 
>> 
>> https://issues.apache.org/jira/browse/PHOENIX-4636 
>> 
>> 
>> Please vote to accept this code into the project: 
>> 
>> 
>> +1 Accept the code donation 
>> 
>> -1 Do not accept the code donation because... 
>> 
>> 
>> Please note that this vote is to ensure that the PMC has carried out the 
>> necessary legal oversights on the incoming code and fulfilled their legal 
>> obligations to the ASF so only PMC votes are binding. 
>> 
>> 
>> However, community members are still welcome to review the code and we 
>> look 
>> forward to working with the community to develop this code further in the 
>> future. 
>> 
>> 
>> Thanks, 
>> Ankit Singhal 
>> 
>> 


[jira] [Updated] (PHOENIX-4634) Looking up a parent index table of a tenant child view fails in BaseColumnResolver createTableRef()

2018-03-09 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4634:
--
Attachment: PHOENIX-4634-v4.patch

> Looking up a parent index table of a tenant child view fails in 
> BaseColumnResolver createTableRef()
> ---
>
> Key: PHOENIX-4634
> URL: https://issues.apache.org/jira/browse/PHOENIX-4634
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4634-4.x-HBase-0.98.patch, 
> PHOENIX-4634-v2.patch, PHOENIX-4634-v3.patch, PHOENIX-4634-v4.patch
>
>
> If we are looking up a parent table index of a child view, we need to 
> resolve the view, which will load the parent table's indexes (instead of 
> trying to resolve the parent table index directly). 
>  
> {code:java}
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=Schema.Schema.Index#Schma.View
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:577)
> at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.(FromCompiler.java:391)
> at 
> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:228)
> at 
> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:206)
> at org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:226)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:146)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:103)
> at org.apache.phoenix.compile.DeleteCompiler.compile(DeleteCompiler.java:501)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeleteStatement.compilePlan(PhoenixStatement.java:770)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeleteStatement.compilePlan(PhoenixStatement.java:758)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:386)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:376)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4634) Looking up a parent index table of a tenant child view fails in BaseColumnResolver createTableRef()

2018-03-09 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393164#comment-16393164
 ] 

James Taylor commented on PHOENIX-4634:
---

Thanks for the revised patch, [~tdsilva]. Couple of questions/comments:
- I was thinking we could just reset tableName and schemaName at the start of 
the method (and not repeat this code, which is done a few lines up). Would that 
work? I suppose we need to remember the original full table name (unless we 
can recreate it easily). See v4 of the patch I attached.
{code}
+boolean resolveInheritedIndex =
+        tableName.contains(QueryConstants.CHILD_VIEW_INDEX_NAME_SEPARATOR);
+if (resolveInheritedIndex) {
+    String viewName =
+            SchemaUtil.getTableNameFromFullName(tableName,
+                QueryConstants.CHILD_VIEW_INDEX_NAME_SEPARATOR);
+    schemaName = SchemaUtil.getSchemaNameFromFullName(viewName);
+    tableName = SchemaUtil.getTableNameFromFullName(viewName);
+    try {
+        tableRef = connection.getTableRef(new PTableKey(tenantId, viewName));
+        table = tableRef.getTable();
+        tableTimestamp = table.getTimeStamp();
+        tableResolvedTimestamp = tableRef.getResolvedTimeStamp();
+    } catch (TableNotFoundException e) {
+    }
+}
{code}
- The end if block doesn't look right as you'd be resetting the result to have 
a null table so the if would always be false. I think you just need to reset 
result in the catch block. See v4 of the patch I attached.
{code}
+if (resolveInheritedIndex) {
+    // reset the result as we looked up the parent view 
+    result = new MetaDataMutationResult(MutationCode.TABLE_NOT_FOUND,
+            QueryConstants.UNSET_TIMESTAMP, null);
+    if (result.getTable() != null) {
+        try {
+            // look up the inherited index
+            tableRef = connection.getTableRef(new PTableKey(tenantId, fullTableName));
+            table = tableRef.getTable();
+            result =
+                    new MetaDataMutationResult(MutationCode.TABLE_ALREADY_EXISTS,
+                        QueryConstants.UNSET_TIMESTAMP, table);
+        } catch (TableNotFoundException e) {
+        }
+    }
+}
{code}
- Higher-level question: why/when is this issue occurring? Is it only from the 
PhoenixRuntime.getTable() call? Or more broadly? If the former, is it 
better to fix this more locally in PhoenixRuntime (as it seems you originally 
tried to do)? Is there a better, cleaner fix? I can't help but feel like this 
is a hack (which is fine for now).
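
For readers following the thread, the name handling under discussion can be sketched standalone. This is a minimal illustration only: the '#' separator is taken from the error message in the quoted stack trace, and the class and method names below are hypothetical, not Phoenix's actual API.

```java
import java.util.Optional;

public class InheritedIndexNameSketch {
    // Hypothetical helper: given a name like "Schema.Index#Schema.View",
    // return the child-view part that should be resolved first. Resolving
    // the view is what loads the parent table's indexes.
    static Optional<String> viewToResolveFirst(String fullTableName, char separator) {
        int pos = fullTableName.indexOf(separator);
        if (pos < 0) {
            // No separator: an ordinary table or index name, resolve it directly.
            return Optional.empty();
        }
        return Optional.of(fullTableName.substring(pos + 1));
    }

    public static void main(String[] args) {
        // The failing case from the stack trace: the view half comes after '#'.
        System.out.println(viewToResolveFirst("Schema.Index#Schema.View", '#')); // Optional[Schema.View]
        System.out.println(viewToResolveFirst("Schema.Table", '#'));             // Optional.empty
    }
}
```

The sketch only captures the ordering James describes (resolve the view, then look up the inherited index); the real change also has to reset schemaName/tableName and the metadata result, as the patch fragments above show.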

> Looking up a parent index table of a tenant child view fails in 
> BaseColumnResolver createTableRef()
> ---
>
> Key: PHOENIX-4634
> URL: https://issues.apache.org/jira/browse/PHOENIX-4634
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4634-4.x-HBase-0.98.patch, 
> PHOENIX-4634-v2.patch, PHOENIX-4634-v3.patch
>
>
> If we are looking up a parent table index of a child view, we need to 
> resolve the view, which will load the parent table's indexes (instead of 
> trying to resolve the parent table index directly). 
>  
> {code:java}
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=Schema.Schema.Index#Schma.View
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:577)
> at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.(FromCompiler.java:391)
> at 
> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:228)
> at 
> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:206)
> at org.apache.phoenix.optimize.QueryOptimizer.addPlan(QueryOptimizer.java:226)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:146)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:103)
> at org.apache.phoenix.compile.DeleteCompiler.compile(DeleteCompiler.java:501)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeleteStatement.compilePlan(PhoenixStatement.java:770)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeleteStatement.compilePlan(PhoenixStatement.java:758)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:386)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:376)
> {code}




[jira] [Commented] (PHOENIX-4579) Add a config to conditionally create Phoenix meta tables on first client connection

2018-03-09 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16393088#comment-16393088
 ] 

James Taylor commented on PHOENIX-4579:
---

Can I assign this one to you, [~tdsilva], as it doesn't seem that Mujtaba is 
going to get back to it? There's also PHOENIX-4575, which I believe is ready to 
be committed (but requires this JIRA too).

> Add a config to conditionally create Phoenix meta tables on first client 
> connection
> ---
>
> Key: PHOENIX-4579
> URL: https://issues.apache.org/jira/browse/PHOENIX-4579
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Mujtaba Chohan
>Assignee: Mujtaba Chohan
>Priority: Major
> Attachments: PHOENIX-4579.patch
>
>
> Currently we create/modify Phoenix meta tables on the first client connection. 
> This JIRA adds a property to make that configurable (defaulting to true, the 
> current behavior).
> With this property set to false, it avoids the lockstep upgrade requirement 
> for all clients when changing meta properties via PHOENIX-4575, as the 
> property can be flipped back on once all the clients are upgraded.





Re: [VOTE] Include python-phoenixdb into Phoenix

2018-03-09 Thread rajeshb...@apache.org
+1

On Thu, Mar 8, 2018 at 9:36 PM, Josh Elser  wrote:

> +1
>
>
> On 3/8/18 7:02 AM, Ankit Singhal wrote:
>
>> Lukas Lalinsky has graciously agreed to contribute his work on python
>> library for accessing Phoenix to Apache Phoenix project.
>>
>>
>> The details of the project can be viewed at:-
>>
>> http://python-phoenixdb.readthedocs.io/en/latest/
>>
>>
>> Code of the project:-
>>
>> https://github.com/lalinsky/python-phoenixdb
>>
>>
>> Our last discussion on the same:-
>>
>> https://www.mail-archive.com/dev@phoenix.apache.org/msg45424.html
>>
>>
>> The IP clearance steps carried out for this code can be viewed at:-
>>
>> https://issues.apache.org/jira/browse/PHOENIX-4636
>>
>>
>> Please vote to accept this code into the project:
>>
>>
>> +1 Accept the code donation
>>
>> -1 Do not accept the code donation because...
>>
>>
>> Please note that this vote is to ensure that the PMC has carried out the
>> necessary legal oversights on the incoming code and fulfilled their legal
>> obligations to the ASF so only PMC votes are binding.
>>
>>
>> However, community members are still welcome to review the code and we
>> look
>> forward to working with the community to develop this code further in the
>> future.
>>
>>
>> Thanks,
>> Ankit Singhal
>>
>>


[jira] [Updated] (PHOENIX-4647) Column header doesn't handle optional arguments correctly

2018-03-09 Thread Shehzaad Nakhoda (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shehzaad Nakhoda updated PHOENIX-4647:
--
Description: 
SUBSTR(NAME, 1)
being rendered as 
SUBSTR(NAME, 1, )

in things like column headings.

For example:
0: jdbc:phoenix:> create table hello_table (ID DECIMAL PRIMARY KEY, NAME 
VARCHAR);
No rows affected (1.252 seconds)
0: jdbc:phoenix:> upsert into hello_table values(1, 'abc');
1 row affected (0.025 seconds)
0: jdbc:phoenix:> select substr(name, 1) from hello_table;
+--------------------+
| SUBSTR(NAME, 1, )  |
+--------------------+
| abc                |
+--------------------+

Looks to me like there's a bug - 
SUBSTR(NAME, 1) should be represented as SUBSTR(NAME, 1) not as SUBSTR(NAME, 1, 
)

  was:
SUBSTR(NAME, 1)
being rendered as 
SUBSTR(NAME, 1, 1)

in things like column headings.

For example:
0: jdbc:phoenix:> create table hello_table (ID DECIMAL PRIMARY KEY, NAME 
VARCHAR);
No rows affected (1.252 seconds)
0: jdbc:phoenix:> upsert into hello_table values(1, 'abc');
1 row affected (0.025 seconds)
0: jdbc:phoenix:> select substr(name, 1) from hello_table;
+--------------------+
| SUBSTR(NAME, 1, )  |
+--------------------+
| abc                |
+--------------------+

Looks to me like there's a bug - 
SUBSTR(NAME, 1) should be represented as SUBSTR(NAME, 1) not as SUBSTR(NAME, 1, 
)


> Column header doesn't handle optional arguments correctly
> -
>
> Key: PHOENIX-4647
> URL: https://issues.apache.org/jira/browse/PHOENIX-4647
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Shehzaad Nakhoda
>Priority: Major
>
> SUBSTR(NAME, 1)
> being rendered as 
> SUBSTR(NAME, 1, )
> in things like column headings.
> For example:
> 0: jdbc:phoenix:> create table hello_table (ID DECIMAL PRIMARY KEY, NAME 
> VARCHAR);
> No rows affected (1.252 seconds)
> 0: jdbc:phoenix:> upsert into hello_table values(1, 'abc');
> 1 row affected (0.025 seconds)
> 0: jdbc:phoenix:> select substr(name, 1) from hello_table;
> +--------------------+
> | SUBSTR(NAME, 1, )  |
> +--------------------+
> | abc                |
> +--------------------+
> Looks to me like there's a bug - 
> SUBSTR(NAME, 1) should be represented as SUBSTR(NAME, 1) not as SUBSTR(NAME, 
> 1, )





[jira] [Commented] (PHOENIX-4418) UPPER() and LOWER() functions should be locale-aware

2018-03-09 Thread Shehzaad Nakhoda (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16392673#comment-16392673
 ] 

Shehzaad Nakhoda commented on PHOENIX-4418:
---

[~tdsilva] thanks.

I've attached a patch file for the doc changes (on top of 
https://svn.apache.org/repos/asf/phoenix)

> UPPER() and LOWER() functions should be locale-aware
> 
>
> Key: PHOENIX-4418
> URL: https://issues.apache.org/jira/browse/PHOENIX-4418
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.13.0
>Reporter: Shehzaad Nakhoda
>Assignee: Shehzaad Nakhoda
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4418_v1.patch, PHOENIX-4418_v2.patch, 
> upper_lower_locale_doc.patch
>
>
> Correct conversion of a string to upper or lower case depends on Locale.
> Java's upper case and lower case conversion routines allow passing in a 
> locale.
> It should be possible to pass in a locale to UPPER() and LOWER() in Phoenix 
> so that locale-specific case conversion can be supported in Phoenix.
> See java.lang.String#toUpperCase()
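
The quoted description can be made concrete with the classic Turkish-I case, where the default-locale overloads give the wrong answer. A minimal sketch using only java.lang.String and java.util.Locale (how Phoenix would expose the locale argument in SQL is not specified here; only the Java behavior is shown):

```java
import java.util.Locale;

public class LocaleCaseDemo {
    public static void main(String[] args) {
        Locale turkish = new Locale("tr", "TR");
        // In Turkish, uppercase of 'i' is the dotted capital 'İ' (U+0130),
        // and lowercase of 'I' is the dotless 'ı' (U+0131), which the
        // no-argument toUpperCase()/toLowerCase() (default locale) miss.
        System.out.println("i".toUpperCase(turkish).equals("\u0130")); // true
        System.out.println("I".toLowerCase(turkish).equals("\u0131")); // true
    }
}
```

This is why a locale-aware UPPER()/LOWER() needs to reach the Locale-taking String overloads rather than the default-locale ones.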





[jira] [Updated] (PHOENIX-4418) UPPER() and LOWER() functions should be locale-aware

2018-03-09 Thread Shehzaad Nakhoda (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shehzaad Nakhoda updated PHOENIX-4418:
--
Attachment: upper_lower_locale_doc.patch

> UPPER() and LOWER() functions should be locale-aware
> 
>
> Key: PHOENIX-4418
> URL: https://issues.apache.org/jira/browse/PHOENIX-4418
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.13.0
>Reporter: Shehzaad Nakhoda
>Assignee: Shehzaad Nakhoda
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4418_v1.patch, PHOENIX-4418_v2.patch, 
> upper_lower_locale_doc.patch
>
>
> Correct conversion of a string to upper or lower case depends on Locale.
> Java's upper case and lower case conversion routines allow passing in a 
> locale.
> It should be possible to pass in a locale to UPPER() and LOWER() in Phoenix 
> so that locale-specific case conversion can be supported in Phoenix.
> See java.lang.String#toUpperCase()





[jira] [Commented] (PHOENIX-4647) Column header doesn't handle optional arguments correctly

2018-03-09 Thread Shehzaad Nakhoda (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16392580#comment-16392580
 ] 

Shehzaad Nakhoda commented on PHOENIX-4647:
---

[~jamestaylor] - the JIRA you requested.

> Column header doesn't handle optional arguments correctly
> -
>
> Key: PHOENIX-4647
> URL: https://issues.apache.org/jira/browse/PHOENIX-4647
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Shehzaad Nakhoda
>Priority: Major
>
> SUBSTR(NAME, 1)
> being rendered as 
> SUBSTR(NAME, 1, 1)
> in things like column headings.
> For example:
> 0: jdbc:phoenix:> create table hello_table (ID DECIMAL PRIMARY KEY, NAME 
> VARCHAR);
> No rows affected (1.252 seconds)
> 0: jdbc:phoenix:> upsert into hello_table values(1, 'abc');
> 1 row affected (0.025 seconds)
> 0: jdbc:phoenix:> select substr(name, 1) from hello_table;
> +--------------------+
> | SUBSTR(NAME, 1, )  |
> +--------------------+
> | abc                |
> +--------------------+
> Looks to me like there's a bug - 
> SUBSTR(NAME, 1) should be represented as SUBSTR(NAME, 1) not as SUBSTR(NAME, 
> 1, )





[jira] [Created] (PHOENIX-4647) Column header doesn't handle optional arguments correctly

2018-03-09 Thread Shehzaad Nakhoda (JIRA)
Shehzaad Nakhoda created PHOENIX-4647:
-

 Summary: Column header doesn't handle optional arguments correctly
 Key: PHOENIX-4647
 URL: https://issues.apache.org/jira/browse/PHOENIX-4647
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.0
Reporter: Shehzaad Nakhoda


SUBSTR(NAME, 1)
being rendered as 
SUBSTR(NAME, 1, 1)

in things like column headings.

For example:
0: jdbc:phoenix:> create table hello_table (ID DECIMAL PRIMARY KEY, NAME 
VARCHAR);
No rows affected (1.252 seconds)
0: jdbc:phoenix:> upsert into hello_table values(1, 'abc');
1 row affected (0.025 seconds)
0: jdbc:phoenix:> select substr(name, 1) from hello_table;
+--------------------+
| SUBSTR(NAME, 1, )  |
+--------------------+
| abc                |
+--------------------+

Looks to me like there's a bug - 
SUBSTR(NAME, 1) should be represented as SUBSTR(NAME, 1) not as SUBSTR(NAME, 1, 
)
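
One plausible shape of the fix, sketched outside Phoenix's code base (the helper below is hypothetical; Phoenix's actual header rendering lives in its expression toString logic): skip absent optional arguments instead of printing an empty slot.

```java
import java.util.Arrays;
import java.util.List;

public class FunctionHeaderSketch {
    // Hypothetical rendering helper: builds "NAME(arg1, arg2)" while
    // skipping optional arguments that were not supplied (null/empty),
    // so SUBSTR with two arguments never renders a trailing ", ".
    static String displayName(String name, List<String> args) {
        StringBuilder sb = new StringBuilder(name).append('(');
        for (String arg : args) {
            if (arg == null || arg.isEmpty()) {
                continue; // absent optional argument: leave it out entirely
            }
            if (sb.charAt(sb.length() - 1) != '(') {
                sb.append(", ");
            }
            sb.append(arg);
        }
        return sb.append(')').toString();
    }

    public static void main(String[] args) {
        // Buggy header was "SUBSTR(NAME, 1, )"; skipping the empty third
        // argument yields the expected "SUBSTR(NAME, 1)".
        System.out.println(displayName("SUBSTR", Arrays.asList("NAME", "1", "")));
    }
}
```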


