[jira] [Updated] (PHOENIX-4546) Projected table cannot be read through ProjectedColumnExpression

2018-02-02 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4546:
---
Attachment: PHOENIX-4546_v3.patch

> Projected table cannot be read through ProjectedColumnExpression
> 
>
> Key: PHOENIX-4546
> URL: https://issues.apache.org/jira/browse/PHOENIX-4546
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Romil Choksi
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 5.0.0, 4.14.0
>
> Attachments: PHOENIX-4546.patch, PHOENIX-4546_v1.patch, 
> PHOENIX-4546_v2.patch, PHOENIX-4546_v3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4490) Phoenix Spark Module doesn't pass in user properties to create connection

2018-02-02 Thread Josh Mahonin (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350498#comment-16350498
 ] 

Josh Mahonin commented on PHOENIX-4490:
---

FWIW, I think there should be a more elegant solution here. It would be nice if 
these sorts of parameters could be passed in as options to the DataFrame / 
Dataset builder, and then carried forward as needed.

As I recall, the Configuration object itself is _not_ Serializable, which is a 
big challenge for Spark, and why it gets re-created several times within the 
phoenix-spark module. Perhaps there's another solution for that problem we 
could leverage?
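
One thought: since {{Configuration}} implements {{Writable}}, Spark's 
{{SerializableWritable}} wrapper can carry it across serialization boundaries. 
A minimal, untested sketch (the property set below is just an example, not 
from this thread):

{code}
import org.apache.hadoop.conf.Configuration
import org.apache.spark.SerializableWritable

val conf = new Configuration()
conf.set("hbase.zookeeper.quorum", "zk-host") // example property only
val serConf = new SerializableWritable(conf)
// inside a task closure, serConf.value hands back a usable Configuration
{code}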

Glad there's a workaround, but if anyone has time for a patch to the underlying 
issue, that would be fantastic!

> Phoenix Spark Module doesn't pass in user properties to create connection
> -
>
> Key: PHOENIX-4490
> URL: https://issues.apache.org/jira/browse/PHOENIX-4490
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Karan Mehta
>Priority: Major
>
> Phoenix Spark module doesn't work perfectly in a Kerberos environment. This 
> is because whenever new {{PhoenixRDD}}s are built, they are always built with 
> new, default properties. The following piece of code in 
> {{PhoenixRelation}} is an example. This is the class used by Spark to create 
> a {{BaseRelation}} before executing a scan. 
> {code}
> new PhoenixRDD(
>   sqlContext.sparkContext,
>   tableName,
>   requiredColumns,
>   Some(buildFilter(filters)),
>   Some(zkUrl),
>   new Configuration(),
>   dateAsTimestamp
> ).toDataFrame(sqlContext).rdd
> {code}
> This works fine in most cases when the Spark code runs on the same 
> cluster as HBase, since the config object picks up properties from classpath 
> XML files. However, in an external environment we should use the user-provided 
> properties and merge them before creating any {{PhoenixRelation}} or 
> {{PhoenixRDD}}. As per my understanding, we should ideally provide properties 
> in the {{DefaultSource#createRelation()}} method.
> An example of where this fails: Spark tries to get the splits to optimize 
> MR performance for loading data from the table in the 
> {{PhoenixInputFormat#generateSplits()}} method. Ideally, it should get all 
> the config parameters from the {{JobContext}} being passed in, but it is 
> defaulted to {{new Configuration()}}, irrespective of what the user passes in. 
> Thus it fails to create a connection.
> [~jmahonin] [~maghamraviki...@gmail.com] 
> Any ideas or advice? Let me know if I am missing anything obvious here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4546) Projected table cannot be read through ProjectedColumnExpression

2018-02-02 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350238#comment-16350238
 ] 

Ankit Singhal commented on PHOENIX-4546:


Thanks [~sergey.soldatov]. The root causes of the above test failures are 
different, except for this one:

{code}
[ERROR] Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.514 s 
<<< FAILURE! - in org.apache.phoenix.end2end.DynamicFamilyIT
{code}

I have fixed that one; should it be good to go now?

> Projected table cannot be read through ProjectedColumnExpression
> 
>
> Key: PHOENIX-4546
> URL: https://issues.apache.org/jira/browse/PHOENIX-4546
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Romil Choksi
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 5.0.0, 4.14.0
>
> Attachments: PHOENIX-4546.patch, PHOENIX-4546_v1.patch, 
> PHOENIX-4546_v2.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[VOTE] Apache Phoenix 5.0.0-alpha for HBase 2.0 rc0

2018-02-02 Thread Josh Elser

Hello Everyone,

This is a call for a vote on Apache Phoenix 5.0.0-alpha rc0. This 
release targets only Apache HBase 2.0 and is known to lack some 
functionality compared to previous releases (e.g. transactional 
tables, Hive integration, full local indexing support). It is presented 
as-is in an attempt to encourage the community at large to get involved.


The RC is available at the standard location:

https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-5.0.0-alpha-HBase-2.0-rc0

RC0 is based on the following commit: 
a2053c1d3b64a9cc2f35b1f83faa54e421bb20f1


Signed with my key: 9E62822F4668F17B0972ADD9B7D5CD454677D66C, 
http://pgp.mit.edu/pks/lookup?op=get&search=0xB7D5CD454677D66C


Vote will be open for at least 72 hours (2018/02/05 1600GMT). Please vote:

[ ] +1 approve
[ ] +0 no opinion
[ ] -1 disapprove (and reason why)

Thanks,
The Apache Phoenix Team


[jira] [Created] (PHOENIX-4578) Potential HBase/ZK Connection leaks in phoenix-hive module

2018-02-02 Thread Karan Mehta (JIRA)
Karan Mehta created PHOENIX-4578:


 Summary: Potential HBase/ZK Connection leaks in phoenix-hive module
 Key: PHOENIX-4578
 URL: https://issues.apache.org/jira/browse/PHOENIX-4578
 Project: Phoenix
  Issue Type: Bug
Reporter: Karan Mehta


This issue is similar to PHOENIX-4489 and PHOENIX-4503. {{HConnection}} objects 
are not closed in the {{PhoenixInputFormat#generateSplits()}} method. They can 
be cleaned up by GC, but if multiple connections are created in a short span of 
time, this can result in leaks and timeouts.

A similar issue is found in the phoenix-hive module, as pointed out by 
[~jmahonin].
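
A minimal sketch of the expected cleanup, assuming the HBase 1.x client API 
(an illustration, not the actual patch):

{code}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.TableName
import org.apache.hadoop.hbase.client.ConnectionFactory

def generateSplitsSafely(conf: Configuration, table: String): Unit = {
  val conn = ConnectionFactory.createConnection(conf)
  try {
    val locator = conn.getRegionLocator(TableName.valueOf(table))
    val startEndKeys = locator.getStartEndKeys // region boundaries drive the splits
    // ... build input splits from startEndKeys ...
  } finally {
    conn.close() // release HBase/ZK resources instead of waiting for GC
  }
}
{code}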



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4546) Projected table cannot be read through ProjectedColumnExpression

2018-02-02 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350553#comment-16350553
 ] 

James Taylor commented on PHOENIX-4546:
---

Is this an issue for 4.x too? Not sure I understand the problem, though. Is it 
specific to local indexes?

> Projected table cannot be read through ProjectedColumnExpression
> 
>
> Key: PHOENIX-4546
> URL: https://issues.apache.org/jira/browse/PHOENIX-4546
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Romil Choksi
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 5.0.0, 4.14.0
>
> Attachments: PHOENIX-4546.patch, PHOENIX-4546_v1.patch, 
> PHOENIX-4546_v2.patch, PHOENIX-4546_v3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4490) Phoenix Spark Module doesn't pass in user properties to create connection

2018-02-02 Thread Karan Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350718#comment-16350718
 ] 

Karan Mehta commented on PHOENIX-4490:
--

Thanks [~highfei2...@126.com] for the workaround. However, the problem that I 
want to highlight is different. If the property files are available on the 
classpath, then they can be picked up by {{HBaseConfiguration.create()}}. We 
still need your workaround for adding krb5.conf and keytab files for secure 
connections. However, in our use case, all these properties are generated as 
part of the code and get passed around everywhere. The problem here is that 
the phoenix-spark module ignores those properties and creates a new 
{{Configuration}} object every time.

[~jmahonin] Can you throw some more light on the
bq. Configuration object itself is not Serializable
thing? We are not sending the properties over the wire, and most of these 
properties are only required for establishing Kerberos-secured connections.
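
A minimal sketch of the merge I have in mind, assuming the user properties 
arrive as a simple map (an illustration, not the actual fix):

{code}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.HBaseConfiguration

def mergedConf(userProps: Map[String, String]): Configuration = {
  val conf = HBaseConfiguration.create() // picks up hbase-site.xml etc. from the classpath
  userProps.foreach { case (k, v) => conf.set(k, v) } // user-provided values win
  conf
}
{code}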

> Phoenix Spark Module doesn't pass in user properties to create connection
> -
>
> Key: PHOENIX-4490
> URL: https://issues.apache.org/jira/browse/PHOENIX-4490
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Karan Mehta
>Priority: Major
>
> Phoenix Spark module doesn't work perfectly in a Kerberos environment. This 
> is because whenever new {{PhoenixRDD}}s are built, they are always built with 
> new, default properties. The following piece of code in 
> {{PhoenixRelation}} is an example. This is the class used by Spark to create 
> a {{BaseRelation}} before executing a scan. 
> {code}
> new PhoenixRDD(
>   sqlContext.sparkContext,
>   tableName,
>   requiredColumns,
>   Some(buildFilter(filters)),
>   Some(zkUrl),
>   new Configuration(),
>   dateAsTimestamp
> ).toDataFrame(sqlContext).rdd
> {code}
> This works fine in most cases when the Spark code runs on the same 
> cluster as HBase, since the config object picks up properties from classpath 
> XML files. However, in an external environment we should use the user-provided 
> properties and merge them before creating any {{PhoenixRelation}} or 
> {{PhoenixRDD}}. As per my understanding, we should ideally provide properties 
> in the {{DefaultSource#createRelation()}} method.
> An example of where this fails: Spark tries to get the splits to optimize 
> MR performance for loading data from the table in the 
> {{PhoenixInputFormat#generateSplits()}} method. Ideally, it should get all 
> the config parameters from the {{JobContext}} being passed in, but it is 
> defaulted to {{new Configuration()}}, irrespective of what the user passes in. 
> Thus it fails to create a connection.
> [~jmahonin] [~maghamraviki...@gmail.com] 
> Any ideas or advice? Let me know if I am missing anything obvious here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Phoenix's dist.a.o limit is 400MB

2018-02-02 Thread Josh Elser

https://issues.apache.org/jira/browse/INFRA-15971

The 5.0.0-alpha rc0 crushed this limit with a whopping 410MB 
bin-tarball. INFRA temporarily increased this limit for me to commit the 
RC to the staging area, but I've asked them to increase our limit to 500MB.


This is probably a good time for us to try to trim some fat, e.g. the 
phoenix-pig jar is larger than the phoenix-client jar.


[jira] [Commented] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-02-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350909#comment-16350909
 ] 

ASF GitHub Bot commented on PHOENIX-4231:
-

GitHub user ChinmaySKulkarni opened a pull request:

https://github.com/apache/phoenix/pull/292

PHOENIX-4231: Support restriction of remote UDF load sources

- Added the ability to add jars from an HDFS URI.
- Restricted loading of jars to only the hbase.dynamic.jars.dir
directory.

Testing done:
- Tested that the user is able to add jars from an HDFS URI reachable on the
network, as well as from the local filesystem.
- Tested that the user is unable to create a function whose jar is
loaded from any directory other than hbase.dynamic.jars.dir.
- Tested that HDFS URIs without scheme and authority work.
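
A minimal sketch of the kind of check this adds (names and wording below are 
hypothetical, not the actual patch):

{code}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path

def validateJarUri(conf: Configuration, jarUri: String): Unit = {
  val allowedDir = new Path(conf.get("hbase.dynamic.jars.dir"))
  val fs = allowedDir.getFileSystem(conf)
  val qualifiedJar = fs.makeQualified(new Path(jarUri)) // fills in missing scheme/authority
  val qualifiedDir = fs.makeQualified(allowedDir)
  if (!qualifiedJar.toString.startsWith(qualifiedDir.toString))
    throw new IllegalArgumentException(s"Jar $qualifiedJar is not under $qualifiedDir")
}
{code}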

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ChinmaySKulkarni/phoenix PHOENIX-4231

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/292.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #292


commit 9351e1b4d4a5741bbf063f0d6cd31f14cfa1e6b6
Author: Chinmay Kulkarni 
Date:   2018-02-02T20:16:47Z

PHOENIX-4231: Support restriction of remote UDF load sources

- Added the ability to add jars from an HDFS URI.
- Restricted loading of jars to only the hbase.dynamic.jars.dir
directory.

Testing done:
- Tested that the user is able to add jars from an HDFS URI reachable on the
network, as well as from the local filesystem.
- Tested that the user is unable to create a function whose jar is
loaded from any directory other than hbase.dynamic.jars.dir.
- Tested that HDFS URIs without scheme and authority work.




> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for similar restriction via configuration for jars loaded 
> via hdfs:// URIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] phoenix pull request #292: PHOENIX-4231: Support restriction of remote UDF l...

2018-02-02 Thread ChinmaySKulkarni
GitHub user ChinmaySKulkarni opened a pull request:

https://github.com/apache/phoenix/pull/292

PHOENIX-4231: Support restriction of remote UDF load sources

- Added the ability to add jars from an HDFS URI.
- Restricted loading of jars to only the hbase.dynamic.jars.dir
directory.

Testing done:
- Tested that the user is able to add jars from an HDFS URI reachable on the
network, as well as from the local filesystem.
- Tested that the user is unable to create a function whose jar is
loaded from any directory other than hbase.dynamic.jars.dir.
- Tested that HDFS URIs without scheme and authority work.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ChinmaySKulkarni/phoenix PHOENIX-4231

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/292.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #292


commit 9351e1b4d4a5741bbf063f0d6cd31f14cfa1e6b6
Author: Chinmay Kulkarni 
Date:   2018-02-02T20:16:47Z

PHOENIX-4231: Support restriction of remote UDF load sources

- Added the ability to add jars from an HDFS URI.
- Restricted loading of jars to only the hbase.dynamic.jars.dir
directory.

Testing done:
- Tested that the user is able to add jars from an HDFS URI reachable on the
network, as well as from the local filesystem.
- Tested that the user is unable to create a function whose jar is
loaded from any directory other than hbase.dynamic.jars.dir.
- Tested that HDFS URIs without scheme and authority work.




---


[GitHub] phoenix issue #292: PHOENIX-4231: Support restriction of remote UDF load sou...

2018-02-02 Thread ChinmaySKulkarni
Github user ChinmaySKulkarni commented on the issue:

https://github.com/apache/phoenix/pull/292
  
@apurtell @twdsilva @jtaylor-sfdc please review. Thanks.


---


[jira] [Commented] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-02-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350953#comment-16350953
 ] 

ASF GitHub Bot commented on PHOENIX-4231:
-

Github user JamesRTaylor commented on the issue:

https://github.com/apache/phoenix/pull/292
  
Thanks for the patch, @ChinmaySKulkarni. The best person to review is 
@chrajeshbabu, who originally added support for UDFs.


> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for similar restriction via configuration for jars loaded 
> via hdfs:// URIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] phoenix issue #292: PHOENIX-4231: Support restriction of remote UDF load sou...

2018-02-02 Thread JamesRTaylor
Github user JamesRTaylor commented on the issue:

https://github.com/apache/phoenix/pull/292
  
Thanks for the patch, @ChinmaySKulkarni. The best person to review is 
@chrajeshbabu, who originally added support for UDFs.


---


[jira] [Created] (PHOENIX-4579) Add a config to conditionally create Phoenix meta tables on first client connection

2018-02-02 Thread Mujtaba Chohan (JIRA)
Mujtaba Chohan created PHOENIX-4579:
---

 Summary: Add a config to conditionally create Phoenix meta tables 
on first client connection
 Key: PHOENIX-4579
 URL: https://issues.apache.org/jira/browse/PHOENIX-4579
 Project: Phoenix
  Issue Type: New Feature
Reporter: Mujtaba Chohan


Currently we create/modify the Phoenix meta tables on the first client 
connection. This JIRA adds a property to make that behavior configurable 
(defaulting to true, as currently implemented).

With this property set to false, we avoid a lockstep upgrade requirement for 
all clients when changing meta properties via PHOENIX-4575, since the property 
can be flipped back on once all the clients are upgraded.
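
A minimal sketch of how a client might opt out, assuming a boolean connection 
property (the property name below is hypothetical):

{code}
import java.sql.DriverManager
import java.util.Properties

val props = new Properties()
props.setProperty("phoenix.client.createMetaTables", "false") // hypothetical name
val conn = DriverManager.getConnection("jdbc:phoenix:zk-host", props)
{code}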



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-3941) Filter regions to scan for local indexes based on data table leading pk filter conditions

2018-02-02 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3941:
--
Attachment: PHOENIX-3941_v2.patch

> Filter regions to scan for local indexes based on data table leading pk 
> filter conditions
> -
>
> Key: PHOENIX-3941
> URL: https://issues.apache.org/jira/browse/PHOENIX-3941
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
>  Labels: SFDC, localIndex
> Fix For: 4.14.0
>
> Attachments: PHOENIX-3941_v1.patch, PHOENIX-3941_v2.patch
>
>
> Had a good offline conversation with [~ndimiduk] at PhoenixCon about local 
> indexes. Depending on the query, we can often prune the regions we need 
> to scan over based on the where conditions against the data table pk. For 
> example, with a multi-tenant table, we only need to scan the regions that are 
> prefixed by the tenant ID.
> We can easily get this information from the compilation of the query against 
> the data table (which we always do), through the 
> statementContext.getScanRanges() structure. We'd just want to keep a pointer 
> to the data table QueryPlan from the local index QueryPlan.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4231) Support restriction of remote UDF load sources

2018-02-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350911#comment-16350911
 ] 

ASF GitHub Bot commented on PHOENIX-4231:
-

Github user ChinmaySKulkarni commented on the issue:

https://github.com/apache/phoenix/pull/292
  
@apurtell @twdsilva @jtaylor-sfdc please review. Thanks.


> Support restriction of remote UDF load sources 
> ---
>
> Key: PHOENIX-4231
> URL: https://issues.apache.org/jira/browse/PHOENIX-4231
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>Priority: Major
>
> When allowUserDefinedFunctions is true, users can load UDFs remotely via a 
> jar file from any HDFS filesystem reachable on the network. The setting 
> hbase.dynamic.jars.dir can be used to restrict locations for jar loading but 
> is only applied to jars loaded from the local filesystem.  We should 
> implement support for similar restriction via configuration for jars loaded 
> via hdfs:// URIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] Apache Phoenix 5.0.0-alpha for HBase 2.0 rc0

2018-02-02 Thread Artem Ervits
-1 (nonbinding)

md5 binary: OK

Centos 7.4
jdk: 1.8.0_161
hadoop 2.7.5
hbase 2.0

ran example
$PHOENIX_HOME/bin/psql.py localhost:2181:/hbase-unsecure
$PHOENIX_HOME/examples/WEB_STAT.sql $PHOENIX_HOME/examples/WEB_STAT.csv
$PHOENIX_HOME/examples/WEB_STAT_QUERIES.sql

OK

ran example in https://phoenix.apache.org/Phoenix-in-15-minutes-or-less.html

OK

tried to run performance.py script

getting the same results as in
https://issues.apache.org/jira/browse/PHOENIX-4510

[vagrant@hadoop ~]$ $PHOENIX_HOME/bin/performance.py
localhost:2181:/hbase-unsecure 1000
Phoenix Performance Evaluation Script 1.0
-

Creating performance table...
18/02/03 04:23:48 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
no rows upserted
Time: 4.59 sec(s)

Query # 1 - Count - SELECT COUNT(1) FROM PERFORMANCE_1000;
Query # 2 - Group By First PK - SELECT HOST FROM PERFORMANCE_1000 GROUP BY
HOST;
Query # 3 - Group By Second PK - SELECT DOMAIN FROM PERFORMANCE_1000 GROUP
BY DOMAIN;
Query # 4 - Truncate + Group By - SELECT TRUNC(DATE,'DAY') DAY FROM
PERFORMANCE_1000 GROUP BY TRUNC(DATE,'DAY');
Query # 5 - Filter + Count - SELECT COUNT(1) FROM PERFORMANCE_1000 WHERE
CORE<10;

Generating and upserting data...
Error: Invalid or corrupt jarfile /tmp/data_akuell.csv


NOT OK



sqlline still shows version 4.13

Connected to: Phoenix (version 4.13)
Driver: PhoenixEmbeddedDriver (version 4.13)

NOT OK

On Fri, Feb 2, 2018 at 10:58 AM, Josh Elser  wrote:

> Hello Everyone,
>
> This is a call for a vote on Apache Phoenix 5.0.0-alpha rc0. This release
> targets only Apache HBase 2.0 and is known to lack some functionality
> compared to previous releases (e.g. transactional tables, Hive integration,
> full local indexing support). It is presented as-is in an attempt to
> encourage the community at large to get involved.
>
> The RC is available at the standard location:
>
> https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-5.0.0-alpha-HBase-2.0-rc0
>
> RC0 is based on the following commit: a2053c1d3b64a9cc2f35b1f83faa54e421bb20f1
>
> Signed with my key: 9E62822F4668F17B0972ADD9B7D5CD454677D66C,
> http://pgp.mit.edu/pks/lookup?op=get&search=0xB7D5CD454677D66C
>
> Vote will be open for at least 72 hours (2018/02/05 1600GMT). Please vote:
>
> [ ] +1 approve
> [ ] +0 no opinion
> [ ] -1 disapprove (and reason why)
>
> Thanks,
> The Apache Phoenix Team
>


[jira] [Comment Edited] (PHOENIX-3571) Potential divide by zero exception in LongDivideExpression

2018-02-02 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954966#comment-15954966
 ] 

Ted Yu edited comment on PHOENIX-3571 at 2/2/18 7:04 PM:
-

Assertion for zero denominator is fine. 


was (Author: yuzhih...@gmail.com):
Assertion for zero denominator is fine . 

> Potential divide by zero exception in LongDivideExpression
> --
>
> Key: PHOENIX-3571
> URL: https://issues.apache.org/jira/browse/PHOENIX-3571
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Ted Yu
>Priority: Minor
>
> Running SaltedIndexIT, I saw the following:
> {code}
> ===> 
> testExpressionThrowsException(org.apache.phoenix.end2end.index.IndexExpressionIT)
>  starts
> 2017-01-05 19:42:48,992 INFO  [main] client.HBaseAdmin: Created I
> 2017-01-05 19:42:48,996 INFO  [main] schema.MetaDataClient: Created index I 
> at 1483645369000
> 2017-01-05 19:42:49,066 WARN  [hconnection-0x5a45c218-shared--pool52-t6] 
> client.AsyncProcess: #38, table=T, attempt=1/35 failed=1ops, last exception: 
> org.apache.phoenix.hbase.index.builder.IndexBuildingFailureException: 
> org.apache.phoenix.hbase.index.builder.IndexBuildingFailureException: Failed 
> to build index for unexpected reason!
>   at 
> org.apache.phoenix.hbase.index.util.IndexManagementUtil.rethrowIndexingException(IndexManagementUtil.java:183)
>   at org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:204)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:974)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1660)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1734)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1692)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:970)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3218)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2984)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2926)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:718)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:680)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2065)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32393)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2141)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:238)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:218)
> Caused by: java.lang.ArithmeticException: / by zero
>   at 
> org.apache.phoenix.expression.LongDivideExpression.evaluate(LongDivideExpression.java:50)
>   at 
> org.apache.phoenix.index.IndexMaintainer.buildRowKey(IndexMaintainer.java:521)
>   at 
> org.apache.phoenix.index.IndexMaintainer.buildUpdateMutation(IndexMaintainer.java:859)
>   at 
> org.apache.phoenix.index.PhoenixIndexCodec.getIndexUpserts(PhoenixIndexCodec.java:76)
>   at 
> org.apache.phoenix.hbase.index.covered.NonTxIndexBuilder.addCurrentStateMutationsForBatch(NonTxIndexBuilder.java:288)
>   at 
> org.apache.phoenix.hbase.index.covered.NonTxIndexBuilder.addUpdateForGivenTimestamp(NonTxIndexBuilder.java:256)
>   at 
> org.apache.phoenix.hbase.index.covered.NonTxIndexBuilder.addMutationsForBatch(NonTxIndexBuilder.java:222)
>   at 
> org.apache.phoenix.hbase.index.covered.NonTxIndexBuilder.batchMutationAndAddUpdates(NonTxIndexBuilder.java:109)
>   at 
> org.apache.phoenix.hbase.index.covered.NonTxIndexBuilder.getIndexUpdate(NonTxIndexBuilder.java:71)
>   at 
> org.apache.phoenix.hbase.index.builder.IndexBuildManager$1.call(IndexBuildManager.java:136)
>   at 
> org.apache.phoenix.hbase.index.builder.IndexBuildManager$1.call(IndexBuildManager.java:132)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:253)
>   at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:56)
>   at 
> org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submit(BaseTaskRunner.java:58)
>   at 
> org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submitUninterruptible(BaseTaskRunner.java:99)
>   at 
> 

[jira] [Commented] (PHOENIX-3941) Filter regions to scan for local indexes based on data table leading pk filter conditions

2018-02-02 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350902#comment-16350902
 ] 

James Taylor commented on PHOENIX-3941:
---

[~maryannxue] - would you mind taking a look at this v2 patch? I'm trying to 
keep the data plan with the query plan used for an index (so we can potentially 
prune local index regions when there are leading PK columns in common between 
the data table and index table). For joins, I'm losing the 
QueryCompiler.dataPlan along the way and I'm not sure how to fix it.

> Filter regions to scan for local indexes based on data table leading pk 
> filter conditions
> -
>
> Key: PHOENIX-3941
> URL: https://issues.apache.org/jira/browse/PHOENIX-3941
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
>  Labels: SFDC, localIndex
> Fix For: 4.14.0
>
> Attachments: PHOENIX-3941_v1.patch, PHOENIX-3941_v2.patch
>
>
> Had a good offline conversation with [~ndimiduk] at PhoenixCon about local 
> indexes. Depending on the query, we can often prune the regions we need 
> to scan over based on the where conditions against the data table pk. For 
> example, with a multi-tenant table, we only need to scan the regions that are 
> prefixed by the tenant ID.
> We can easily get this information from the compilation of the query against 
> the data table (which we always do), through the 
> statementContext.getScanRanges() structure. We'd just want to keep a pointer 
> to the data table QueryPlan from the local index QueryPlan.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4577) make-rc.sh fails trying to copy the inline argparse into bin/

2018-02-02 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved PHOENIX-4577.
-
Resolution: Fixed

Needed it for the 5.0.0-alpha rc0. Kicked it to the 4.x branches too.

I think it might be an OS X-ism (not sure how the 4.13 CDH build worked without 
it).

> make-rc.sh fails trying to copy the inline argparse into bin/
> -
>
> Key: PHOENIX-4577
> URL: https://issues.apache.org/jira/browse/PHOENIX-4577
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 5.0.0, 4.14.0
>
> Attachments: PHOENIX-4577.patch
>
>
> Silly little fix. Need to add a {{-r}} to the {{cp}} call.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Anonymous survey: Apache HBase 1.x Usage

2018-02-02 Thread Andrew Purtell
Please take this anonymous survey to let us know what version of Apache HBase
1.x you are using in production now or are planning to use in production in
the next year or so.

Multiple choices are allowed.

There is no "I'm not using 1.x" choice. Consider upgrading! (smile)

https://www.surveymonkey.com/r/8WQ8QY6


-- 
Best regards,
Andrew


[jira] [Created] (PHOENIX-4580) Upgrade to Tephra 0.14.0-incubating for HBase 2.0 support

2018-02-02 Thread Josh Elser (JIRA)
Josh Elser created PHOENIX-4580:
---

 Summary: Upgrade to Tephra 0.14.0-incubating for HBase 2.0 
support
 Key: PHOENIX-4580
 URL: https://issues.apache.org/jira/browse/PHOENIX-4580
 Project: Phoenix
  Issue Type: Task
Reporter: Josh Elser
Assignee: Ankit Singhal
 Fix For: 5.0.0


TEPHRA-272 has the necessary changes that Phoenix needs but we need to get a 
release from the Tephra folks first.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4574) Disable failing local indexing ITs

2018-02-02 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved PHOENIX-4574.
-
Resolution: Fixed

> Disable failing local indexing ITs
> --
>
> Key: PHOENIX-4574
> URL: https://issues.apache.org/jira/browse/PHOENIX-4574
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4574.2.patch, PHOENIX-4574.patch
>
>
> [~rajeshbabu] still has some work ongoing to fix up local indexing for HBase 
> 2.
> Temporarily disable related tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-3941) Filter regions to scan for local indexes based on data table leading pk filter conditions

2018-02-02 Thread Maryann Xue (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16351017#comment-16351017
 ] 

Maryann Xue commented on PHOENIX-3941:
--

[~jamestaylor], unlike single queries, join queries are optimized through 
{{JoinCompiler#optimize()}}, which in turn calls {{QueryOptimizer#optimize()}} 
for each join table to find a *locally* optimal plan. So by the time we 
compile the join query, we are already working on an index-replaced query. A 
straightforward solution might be to have the {{JoinCompiler#optimize()}} 
method return a map from tableRef to dataPlan, which QueryCompiler can use 
later on to fill in the information.

Would you like me to do this part of the job? If yes, would you mind waiting 
for PHOENIX-1556 to get in first?

BTW, finding a locally optimal plan for each of the join tables was the best 
we could do back when we started and there was no stats info. Now that we have 
stats and the cost model ready, we could find a globally optimal plan for join 
queries. However, the compile time could start to explode as the number of 
join tables or of indices per table goes up, and it would also require quite 
an amount of work; still, it's something we can keep in mind.
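
A minimal sketch of the suggested shape, with stand-in types (the real types 
are Phoenix's TableRef and QueryPlan, and the method is 
{{JoinCompiler#optimize()}}):

{code}
case class TableRef(name: String)
case class QueryPlan(table: TableRef, usesIndex: Boolean)

// Optimize each join table locally, but remember the original data plan so
// QueryCompiler can re-attach it to the index-replaced plan later on.
def optimizeJoinTables(dataPlans: Seq[QueryPlan],
                       chooseBest: QueryPlan => QueryPlan): Map[TableRef, QueryPlan] =
  dataPlans.map(dp => chooseBest(dp).table -> dp).toMap
{code}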

> Filter regions to scan for local indexes based on data table leading pk 
> filter conditions
> -
>
> Key: PHOENIX-3941
> URL: https://issues.apache.org/jira/browse/PHOENIX-3941
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
>  Labels: SFDC, localIndex
> Fix For: 4.14.0
>
> Attachments: PHOENIX-3941_v1.patch, PHOENIX-3941_v2.patch
>
>
> Had a good offline conversation with [~ndimiduk] at PhoenixCon about local 
> indexes. Depending on the query, we can often prune the regions we need 
> to scan over based on the where conditions against the data table pk. For 
> example, with a multi-tenant table, we only need to scan the regions that are 
> prefixed by the tenant ID.
> We can easily get this information from the compilation of the query against 
> the data table (which we always do), through the 
> statementContext.getScanRanges() structure. We'd just want to keep a pointer 
> to the data table QueryPlan from the local index QueryPlan.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4581) phoenix-spark is not pushing timestamp filter correctly

2018-02-02 Thread Rama Mullapudi (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rama Mullapudi updated PHOENIX-4581:

Description: 
phoenix-spark is not pushing timestamp filter correctly

 

val tblDF = 
spark.sqlContext.read.format("org.apache.phoenix.spark").option("table", 
"ORDER_LINE").option("zkUrl" , 
"host:2181:/hbase-secure").load().filter(col("CREATE_TMS").gt(lit(current_timestamp()))).show()

Query being sent to Phoenix:

select * FROM ORDER_LINE WHERE ( "CREATE_TMS" > 2018-02-02 20:06:51.545)

As the timestamp string does not have quotes, the query fails to run.

 

 

 

  was:
phoenix-spark is not pushing timestamp filter correctly

 

val tblDF = 
spark.sqlContext.read.format("org.apache.phoenix.spark").option("table", 
"ORDER_LINE").option("zkUrl" , 
"suhadoomgrqa001,suhadoomgrqa002,suhadoomgrqa003:2181:/hbase-secure").load().filter(col("CREATE_TMS").gt(lit(current_timestamp()))).show()

Query being sent to Phoenix:

select * FROM ORDER_LINE WHERE ( "CREATE_TMS" > 2018-02-02 20:06:51.545)

As the timestamp string does not have quotes, the query fails to run.

 

 

 


> phoenix-spark is not pushing timestamp filter correctly
> ---
>
> Key: PHOENIX-4581
> URL: https://issues.apache.org/jira/browse/PHOENIX-4581
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rama Mullapudi
>Priority: Major
>
> phoenix-spark is not pushing timestamp filter correctly
>  
> val tblDF = 
> spark.sqlContext.read.format("org.apache.phoenix.spark").option("table", 
> "ORDER_LINE").option("zkUrl" , 
> "host:2181:/hbase-secure").load().filter(col("CREATE_TMS").gt(lit(current_timestamp()))).show()
>  
> Query being sent to Phoenix:
> select * FROM ORDER_LINE WHERE ( "CREATE_TMS" > 2018-02-02 20:06:51.545)
>  
> As the timestamp string does not have quotes, the query fails to run.
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4581) phoenix-spark is not pushing timestamp filter correctly

2018-02-02 Thread Rama Mullapudi (JIRA)
Rama Mullapudi created PHOENIX-4581:
---

 Summary: phoenix-spark is not pushing timestamp filter correctly
 Key: PHOENIX-4581
 URL: https://issues.apache.org/jira/browse/PHOENIX-4581
 Project: Phoenix
  Issue Type: Bug
Reporter: Rama Mullapudi


phoenix-spark is not pushing timestamp filter correctly

 

val tblDF = 
spark.sqlContext.read.format("org.apache.phoenix.spark").option("table", 
"ORDER_LINE").option("zkUrl" , 
"suhadoomgrqa001,suhadoomgrqa002,suhadoomgrqa003:2181:/hbase-secure").load().filter(col("CREATE_TMS").gt(lit(current_timestamp()))).show()

Query being sent to Phoenix:

select * FROM ORDER_LINE WHERE ( "CREATE_TMS" > 2018-02-02 20:06:51.545)

As the timestamp string does not have quotes, the query fails to run.
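
A minimal sketch of the missing literal handling, assuming the pushdown code 
renders filter values into SQL text (the helper below is modeled on 
phoenix-spark's value compilation but is an illustration, not the actual fix):

{code}
def compileValue(value: Any): String = value match {
  case ts: java.sql.Timestamp => s"TO_TIMESTAMP('${ts.toString}')" // quoted and parseable by Phoenix
  case s: String              => s"'$s'"
  case other                  => other.toString
}
{code}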

 

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4581) phoenix-spark is not pushing timestamp filter correctly

2018-02-02 Thread Rama Mullapudi (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rama Mullapudi updated PHOENIX-4581:

Description: 
phoenix-spark is not pushing timestamp filter correctly

 

val tblDF = spark.sqlContext.read.format("org.apache.phoenix.spark")

.option("table", "ORDER_LINE").option("zkUrl" , 
"host:2181:/hbase-secure").load()

.filter(col("CREATE_TMS").gt(lit(current_timestamp()))).show()

Query being sent to Phoenix:

select * FROM ORDER_LINE WHERE ( "CREATE_TMS" > 2018-02-02 20:06:51.545)

As the timestamp string does not have quotes, the query fails to run.

 

 

 

  was:
phoenix-spark is not pushing timestamp filter correctly

 

val tblDF = 
spark.sqlContext.read.format("org.apache.phoenix.spark").option("table", 
"ORDER_LINE").option("zkUrl" , 
"host:2181:/hbase-secure").load().filter(col("CREATE_TMS").gt(lit(current_timestamp()))).show()

Query being sent to Phoenix:

select * FROM ORDER_LINE WHERE ( "CREATE_TMS" > 2018-02-02 20:06:51.545)

As the timestamp string does not have quotes, the query fails to run.

 

 

 


> phoenix-spark is not pushing timestamp filter correctly
> ---
>
> Key: PHOENIX-4581
> URL: https://issues.apache.org/jira/browse/PHOENIX-4581
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rama Mullapudi
>Priority: Major
>
> phoenix-spark is not pushing timestamp filter correctly
>  
> val tblDF = spark.sqlContext.read.format("org.apache.phoenix.spark")
> .option("table", "ORDER_LINE").option("zkUrl" , 
> "host:2181:/hbase-secure").load()
> .filter(col("CREATE_TMS").gt(lit(current_timestamp()))).show()
>  
> Query being sent to Phoenix:
> select * FROM ORDER_LINE WHERE ( "CREATE_TMS" > 2018-02-02 20:06:51.545)
>  
> As the timestamp string does not have quotes, the query fails to run.
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4423) Phoenix-hive compilation broken on >=Hive 2.3

2018-02-02 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16350998#comment-16350998
 ] 

Josh Elser commented on PHOENIX-4423:
-

Need to revert PHOENIX-4573 before resolving this.

> Phoenix-hive compilation broken on >=Hive 2.3
> -
>
> Key: PHOENIX-4423
> URL: https://issues.apache.org/jira/browse/PHOENIX-4423
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4423.002.patch
>
>
> HIVE-15167 removed an interface which we're using in Phoenix, which obviously 
> breaks compilation. We'll need to figure out how to work with Hive 1.x, <2.3.0, 
> and >=2.3.0.
> FYI [~sergey.soldatov]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4497) Fix Local Index IT tests

2018-02-02 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16351003#comment-16351003
 ] 

Josh Elser commented on PHOENIX-4497:
-

Need to make sure PHOENIX-4574 gets reverted when all tests are passing.

> Fix Local Index IT tests
> 
>
> Key: PHOENIX-4497
> URL: https://issues.apache.org/jira/browse/PHOENIX-4497
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4533) Phoenix Query Server should not use SPNEGO principal to proxy user requests

2018-02-02 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16351008#comment-16351008
 ] 

Josh Elser commented on PHOENIX-4533:
-

Thanks, Lev!

Let me take a look and run through the tests locally.

> Phoenix Query Server should not use SPNEGO principal to proxy user requests
> ---
>
> Key: PHOENIX-4533
> URL: https://issues.apache.org/jira/browse/PHOENIX-4533
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lev Bronshtein
>Assignee: Lev Bronshtein
>Priority: Minor
> Attachments: PHOENIX-4533.1.patch
>
>
> Currently the HTTP/ principal is used by various components in the Hadoop 
> ecosystem to perform SPNEGO authentication.  Since there can only be one 
> HTTP/ principal per host, even outside of the Hadoop ecosystem, the keytab 
> containing key material for the local HTTP/ principal is shared among a few 
> applications.  With so many applications having access to the HTTP/ 
> credentials, this increases the chances of an attack on the proxy-user 
> capabilities of Hadoop.  This JIRA proposes that two different keytabs be 
> used to
> 1. Authenticate kerberized web requests
> 2. Communicate with the Phoenix back end



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4581) phoenix-spark is not pushing timestamp filter correctly

2018-02-02 Thread Rama Mullapudi (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rama Mullapudi updated PHOENIX-4581:

Description: 
phoenix-spark is not pushing timestamp filter correctly

 

val tblDF = spark.sqlContext.read.format("org.apache.phoenix.spark")

.option("table", "tablename").option("zkUrl" , "host:2181:/hbase-secure").load()

.filter(col("CREATE_TMS").gt(lit(current_timestamp()))).show()

Query being sent to Phoenix:

select * FROM tablename WHERE ( "CREATE_TMS" > 2018-02-02 20:06:51.545)

As the timestamp string does not have quotes, the query fails to run.

 

 

 

  was:
phoenix-spark is not pushing timestamp filter correctly

 

val tblDF = spark.sqlContext.read.format("org.apache.phoenix.spark")

.option("table", "tablename").option("zkUrl" , "host:2181:/hbase-secure").load()

.filter(col("CREATE_TMS").gt(lit(current_timestamp()))).show()

Query being sent to Phoenix:

select * FROM ORDER_LINE WHERE ( "CREATE_TMS" > 2018-02-02 20:06:51.545)

As the timestamp string does not have quotes, the query fails to run.

 

 

 


> phoenix-spark is not pushing timestamp filter correctly
> ---
>
> Key: PHOENIX-4581
> URL: https://issues.apache.org/jira/browse/PHOENIX-4581
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rama Mullapudi
>Priority: Major
>
> phoenix-spark is not pushing timestamp filter correctly
>  
> val tblDF = spark.sqlContext.read.format("org.apache.phoenix.spark")
> .option("table", "tablename").option("zkUrl" , 
> "host:2181:/hbase-secure").load()
> .filter(col("CREATE_TMS").gt(lit(current_timestamp()))).show()
>  
> Query being sent to Phoenix:
> select * FROM tablename WHERE ( "CREATE_TMS" > 2018-02-02 20:06:51.545)
>  
> As the timestamp string does not have quotes, the query fails to run.
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4581) phoenix-spark is not pushing timestamp filter correctly

2018-02-02 Thread Rama Mullapudi (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rama Mullapudi updated PHOENIX-4581:

Description: 
phoenix-spark is not pushing timestamp filter correctly

 

val tblDF = spark.sqlContext.read.format("org.apache.phoenix.spark")

.option("table", "tablename").option("zkUrl" , "host:2181:/hbase-secure").load()

.filter(col("CREATE_TMS").gt(lit(current_timestamp()))).show()

Query being sent to Phoenix:

select * FROM ORDER_LINE WHERE ( "CREATE_TMS" > 2018-02-02 20:06:51.545)

As the timestamp string does not have quotes, the query fails to run.

 

 

 

  was:
phoenix-spark is not pushing timestamp filter correctly

 

val tblDF = spark.sqlContext.read.format("org.apache.phoenix.spark")

.option("table", "ORDER_LINE").option("zkUrl" , 
"host:2181:/hbase-secure").load()

.filter(col("CREATE_TMS").gt(lit(current_timestamp()))).show()

Query being sent to Phoenix:

select * FROM ORDER_LINE WHERE ( "CREATE_TMS" > 2018-02-02 20:06:51.545)

As the timestamp string does not have quotes, the query fails to run.

 

 

 


> phoenix-spark is not pushing timestamp filter correctly
> ---
>
> Key: PHOENIX-4581
> URL: https://issues.apache.org/jira/browse/PHOENIX-4581
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rama Mullapudi
>Priority: Major
>
> phoenix-spark is not pushing timestamp filter correctly
>  
> val tblDF = spark.sqlContext.read.format("org.apache.phoenix.spark")
> .option("table", "tablename").option("zkUrl" , 
> "host:2181:/hbase-secure").load()
> .filter(col("CREATE_TMS").gt(lit(current_timestamp()))).show()
>  
> Query being sent to Phoenix:
> select * FROM ORDER_LINE WHERE ( "CREATE_TMS" > 2018-02-02 20:06:51.545)
>  
> As the timestamp string does not have quotes, the query fails to run.
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4577) make-rc.sh fails trying to copy the inline argparse into bin/

2018-02-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16351193#comment-16351193
 ] 

Hudson commented on PHOENIX-4577:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1925 (See 
[https://builds.apache.org/job/Phoenix-master/1925/])
PHOENIX-4577 Use cp -r for the argparse directory in bin/ (elserj: rev 
4521a2a1cff4edbce1cd6572af61bfbd1c0f0d01)
* (edit) dev/make_rc.sh


> make-rc.sh fails trying to copy the inline argparse into bin/
> -
>
> Key: PHOENIX-4577
> URL: https://issues.apache.org/jira/browse/PHOENIX-4577
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 5.0.0, 4.14.0
>
> Attachments: PHOENIX-4577.patch
>
>
> Silly little fix. Need to add a {{-r}} to the {{cp}} call.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)