[jira] [Created] (PHOENIX-4748) Upsert values don't correspond with order of fields

2018-05-21 Thread Jaanai Zhang (JIRA)
Jaanai Zhang created PHOENIX-4748:
-

 Summary: Upsert values don't correspond with order of fields 
 Key: PHOENIX-4748
 URL: https://issues.apache.org/jira/browse/PHOENIX-4748
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.11.0
 Environment: jdk 1.8

hbase 1.1
Reporter: Jaanai Zhang
 Attachments: Screen Shot 2018-05-22 at 11.10.05.png, create_table.sql

When an UPSERT was executed, I found that some values were not written into the 
intended field positions.
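
(For illustration only: a minimal sketch of the kind of statement being described. The table and columns below are hypothetical; the reporter's actual schema is in the attached create_table.sql.)

{code:sql}
-- hypothetical table; values should map positionally to the listed columns
CREATE TABLE example (id INTEGER PRIMARY KEY, a VARCHAR, b VARCHAR);
UPSERT INTO example (id, b, a) VALUES (1, 'value_for_b', 'value_for_a');
-- expected: a = 'value_for_a', b = 'value_for_b'
SELECT * FROM example;
{code}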



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4704) Presplit index tables when building asynchronously

2018-05-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16483432#comment-16483432
 ] 

Hudson commented on PHOENIX-4704:
-

SUCCESS: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1902 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1902/])
PHOENIX-4704 Presplit index tables when building asynchronously (vincentpoon: 
rev fce9a6712faf8df3117372a6cdf244d420e829d1)
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java


> Presplit index tables when building asynchronously
> --
>
> Key: PHOENIX-4704
> URL: https://issues.apache.org/jira/browse/PHOENIX-4704
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4704.master.v1.patch, 
> PHOENIX-4704.master.v2.patch
>
>
> For large data tables with many regions, if we build the index asynchronously 
> using the IndexTool, the index table will initially face a hotspot as all data 
> region mappers attempt to write to the sole new index region.  This can 
> potentially lead to the index getting disabled if writes to the index table 
> time out during this hotspotting.
> We can add an optional step (or perhaps activate it based on the count of 
> regions in the data table) to the IndexTool to first run an MR job to gather 
> stats on the indexed column values, and then attempt to presplit the index 
> table before we do the actual index build MR job.
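
(For context, a hedged sketch of the scenario described above; the table and index names are hypothetical.)

{code:sql}
-- ASYNC defers the index build to the IndexTool MapReduce job
-- (org.apache.phoenix.mapreduce.index.IndexTool); without a presplit, all
-- mappers write into the single initial region of NEW_IDX.
CREATE INDEX NEW_IDX ON LARGE_DATA_TABLE (INDEXED_COL) ASYNC;
{code}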



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4706) phoenix-core jar bundles dependencies unnecessarily

2018-05-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16483405#comment-16483405
 ] 

Hudson commented on PHOENIX-4706:
-

ABORTED: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #141 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/141/])
PHOENIX-4706 Remove bundling dependencies into phoenix-core (elserj: rev 
eb62c20408f0aadaec3da5e495812d7fc3cb2638)
* (edit) phoenix-core/pom.xml


> phoenix-core jar bundles dependencies unnecessarily
> ---
>
> Key: PHOENIX-4706
> URL: https://issues.apache.org/jira/browse/PHOENIX-4706
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4706.001.patch
>
>
> Got a report from some users about extra dependencies being included inside 
> the phoenix-core jar. I was a little confused about this, but, sure enough, 
> it's happening.
> Seems like this was done a very long time ago, but I'm not sure that it's 
> really something we want to do since there is a dedicated phoenix-client jar 
> now..



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (PHOENIX-1567) Publish Phoenix-Client & Phoenix-Server jars into Maven Repo

2018-05-21 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal reopened PHOENIX-1567:


Copying my comment from PHOENIX-4741, where we had a small discussion about the 
same topic.

We are shading some libraries in phoenix-client but not publishing the artifact 
to Maven. Thus users relying on the fat jar have no way to include it in their 
application pom.

From the above discussion, the conclusion seems to be that phoenix-core as a Maven 
dependency should be enough, but we didn't shade our client/server jars at that 
time and there were not many cases of transitive dependencies conflicting with the 
runtime library versions, so I would suggest we should revisit and start publishing 
our shaded artifacts as well. Let me know if it's fine, and I'll update the 
[release documentation|http://phoenix.apache.org/release.html] accordingly.

cc: [~elserj], [~jamestaylor], [~mujtabachohan], [~ndimiduk]

> Publish Phoenix-Client & Phoenix-Server jars into Maven Repo
> 
>
> Key: PHOENIX-1567
> URL: https://issues.apache.org/jira/browse/PHOENIX-1567
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Jeffrey Zhong
>Priority: Major
>
> Phoenix doesn't publish Phoenix Client & Server jars into the Maven repository. 
> This makes things quite hard for downstream projects/applications to use Maven 
> to resolve dependencies.
> I tried to modify the pom.xml under phoenix-assembly, but it shows the 
> following.
> {noformat}
> [INFO] Installing 
> /Users/jzhong/work/phoenix_apache/checkins/phoenix/phoenix-assembly/target/phoenix-4.3.0-SNAPSHOT-client.jar
>  
> to 
> /Users/jzhong/.m2/repository/org/apache/phoenix/phoenix-assembly/4.3.0-SNAPSHOT/phoenix-assembly-4.3.0-SNAPSHOT-client.jar
> {noformat}
> Basically the jar published to maven repo will become  
> phoenix-assembly-4.3.0-SNAPSHOT-client.jar or 
> phoenix-assembly-4.3.0-SNAPSHOT-server.jar
> The artifact id "phoenix-assembly" has to be the prefix of the names of jars.
> Therefore, the possible solutions are:
> 1) rename the current client & server jars to phoenix-assembly-client/server.jar 
> to match the jars published to the maven repo.
> 2) rename phoenix-assembly to something more meaningful and rename our client 
> & server jars accordingly
> 3) split phoenix-assembly and move the corresponding artifacts into 
> phoenix-client & phoenix-server folders. Phoenix-assembly will only create 
> tar ball files.
> [~giacomotaylor], [~apurtell] or other maven experts: Any suggestion on this? 
> Thanks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4666) Add a subquery cache that persists beyond the life of a query

2018-05-21 Thread Marcell Ortutay (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16483241#comment-16483241
 ] 

Marcell Ortutay commented on PHOENIX-4666:
--

[~maryannxue] [~jamestaylor] I was hoping to get a bit of guidance on where/how 
to handle the exception. I tried adding an exception handler for 
HashJoinCacheNotFoundException in BaseResultIterators.java, as shown here: 
[https://github.com/apache/phoenix/commit/b336644a37f6c65524ee91a06a6859c0215b08f2#diff-8c3d3f644c66ef36d5bc604f017fabfcR1315]
 , but that doesn't seem to be correct. What I was hoping to do was to re-run 
the entire query with caching disabled for specific cache ID's using the 
override mechanism. What actually happens is, apparently, it tries to iterate 
again using the same query? I'm not entirely sure of this part of the code, but 
that is what seems to be happening.

Is there a good way / place to have it re-run the entire query with the change to 
the StatementContext?

> Add a subquery cache that persists beyond the life of a query
> -
>
> Key: PHOENIX-4666
> URL: https://issues.apache.org/jira/browse/PHOENIX-4666
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Marcell Ortutay
>Assignee: Marcell Ortutay
>Priority: Major
>
> The user list thread for additional context is here: 
> [https://lists.apache.org/thread.html/e62a6f5d79bdf7cd238ea79aed8886816d21224d12b0f1fe9b6bb075@%3Cuser.phoenix.apache.org%3E]
> 
> A Phoenix query may contain expensive subqueries, and moreover those 
> expensive subqueries may be used across multiple different queries. While 
> whole result caching is possible at the application level, it is not possible 
> to cache subresults in the application. This can cause bad performance for 
> queries in which the subquery is the most expensive part of the query, and 
> the application is powerless to do anything at the query level. It would be 
> good if Phoenix provided a way to cache subquery results, as it would provide 
> a significant performance gain.
> An illustrative example:
>     SELECT * FROM table1 JOIN (SELECT id_1 FROM large_table WHERE x = 10) 
> expensive_result ON table1.id_1 = expensive_result.id_2 AND table1.id_1 = 
> \{id}
> In this case, the subquery "expensive_result" is expensive to compute, but it 
> doesn't change between queries. The rest of the query does because of the 
> \{id} parameter. This means the application can't cache it, but it would be 
> good if there was a way to cache expensive_result.
> Note that there is currently a coprocessor based "server cache", but the data 
> in this "cache" is not persisted across queries. It is deleted after a TTL 
> expires (30sec by default), or when the query completes.
> This issue is fairly high priority for us at 23andMe and we'd be happy to 
> provide a patch with some guidance from Phoenix maintainers. We are currently 
> putting together a design document for a solution, and we'll post it to this 
> Jira ticket for review in a few days.
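
(A runnable reading of the illustrative example above, assuming hypothetical table and column names; the join key is written as id_1 so it matches the subquery's projection.)

{code:sql}
SELECT table1.*
FROM table1
JOIN (SELECT id_1 FROM large_table WHERE x = 10) expensive_result
  ON table1.id_1 = expensive_result.id_1
WHERE table1.id_1 = ?;  -- only this bind parameter changes between executions
{code}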



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4747) UDF's integer parameter doens't accept negative constant.

2018-05-21 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16483162#comment-16483162
 ] 

Sergey Soldatov commented on PHOENIX-4747:
--

[~jamestaylor] yeah, there is a workaround: using CAST(-1 AS INTEGER). 
Possibly we could do this automatically at compile time.
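
(A minimal sketch of the workaround; ADDTIME is the UDF named in the stack trace, while the table and column names are hypothetical.)

{code:sql}
SELECT ADDTIME(ts_col, -1) FROM events;                   -- fails: the literal is typed as BIGINT
SELECT ADDTIME(ts_col, CAST(-1 AS INTEGER)) FROM events;  -- workaround: pin the literal to INTEGER
{code}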

> UDF's integer parameter doens't accept negative constant.
> -
>
> Key: PHOENIX-4747
> URL: https://issues.apache.org/jira/browse/PHOENIX-4747
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Priority: Major
> Attachments: PHOENIX-4747-IT.patch
>
>
> If UDF has an integer parameter and we provide a negative constant it fails 
> with 
> {noformat}
> org.apache.phoenix.schema.ArgumentTypeMismatchException: ERROR 203 (22005): 
> Type mismatch. expected: [INTEGER] but was: BIGINT at ADDTIME argument 2
>   at 
> org.apache.phoenix.parse.FunctionParseNode.validateFunctionArguement(FunctionParseNode.java:214)
>   at 
> org.apache.phoenix.parse.FunctionParseNode.validate(FunctionParseNode.java:193)
>   at 
> org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:331)
>   at 
> org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visitLeave(ProjectionCompiler.java:700)
>   at 
> org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visitLeave(ProjectionCompiler.java:585)
>   at 
> org.apache.phoenix.parse.FunctionParseNode.accept(FunctionParseNode.java:86)
>   at 
> org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:412)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:561)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:507)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:193)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:153)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:490)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:456)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:302)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:291)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:290)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:283)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1830)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:813)
>   at sqlline.SqlLine.begin(SqlLine.java:686)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:291)
> {noformat}
> That happens because negative constants are parsed as integer value * -1L, so 
> the result is long. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4706) phoenix-core jar bundles dependencies unnecessarily

2018-05-21 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved PHOENIX-4706.
-
Resolution: Fixed

Thanks James and Mujtaba!

> phoenix-core jar bundles dependencies unnecessarily
> ---
>
> Key: PHOENIX-4706
> URL: https://issues.apache.org/jira/browse/PHOENIX-4706
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4706.001.patch
>
>
> Got a report from some users about extra dependencies being included inside 
> the phoenix-core jar. I was a little confused about this, but, sure enough, 
> it's happening.
> Seems like this was done a very long time ago, but I'm not sure that it's 
> really something we want to do since there is a dedicated phoenix-client jar 
> now..



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4747) UDF's integer parameter doens't accept negative constant.

2018-05-21 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16483147#comment-16483147
 ] 

James Taylor commented on PHOENIX-4747:
---

We tried to get the parser to work with negative constants, but it gets 
confused with things like this:
{code:java}
SELECT foo - -1{code}
Instead, we treat 1 as a constant and turn this into an expression that multiplies 
it by -1. This gets evaluated at compile time, so there's no runtime overhead. We 
also typically widen numbers to longs, as it makes life easier in general. I have a 
feeling there may be workarounds for this issue.

> UDF's integer parameter doens't accept negative constant.
> -
>
> Key: PHOENIX-4747
> URL: https://issues.apache.org/jira/browse/PHOENIX-4747
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Priority: Major
> Attachments: PHOENIX-4747-IT.patch
>
>
> If UDF has an integer parameter and we provide a negative constant it fails 
> with 
> {noformat}
> org.apache.phoenix.schema.ArgumentTypeMismatchException: ERROR 203 (22005): 
> Type mismatch. expected: [INTEGER] but was: BIGINT at ADDTIME argument 2
>   at 
> org.apache.phoenix.parse.FunctionParseNode.validateFunctionArguement(FunctionParseNode.java:214)
>   at 
> org.apache.phoenix.parse.FunctionParseNode.validate(FunctionParseNode.java:193)
>   at 
> org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:331)
>   at 
> org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visitLeave(ProjectionCompiler.java:700)
>   at 
> org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visitLeave(ProjectionCompiler.java:585)
>   at 
> org.apache.phoenix.parse.FunctionParseNode.accept(FunctionParseNode.java:86)
>   at 
> org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:412)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:561)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:507)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:193)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:153)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:490)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:456)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:302)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:291)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:290)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:283)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1830)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:813)
>   at sqlline.SqlLine.begin(SqlLine.java:686)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:291)
> {noformat}
> That happens because negative constants are parsed as integer value * -1L, so 
> the result is long. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4534) upsert/delete/upsert for the same row corrupts the indexes

2018-05-21 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4534:
-
Fix Version/s: (was: 4.15.0)
   4.14.0

> upsert/delete/upsert for the same row corrupts the indexes
> --
>
> Key: PHOENIX-4534
> URL: https://issues.apache.org/jira/browse/PHOENIX-4534
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Romil Choksi
>Assignee: Rajeshbabu Chintaguntla
>Priority: Critical
>  Labels: HBase-2.0
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4534.patch, PHOENIX-4534_v2.patch, 
> PHOENIX-4534_v3.patch
>
>
> If we delete and then upsert the same row again, the corresponding index has a 
> null value. 
> {noformat}
> 0: jdbc:phoenix:> create table a (id integer primary key, f float);
> No rows affected (2.272 seconds)
> 0: jdbc:phoenix:> create index i1 on a (f);
> No rows affected (5.769 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.021 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> | 0.5  | 1|
> +--+--+
> 1 row selected (0.016 seconds)
> 0: jdbc:phoenix:> delete from a where id = 1;
> 1 row affected (0.009 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +--+--+
> | 0:F  | :ID  |
> +--+--+
> +--+--+
> No rows selected (0.015 seconds)
> 0: jdbc:phoenix:> upsert into a values (1,0.5);
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:> select * from i1;
> +---+--+
> |  0:F  | :ID  |
> +---+--+
> | null  | 1|
> +---+--+
> 1 row selected (0.013 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4747) UDF's integer parameter doens't accept negative constant.

2018-05-21 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16483143#comment-16483143
 ] 

Sergey Soldatov commented on PHOENIX-4747:
--

Patch to reproduce the problem using an existing integration test.

> UDF's integer parameter doens't accept negative constant.
> -
>
> Key: PHOENIX-4747
> URL: https://issues.apache.org/jira/browse/PHOENIX-4747
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Priority: Major
> Attachments: PHOENIX-4747-IT.patch
>
>
> If UDF has an integer parameter and we provide a negative constant it fails 
> with 
> {noformat}
> org.apache.phoenix.schema.ArgumentTypeMismatchException: ERROR 203 (22005): 
> Type mismatch. expected: [INTEGER] but was: BIGINT at ADDTIME argument 2
>   at 
> org.apache.phoenix.parse.FunctionParseNode.validateFunctionArguement(FunctionParseNode.java:214)
>   at 
> org.apache.phoenix.parse.FunctionParseNode.validate(FunctionParseNode.java:193)
>   at 
> org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:331)
>   at 
> org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visitLeave(ProjectionCompiler.java:700)
>   at 
> org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visitLeave(ProjectionCompiler.java:585)
>   at 
> org.apache.phoenix.parse.FunctionParseNode.accept(FunctionParseNode.java:86)
>   at 
> org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:412)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:561)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:507)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:193)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:153)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:490)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:456)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:302)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:291)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:290)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:283)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1830)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:813)
>   at sqlline.SqlLine.begin(SqlLine.java:686)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:291)
> {noformat}
> That happens because negative constants are parsed as integer value * -1L, so 
> the result is long. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4747) UDF's integer parameter doens't accept negative constant.

2018-05-21 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4747:
-
Attachment: PHOENIX-4747-IT.patch

> UDF's integer parameter doens't accept negative constant.
> -
>
> Key: PHOENIX-4747
> URL: https://issues.apache.org/jira/browse/PHOENIX-4747
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Priority: Major
> Attachments: PHOENIX-4747-IT.patch
>
>
> If UDF has an integer parameter and we provide a negative constant it fails 
> with 
> {noformat}
> org.apache.phoenix.schema.ArgumentTypeMismatchException: ERROR 203 (22005): 
> Type mismatch. expected: [INTEGER] but was: BIGINT at ADDTIME argument 2
>   at 
> org.apache.phoenix.parse.FunctionParseNode.validateFunctionArguement(FunctionParseNode.java:214)
>   at 
> org.apache.phoenix.parse.FunctionParseNode.validate(FunctionParseNode.java:193)
>   at 
> org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:331)
>   at 
> org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visitLeave(ProjectionCompiler.java:700)
>   at 
> org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visitLeave(ProjectionCompiler.java:585)
>   at 
> org.apache.phoenix.parse.FunctionParseNode.accept(FunctionParseNode.java:86)
>   at 
> org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:412)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:561)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:507)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:193)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:153)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:490)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:456)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:302)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:291)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:290)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:283)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1830)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:813)
>   at sqlline.SqlLine.begin(SqlLine.java:686)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:291)
> {noformat}
> That happens because negative constants are parsed as integer value * -1L, so 
> the result is long. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4739) Update phoenix 5.0 with hive new API getBufferedRowCount

2018-05-21 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-4739.

Resolution: Fixed

> Update phoenix 5.0 with hive new API getBufferedRowCount
> 
>
> Key: PHOENIX-4739
> URL: https://issues.apache.org/jira/browse/PHOENIX-4739
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0-alpha
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4739.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4706) phoenix-core jar bundles dependencies unnecessarily

2018-05-21 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4706:

Fix Version/s: (was: 4.15.0)
   4.14.0

> phoenix-core jar bundles dependencies unnecessarily
> ---
>
> Key: PHOENIX-4706
> URL: https://issues.apache.org/jira/browse/PHOENIX-4706
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4706.001.patch
>
>
> Got a report from some users about extra dependencies being included inside 
> the phoenix-core jar. I was a little confused about this, but, sure enough, 
> it's happening.
> Seems like this was done a very long time ago, but I'm not sure that it's 
> really something we want to do since there is a dedicated phoenix-client jar 
> now..



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4706) phoenix-core jar bundles dependencies unnecessarily

2018-05-21 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16483084#comment-16483084
 ] 

Josh Elser commented on PHOENIX-4706:
-

Sweet. Thanks Mujtaba! No worries on the delay.

> phoenix-core jar bundles dependencies unnecessarily
> ---
>
> Key: PHOENIX-4706
> URL: https://issues.apache.org/jira/browse/PHOENIX-4706
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-4706.001.patch
>
>
> Got a report from some users about extra dependencies being included inside 
> the phoenix-core jar. I was a little confused about this, but, sure enough, 
> it's happening.
> Seems like this was done a very long time ago, but I'm not sure that it's 
> really something we want to do since there is a dedicated phoenix-client jar 
> now..



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4741) Shade disruptor dependency

2018-05-21 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16483076#comment-16483076
 ] 

Josh Elser commented on PHOENIX-4741:
-

{quote}I would suggest we should revisit and start publishing our shaded 
artifacts as well
{quote}
I'd support that, but would suggest you re-open PHOENIX-1567 and do that work 
there (I'm sure you would have on your own).
{quote}Also, there is a bug where, during "mvn install", phoenix-client.jar and 
then phoenix-server.jar are copied into the phoenix-core path, resulting in 
phoenix-server.jar acting as our phoenix-core.jar; the attached patch should fix it.
{quote}
That still seems separate from the original intent of this change. Should we file 
a new Jira issue for this bug?

I feel like a better fix might be to just remove that {{maven-install-plugin}} 
configuration. If we attach the shaded artifact to the build (as you would 
change in PHOENIX-1567), I think explicitly stating this plugin would be 
unnecessary... (I think ;))

> Shade disruptor dependency 
> ---
>
> Key: PHOENIX-4741
> URL: https://issues.apache.org/jira/browse/PHOENIX-4741
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Jungtaek Lim
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4741.patch
>
>
> We should shade the disruptor dependency to avoid conflicts with the versions 
> used by other frameworks like Storm, Hive, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4747) UDF's integer parameter doens't accept negative constant.

2018-05-21 Thread Sergey Soldatov (JIRA)
Sergey Soldatov created PHOENIX-4747:


 Summary: UDF's integer parameter doens't accept negative constant.
 Key: PHOENIX-4747
 URL: https://issues.apache.org/jira/browse/PHOENIX-4747
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.0
Reporter: Sergey Soldatov


If UDF has an integer parameter and we provide a negative constant it fails 
with 
{noformat}
org.apache.phoenix.schema.ArgumentTypeMismatchException: ERROR 203 (22005): 
Type mismatch. expected: [INTEGER] but was: BIGINT at ADDTIME argument 2
at 
org.apache.phoenix.parse.FunctionParseNode.validateFunctionArguement(FunctionParseNode.java:214)
at 
org.apache.phoenix.parse.FunctionParseNode.validate(FunctionParseNode.java:193)
at 
org.apache.phoenix.compile.ExpressionCompiler.visitLeave(ExpressionCompiler.java:331)
at 
org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visitLeave(ProjectionCompiler.java:700)
at 
org.apache.phoenix.compile.ProjectionCompiler$SelectClauseVisitor.visitLeave(ProjectionCompiler.java:585)
at 
org.apache.phoenix.parse.FunctionParseNode.accept(FunctionParseNode.java:86)
at 
org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:412)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:561)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:507)
at 
org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:193)
at 
org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:153)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:490)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:456)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:302)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:291)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:290)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:283)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1830)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:813)
at sqlline.SqlLine.begin(SqlLine.java:686)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)
{noformat}
That happens because negative constants are parsed as integer value * -1L, so 
the result is long. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (PHOENIX-4745) Update Tephra version to 0.14.0-incubating

2018-05-21 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16483068#comment-16483068
 ] 

Ankit Singhal edited comment on PHOENIX-4745 at 5/21/18 9:16 PM:
-

[~jamestaylor] / [~poornachandra], has the release of Tephra 0.14.0-incubating 
happened? I don't see any formal announcement for an RC or release yet.


was (Author: an...@apache.org):
[~jamestaylor] / [~poornachandra], has the release of Tephra 0.14.0-incubating 
happened? I can see the updates in the pom and the jars in the Maven repository, 
but I haven't seen any formal announcement on the Tephra dev group.

> Update Tephra version to 0.14.0-incubating
> --
>
> Key: PHOENIX-4745
> URL: https://issues.apache.org/jira/browse/PHOENIX-4745
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4745.patch
>
>
> Update to Tephra 0.14.0-incubating, mainly for HBase 1.4 and HBase 2.0 compat 
> modules.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4745) Update Tephra version to 0.14.0-incubating

2018-05-21 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16483068#comment-16483068
 ] 

Ankit Singhal commented on PHOENIX-4745:


[~jamestaylor] / [~poornachandra], has the release of Tephra 0.14.0-incubating 
happened? I can see the updates in the pom and the jars in the Maven repository, 
but I haven't seen any formal announcement on the Tephra dev group.

> Update Tephra version to 0.14.0-incubating
> --
>
> Key: PHOENIX-4745
> URL: https://issues.apache.org/jira/browse/PHOENIX-4745
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4745.patch
>
>
> Update to Tephra 0.14.0-incubating, mainly for HBase 1.4 and HBase 2.0 compat 
> modules.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4741) Shade disruptor dependency

2018-05-21 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4741:
---
Attachment: PHOENIX-4741.patch

> Shade disruptor dependency 
> ---
>
> Key: PHOENIX-4741
> URL: https://issues.apache.org/jira/browse/PHOENIX-4741
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Jungtaek Lim
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4741.patch
>
>
> We should shade the disruptor dependency to avoid conflicts with the versions 
> used by other frameworks like Storm, Hive, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4741) Shade disruptor dependency

2018-05-21 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16483066#comment-16483066
 ] 

Ankit Singhal commented on PHOENIX-4741:


{quote}If we're already shading, shouldn't we close this as "Not a Problem"?
{quote}
[~jamestaylor], yes, we are shading the disruptor library in phoenix-client but not 
publishing it to Maven. Thus end users relying on the fat jar have no way to 
include it in their application pom.

[~elserj] pointed me to PHOENIX-1567, where we had a similar discussion on whether 
to publish such fat jars to Maven or not. The conclusion seems to be that 
phoenix-core as a Maven dependency should be enough, but at that time we didn't 
shade our client/server jars and there were not many cases of transitive 
dependencies conflicting with the runtime library versions, so I would suggest we 
should revisit and start publishing our shaded artifacts as well. Let me know if 
it's fine, and I'll update the 
[release documentation|http://phoenix.apache.org/release.html] accordingly.

Also, there is a bug where, during "mvn install", phoenix-client.jar and then 
phoenix-server.jar are copied into the phoenix-core path, resulting in 
phoenix-server.jar acting as our phoenix-core.jar; the attached patch should fix it.

> Shade disruptor dependency 
> ---
>
> Key: PHOENIX-4741
> URL: https://issues.apache.org/jira/browse/PHOENIX-4741
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Jungtaek Lim
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4741.patch
>
>
> We should shade the disruptor dependency to avoid conflicts with the versions 
> used by other frameworks like Storm, Hive, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4692) ArrayIndexOutOfBoundsException in ScanRanges.intersectScan

2018-05-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16483040#comment-16483040
 ] 

Hudson commented on PHOENIX-4692:
-

ABORTED: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #140 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/140/])
PHOENIX-4692 ArrayIndexOutOfBoundsException in ScanRanges.intersectScan 
(maryannxue: rev b1165008230e212cfec1e52d86c6945176ad1d60)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/BaseQueryPlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/WhereCompiler.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/SkipScanQueryIT.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/execute/HashJoinPlan.java


> ArrayIndexOutOfBoundsException in ScanRanges.intersectScan
> --
>
> Key: PHOENIX-4692
> URL: https://issues.apache.org/jira/browse/PHOENIX-4692
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4692-IT.patch, PHOENIX-4692_v1.patch, 
> PHOENIX-4692_v2.patch
>
>
> ScanRanges.intersectScan may fail with AIOOBE if a salted table is used.
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 1
>   at org.apache.phoenix.util.ScanUtil.getKey(ScanUtil.java:333)
>   at org.apache.phoenix.util.ScanUtil.getMinKey(ScanUtil.java:317)
>   at 
> org.apache.phoenix.compile.ScanRanges.intersectScan(ScanRanges.java:371)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:1074)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:631)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.<init>(BaseResultIterators.java:501)
>   at 
> org.apache.phoenix.iterate.ParallelIterators.<init>(ParallelIterators.java:62)
>   at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:274)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:364)
>   at 
> org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:234)
>   at 
> org.apache.phoenix.execute.DelegateQueryPlan.iterator(DelegateQueryPlan.java:144)
>   at 
> org.apache.phoenix.execute.DelegateQueryPlan.iterator(DelegateQueryPlan.java:139)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:314)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:293)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:292)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:285)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:1798)
> {noformat}
> Script to reproduce:
> {noformat}
> CREATE TABLE TEST (PK1 INTEGER NOT NULL, PK2 INTEGER NOT NULL,  ID1 INTEGER, 
> ID2 INTEGER CONSTRAINT PK PRIMARY KEY(PK1 , PK2))SALT_BUCKETS = 4;
> upsert into test values (1,1,1,1);
> upsert into test values (2,2,2,2);
> upsert into test values (2,3,1,2);
> create view TEST_VIEW as select * from TEST where PK1 in (1,2);
> CREATE INDEX IDX_VIEW ON TEST_VIEW (ID1);
>   select /*+ INDEX(TEST_VIEW IDX_VIEW) */ * from TEST_VIEW where ID1 = 1  
> ORDER BY ID2 LIMIT 500 OFFSET 0;
> {noformat}
> That happens because we have a point lookup optimization which reduces 
> RowKeySchema to a single field, while we have more than one slot due to salting. 
> [~jamestaylor], can you please take a look? I'm not sure whether it should be 
> fixed at the ScanUtil level or whether we just should not use point lookup in 
> such cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4692) ArrayIndexOutOfBoundsException in ScanRanges.intersectScan

2018-05-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16483002#comment-16483002
 ] 

Hudson commented on PHOENIX-4692:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1900 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1900/])
PHOENIX-4692 ArrayIndexOutOfBoundsException in ScanRanges.intersectScan 
(maryannxue: rev 097881c3fad5bb746d2a58ae3f90d52d40121cfc)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/SkipScanQueryIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/WhereCompiler.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/BaseQueryPlan.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/execute/HashJoinPlan.java


> ArrayIndexOutOfBoundsException in ScanRanges.intersectScan
> --
>
> Key: PHOENIX-4692
> URL: https://issues.apache.org/jira/browse/PHOENIX-4692
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4692-IT.patch, PHOENIX-4692_v1.patch, 
> PHOENIX-4692_v2.patch
>
>
> ScanRanges.intersectScan may fail with AIOOBE if a salted table is used.
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 1
>   at org.apache.phoenix.util.ScanUtil.getKey(ScanUtil.java:333)
>   at org.apache.phoenix.util.ScanUtil.getMinKey(ScanUtil.java:317)
>   at 
> org.apache.phoenix.compile.ScanRanges.intersectScan(ScanRanges.java:371)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:1074)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:631)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.<init>(BaseResultIterators.java:501)
>   at 
> org.apache.phoenix.iterate.ParallelIterators.<init>(ParallelIterators.java:62)
>   at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:274)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:364)
>   at 
> org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:234)
>   at 
> org.apache.phoenix.execute.DelegateQueryPlan.iterator(DelegateQueryPlan.java:144)
>   at 
> org.apache.phoenix.execute.DelegateQueryPlan.iterator(DelegateQueryPlan.java:139)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:314)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:293)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:292)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:285)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:1798)
> {noformat}
> Script to reproduce:
> {noformat}
> CREATE TABLE TEST (PK1 INTEGER NOT NULL, PK2 INTEGER NOT NULL,  ID1 INTEGER, 
> ID2 INTEGER CONSTRAINT PK PRIMARY KEY(PK1 , PK2))SALT_BUCKETS = 4;
> upsert into test values (1,1,1,1);
> upsert into test values (2,2,2,2);
> upsert into test values (2,3,1,2);
> create view TEST_VIEW as select * from TEST where PK1 in (1,2);
> CREATE INDEX IDX_VIEW ON TEST_VIEW (ID1);
>   select /*+ INDEX(TEST_VIEW IDX_VIEW) */ * from TEST_VIEW where ID1 = 1  
> ORDER BY ID2 LIMIT 500 OFFSET 0;
> {noformat}
> That happens because we have a point lookup optimization which reduces 
> RowKeySchema to a single field, while we have more than one slot due to salting. 
> [~jamestaylor], can you please take a look? I'm not sure whether it should be 
> fixed at the ScanUtil level or whether we just should not use point lookup in 
> such cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-1567) Publish Phoenix-Client & Phoenix-Server jars into Maven Repo

2018-05-21 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16482624#comment-16482624
 ] 

Josh Elser commented on PHOENIX-1567:
-

FYI [~an...@apache.org]

> Publish Phoenix-Client & Phoenix-Server jars into Maven Repo
> 
>
> Key: PHOENIX-1567
> URL: https://issues.apache.org/jira/browse/PHOENIX-1567
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Jeffrey Zhong
>Priority: Major
>
> Phoenix doesn't publish Phoenix Client & Server jars into the Maven repository. 
> This makes things quite hard for downstream projects/applications to use Maven 
> to resolve dependencies.
> I tried to modify the pom.xml under phoenix-assembly, but it shows the 
> following.
> {noformat}
> [INFO] Installing 
> /Users/jzhong/work/phoenix_apache/checkins/phoenix/phoenix-assembly/target/phoenix-4.3.0-SNAPSHOT-client.jar
>  
> to 
> /Users/jzhong/.m2/repository/org/apache/phoenix/phoenix-assembly/4.3.0-SNAPSHOT/phoenix-assembly-4.3.0-SNAPSHOT-client.jar
> {noformat}
> Basically the jar published to maven repo will become  
> phoenix-assembly-4.3.0-SNAPSHOT-client.jar or 
> phoenix-assembly-4.3.0-SNAPSHOT-server.jar
> The artifact id "phoenix-assembly" has to be the prefix of the names of jars.
> Therefore, the possible solutions are:
> 1) rename the current client & server jars to phoenix-assembly-client/server.jar 
> to match the jars published to the maven repo.
> 2) rename phoenix-assembly to something more meaningful and rename our client 
> & server jars accordingly
> 3) split phoenix-assembly and move the corresponding artifacts into 
> phoenix-client & phoenix-server folders. Phoenix-assembly will only create 
> tar ball files.
> [~giacomotaylor], [~apurtell] or other maven experts: Any suggestion on this? 
> Thanks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)