[jira] [Commented] (PHOENIX-2841) UDF not working with sort merge join

2016-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15247232#comment-15247232
 ] 

Hadoop QA commented on PHOENIX-2841:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12799314/PHOENIX-2841.patch
  against master branch at commit a31b70179cbc468cc3fe890bd615fc71e0bac5a2.
  ATTACHMENT ID: 12799314

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
27 warning messages.

{color:red}-1 release audit{color}.  The applied patch generated 4 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+String query = "SELECT /*+ USE_SORT_MERGE_JOIN*/ item.\"item_id\", 
item.name, supp.\"supplier_id\", myreverse(supp.name) FROM "
+"create function myreverse(VARCHAR) returns VARCHAR as 
'org.apache.phoenix.end2end.MyReverse' using jar "
++ "'" + 
util.getConfiguration().get(DYNAMIC_JARS_DIR_KEY) + "/myjar1.jar" + "'");
+ColumnResolver resolver = 
FromCompiler.getResolverForProjectedTable(projectedTable, 
context.getConnection(), joinTable.getStatement().getUdfParseNodes());

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.SubqueryUsingSortMergeJoinIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.SaltedViewIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ViewIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.SubqueryIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.UserDefinedFunctionsIT

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hdfs.server.namenode.TestEditLog.testFuzzSequences(TestEditLog.java:1380)

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/302//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/302//artifact/patchprocess/patchReleaseAuditWarnings.txt
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/302//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/302//console

This message is automatically generated.

> UDF not working with sort merge join
> 
>
> Key: PHOENIX-2841
> URL: https://issues.apache.org/jira/browse/PHOENIX-2841
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Attachments: PHOENIX-2841.patch
>
>
> User reported the following issue on the community portal.
> https://community.hortonworks.com/questions/26693/orgapachephoenixschemafunctionnotfoundexception-er.html
> UDFs are not getting resolved whenever sort merge join is used.
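For context, a minimal sketch of how the reported scenario can be reproduced through the Phoenix JDBC driver, based on the test fragments quoted in the line-length warnings above. The JDBC URL, table names and jar path are illustrative assumptions, and MYREVERSE stands in for any user-defined function.

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class UdfSortMergeJoinRepro {
    public static void main(String[] args) throws Exception {
        // Assumed JDBC URL; adjust for your cluster.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // Register the UDF from a jar (class name and jar path are illustrative).
            stmt.execute("CREATE FUNCTION MYREVERSE(VARCHAR) RETURNS VARCHAR AS "
                    + "'org.apache.phoenix.end2end.MyReverse' USING JAR 'hdfs:/tmp/myjar1.jar'");
            // Force a sort merge join; before the patch this path failed to resolve the UDF
            // (FunctionNotFoundException), while the same query worked with a hash join.
            String query = "SELECT /*+ USE_SORT_MERGE_JOIN */ item.\"item_id\", MYREVERSE(supp.name) "
                    + "FROM ITEM_TABLE item JOIN SUPPLIER_TABLE supp "
                    + "ON item.\"supplier_id\" = supp.\"supplier_id\"";
            try (ResultSet rs = stmt.executeQuery(query)) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " -> " + rs.getString(2));
                }
            }
        }
    }
}
{code}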



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2743) HivePhoenixHandler for big-big join with predicate push down

2016-04-18 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15247173#comment-15247173
 ] 

Sergey Soldatov commented on PHOENIX-2743:
--

Yep, that's because of the recent commit about splits. Got it fixed, testing at 
the moment. I will make a new pull request rebased on the master.

> HivePhoenixHandler for big-big join with predicate push down
> 
>
> Key: PHOENIX-2743
> URL: https://issues.apache.org/jira/browse/PHOENIX-2743
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.5.0, 4.6.0
> Environment: hive-1.2.1
>Reporter: JeongMin Ju
>  Labels: features, performance
> Attachments: PHOENIX-2743-1.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Phoenix supports hash join & sort-merge join, but big-big joins are not 
> processed well, so another method, like Hive's, is needed.
> I implemented a hive-phoenix-handler that can access Apache Phoenix tables on 
> HBase using HiveQL.
> hive-phoenix-handler is much faster than hive-hbase-handler because it applies 
> predicate push down.
> I am publishing the source code to GitHub for contribution and it will likely 
> be completed by next week.
> https://github.com/mini666/hive-phoenix-handler
> Please review my proposal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2743) HivePhoenixHandler for big-big join with predicate push down

2016-04-18 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15247160#comment-15247160
 ] 

Josh Elser commented on PHOENIX-2743:
-

[~sergey.soldatov], some compilation issues on the latest?

{noformat}
[INFO] -
[ERROR] COMPILATION ERROR :
[INFO] -
[ERROR] 
/Users/jelser/projects/phoenix.git/phoenix-hive/src/main/java/org/apache/phoenix/hive/mapreduce/PhoenixRecordReader.java:[116,65]
 no suitable constructor found for 
TableResultIterator(org.apache.phoenix.execute.MutationState,org.apache.phoenix.schema.TableRef,org.apache.hadoop.hbase.client.Scan,org.apache.phoenix.monitoring.CombinableMetric,long)
constructor 
org.apache.phoenix.iterate.TableResultIterator.TableResultIterator(org.apache.phoenix.execute.MutationState,org.apache.hadoop.hbase.client.Scan,org.apache.phoenix.monitoring.CombinableMetric,long,org.apache.phoenix.iterate.PeekingResultIterator,org.apache.phoenix.compile.QueryPlan,org.apache.phoenix.iterate.ParallelScanGrouper)
 is not applicable
  (actual and formal argument lists differ in length)
constructor 
org.apache.phoenix.iterate.TableResultIterator.TableResultIterator(org.apache.phoenix.execute.MutationState,org.apache.hadoop.hbase.client.Scan,org.apache.phoenix.monitoring.CombinableMetric,long,org.apache.phoenix.compile.QueryPlan,org.apache.phoenix.iterate.ParallelScanGrouper)
 is not applicable
  (actual and formal argument lists differ in length)
constructor 
org.apache.phoenix.iterate.TableResultIterator.TableResultIterator() is not 
applicable
  (actual and formal argument lists differ in length)
[INFO] 1 error
{noformat}
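For reference, a hedged sketch (not the actual patch) of how the failing call site might be adapted to the second constructor listed above. All parameter names here are assumptions, and the real fix may obtain the QueryPlan and ParallelScanGrouper differently.

{code}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.phoenix.compile.QueryPlan;
import org.apache.phoenix.execute.MutationState;
import org.apache.phoenix.iterate.ParallelScanGrouper;
import org.apache.phoenix.iterate.TableResultIterator;
import org.apache.phoenix.monitoring.CombinableMetric;

class RecordReaderIteratorSketch {
    // The failing call above passes a TableRef; the remaining constructors instead
    // derive the table from a QueryPlan and also require a ParallelScanGrouper.
    TableResultIterator open(MutationState mutationState, Scan scan, CombinableMetric scanMetric,
            long renewLeaseThreshold, QueryPlan queryPlan, ParallelScanGrouper scanGrouper)
            throws Exception {
        return new TableResultIterator(mutationState, scan, scanMetric,
                renewLeaseThreshold, queryPlan, scanGrouper);
    }
}
{code}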

> HivePhoenixHandler for big-big join with predicate push down
> 
>
> Key: PHOENIX-2743
> URL: https://issues.apache.org/jira/browse/PHOENIX-2743
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.5.0, 4.6.0
> Environment: hive-1.2.1
>Reporter: JeongMin Ju
>  Labels: features, performance
> Attachments: PHOENIX-2743-1.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Phoenix supports hash join & sort-merge join, but big-big joins are not 
> processed well, so another method, like Hive's, is needed.
> I implemented a hive-phoenix-handler that can access Apache Phoenix tables on 
> HBase using HiveQL.
> hive-phoenix-handler is much faster than hive-hbase-handler because it applies 
> predicate push down.
> I am publishing the source code to GitHub for contribution and it will likely 
> be completed by next week.
> https://github.com/mini666/hive-phoenix-handler
> Please review my proposal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (PHOENIX-2842) Queries with offset shouldn't be using Spooling and chunked result iterators

2016-04-18 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain reassigned PHOENIX-2842:
-

Assignee: Samarth Jain  (was: Ankit Singhal)

> Queries with offset shouldn't be using Spooling and chunked result iterators
> 
>
> Key: PHOENIX-2842
> URL: https://issues.apache.org/jira/browse/PHOENIX-2842
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>
> While reworking our serial queries for PHOENIX-2724, I noticed that queries 
> with offset always use SpoolingResultIterator and ChunkedResultIterator when 
> there is no non-row-key order by.
> In ScanPlan.java:
> {code}
> private static ParallelIteratorFactory buildResultIteratorFactory(StatementContext context,
>         FilterableStatement statement, TableRef table, OrderBy orderBy, Integer limit,
>         Integer offset, boolean allowPageFilter) throws SQLException {
>     if ((isSerial(context, statement, table, orderBy, limit, offset, allowPageFilter)
>             || ScanUtil.isRoundRobinPossible(orderBy, context)
>             || ScanUtil.isPacingScannersPossible(context))
>             && offset == null) { return ParallelIteratorFactory.NOOP_FACTORY; }
> {code}
> Spooling and chunking are deprecated, so the code shouldn't rely on them always 
> being used. Unfortunately, removing the offset != null check causes tests to 
> fail in QueryWithOffsetIT.java, because if we don't create chunked result 
> iterators, the server-side scanners are not advanced up to the offset.
> [~ankit.singhal] - can you please check?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2842) Queries with offset shouldn't be using Spooling and chunked result iterators

2016-04-18 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15246866#comment-15246866
 ] 

Samarth Jain commented on PHOENIX-2842:
---

Actually, I think I will take up this JIRA myself since the work needed here 
affects my patch for PHOENIX-2724. 

> Queries with offset shouldn't be using Spooling and chunked result iterators
> 
>
> Key: PHOENIX-2842
> URL: https://issues.apache.org/jira/browse/PHOENIX-2842
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Ankit Singhal
>
> While reworking our serial queries for PHOENIX-2724, I noticed that queries 
> with offset always use SpoolingResultIterator and ChunkedResultIterator when 
> there is no non-row-key order by.
> In ScanPlan.java:
> {code}
> private static ParallelIteratorFactory buildResultIteratorFactory(StatementContext context,
>         FilterableStatement statement, TableRef table, OrderBy orderBy, Integer limit,
>         Integer offset, boolean allowPageFilter) throws SQLException {
>     if ((isSerial(context, statement, table, orderBy, limit, offset, allowPageFilter)
>             || ScanUtil.isRoundRobinPossible(orderBy, context)
>             || ScanUtil.isPacingScannersPossible(context))
>             && offset == null) { return ParallelIteratorFactory.NOOP_FACTORY; }
> {code}
> Spooling and chunking are deprecated, so the code shouldn't rely on them always 
> being used. Unfortunately, removing the offset != null check causes tests to 
> fail in QueryWithOffsetIT.java, because if we don't create chunked result 
> iterators, the server-side scanners are not advanced up to the offset.
> [~ankit.singhal] - can you please check?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2842) Queries with offset shouldn't be using Spooling and chunked result iterators

2016-04-18 Thread Samarth Jain (JIRA)
Samarth Jain created PHOENIX-2842:
-

 Summary: Queries with offset shouldn't be using Spooling and 
chunked result iterators
 Key: PHOENIX-2842
 URL: https://issues.apache.org/jira/browse/PHOENIX-2842
 Project: Phoenix
  Issue Type: Bug
Reporter: Samarth Jain
Assignee: Ankit Singhal


While reworking our serial queries for PHOENIX-2724, I noticed that queries 
with offset always use SpoolingResultIterator and ChunkedResultIterator when 
there is no non-row-key order by.

In ScanPlan.java:
{code}
private static ParallelIteratorFactory buildResultIteratorFactory(StatementContext context,
        FilterableStatement statement, TableRef table, OrderBy orderBy, Integer limit,
        Integer offset, boolean allowPageFilter) throws SQLException {

    if ((isSerial(context, statement, table, orderBy, limit, offset, allowPageFilter)
            || ScanUtil.isRoundRobinPossible(orderBy, context)
            || ScanUtil.isPacingScannersPossible(context))
            && offset == null) { return ParallelIteratorFactory.NOOP_FACTORY; }

{code}

Spooling and chunking are deprecated, so the code shouldn't rely on them always 
being used. Unfortunately, removing the offset != null check causes tests to 
fail in QueryWithOffsetIT.java, because if we don't create chunked result 
iterators, the server-side scanners are not advanced up to the offset.

[~ankit.singhal] - can you please check?
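To make the scenario concrete, here is a sketch of the kind of query affected, issued through JDBC (the table and column names are assumptions): a serial, row-key-ordered query with an OFFSET and no non-row-key ORDER BY, which under the current check never returns NOOP_FACTORY and therefore still goes through the deprecated spooling/chunked iterators.

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class OffsetQuerySketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement();
             // Ordered only by the row key, with LIMIT and OFFSET: eligible for the serial
             // path, but the offset currently forces the spooling/chunked iterator path.
             ResultSet rs = stmt.executeQuery(
                     "SELECT PK, V FROM T ORDER BY PK LIMIT 10 OFFSET 100")) {
            while (rs.next()) {
                System.out.println(rs.getString("PK") + " = " + rs.getString("V"));
            }
        }
    }
}
{code}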



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Phoenix for CDH5 compatibility

2016-04-18 Thread Swapna Swapna
Hi James,

I've downloaded phoenix-for-cloudera-4.6-HBase-1.0-cdh5.4.zip from the link
provided below, and I am using the branch "4.6-Hbase-1.0-cdh5.4":
https://github.com/chiastic-security/phoenix-for-cloudera

After copying phoenix-4.6.0-cdh5.4.5-server.jar to /usr/lib/hbase/lib, I am
still getting the exception below. This time it's a different error than when
using the Phoenix downloads.

Error: org.apache.phoenix.exception.PhoenixIOException:
org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.AbstractMethodError
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2065)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.AbstractMethodError
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2278)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2034)
    ... 4 more (state=08000,code=101)

org.apache.phoenix.exception.PhoenixIOException:
org.apache.phoenix.exception.PhoenixIOException:
org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.AbstractMethodError
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2065)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.AbstractMethodError
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2278)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2034)
    ... 4 more

On Sat, Apr 16, 2016 at 7:41 AM, James Taylor 
wrote:

> There's also the branch that Andrew setup here:
> https://github.com/chiastic-security/phoenix-for-cloudera
>
>
> On Saturday, April 16, 2016, rafa  wrote:
>
>> Hi Swapna,
>>
>> You can download the  official parcel from Cloudera, although it is not
>> the last phoenix version.
>>
>> http://archive.cloudera.com/cloudera-labs/phoenix/parcels/latest/
>>
>> If you want to use higher versions you'll have to compile them against
>> the cdh  libraries
>>
>> Regards,
>> Rafa
>> On 16/04/2016 12:17, "Swapna Swapna"  wrote:
>>
>>> Hi,
>>>
>>>
>>> I was able to use phoenix-4.6.0-HBase-1 on a standalone machine, but when
>>> I tried to use it with CDH5, it throws the exception below. After doing
>>> some research, I noticed this seems to be a common problem faced by many
>>> others as well.
>>>
>>> Caused by: java.lang.NoSuchMethodError:
>>> org.apache.hadoop.hbase.client.Scan.setRaw(Z)Lorg/apache/hadoop/hbase/client/Scan;
>>>
>>> at
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildDeletedTable(MetaDataEndpointImpl.java:973)
>>>
>>> at
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.loadTable(MetaDataEndpointImpl.java:1049)
>>>
>>> at
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1223)
>>>
>>> at
>>> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1199)
>>>
>>> at
>>> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:216)
>>>
>>> at
>>> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:300)
>>>
>>> at
>>> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execService(ClientProtos.java:31913)
>>>
>>> at
>>> org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1605)
>>>
>>>
>>> Please suggest where I can download the Phoenix binary/source
>>> (for HBase 1.0) that is compatible with CDH5.
>>>
>>> Regards
>>>
>>> Swapna
>>>
>>>


[jira] [Resolved] (PHOENIX-2836) Add per-table metrics for memstore, storefile and region tablesize

2016-04-18 Thread Alicia Ying Shu (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alicia Ying Shu resolved PHOENIX-2836.
--
Resolution: Invalid

> Add per-table metrics for memstore, storefile and region tablesize 
> ---
>
> Key: PHOENIX-2836
> URL: https://issues.apache.org/jira/browse/PHOENIX-2836
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2839) support for hbase 1.2 codebase

2016-04-18 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15246475#comment-15246475
 ] 

James Taylor commented on PHOENIX-2839:
---

Will this cause a runtime exception if the server is HBase 1.1.x?

> support for hbase 1.2 codebase
> --
>
> Key: PHOENIX-2839
> URL: https://issues.apache.org/jira/browse/PHOENIX-2839
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Davide Gesino
>Assignee: churro morales
>Priority: Critical
> Attachments: PHOENIX-2839.patch
>
>
> It would be great if Phoenix supported the HBase 1.2 release at some point in 
> the future.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2839) support for hbase 1.2 codebase

2016-04-18 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated PHOENIX-2839:

Attachment: PHOENIX-2839.patch

This patch should only apply to the master branch. After talking to [~jamestaylor], 
we decided to have master support hbase-1.2, as there were some APIs introduced 
between 1.1 and 1.2 that we were relying on.

> support for hbase 1.2 codebase
> --
>
> Key: PHOENIX-2839
> URL: https://issues.apache.org/jira/browse/PHOENIX-2839
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Davide Gesino
>Assignee: churro morales
>Priority: Critical
> Attachments: PHOENIX-2839.patch
>
>
> It would be great if Phoenix supported the HBase 1.2 release at some point in 
> the future.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2743) HivePhoenixHandler for big-big join with predicate push down

2016-04-18 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15246447#comment-15246447
 ] 

Josh Elser commented on PHOENIX-2743:
-

Fantastic. I will do the same!

> HivePhoenixHandler for big-big join with predicate push down
> 
>
> Key: PHOENIX-2743
> URL: https://issues.apache.org/jira/browse/PHOENIX-2743
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.5.0, 4.6.0
> Environment: hive-1.2.1
>Reporter: JeongMin Ju
>  Labels: features, performance
> Attachments: PHOENIX-2743-1.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Phoenix supports hash join & sort-merge join, but big-big joins are not 
> processed well, so another method, like Hive's, is needed.
> I implemented a hive-phoenix-handler that can access Apache Phoenix tables on 
> HBase using HiveQL.
> hive-phoenix-handler is much faster than hive-hbase-handler because it applies 
> predicate push down.
> I am publishing the source code to GitHub for contribution and it will likely 
> be completed by next week.
> https://github.com/mini666/hive-phoenix-handler
> Please review my proposal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2743) HivePhoenixHandler for big-big join with predicate push down

2016-04-18 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15246440#comment-15246440
 ] 

Sergey Soldatov commented on PHOENIX-2743:
--

[~elserj] Pushed. I was trying to check that changing to 2.7.1 doesn't cause any 
other IT failures. It's actually still running, but there are no new failures so 
far. I will update if anything else fails.

> HivePhoenixHandler for big-big join with predicate push down
> 
>
> Key: PHOENIX-2743
> URL: https://issues.apache.org/jira/browse/PHOENIX-2743
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.5.0, 4.6.0
> Environment: hive-1.2.1
>Reporter: JeongMin Ju
>  Labels: features, performance
> Attachments: PHOENIX-2743-1.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Phoenix supports hash join & sort-merge join, but big-big joins are not 
> processed well, so another method, like Hive's, is needed.
> I implemented a hive-phoenix-handler that can access Apache Phoenix tables on 
> HBase using HiveQL.
> hive-phoenix-handler is much faster than hive-hbase-handler because it applies 
> predicate push down.
> I am publishing the source code to GitHub for contribution and it will likely 
> be completed by next week.
> https://github.com/mini666/hive-phoenix-handler
> Please review my proposal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2743) HivePhoenixHandler for big-big join with predicate push down

2016-04-18 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15246404#comment-15246404
 ] 

Josh Elser commented on PHOENIX-2743:
-

bq. Got it. As for the exception - it turns out that hadoop-auth 2.5.1 comes with 
hbase. Will fix those shortly.

Just so it's not silent: "ok". Just ping me when you get this updated -- I 
checked GH and didn't see any new commits. If it's helpful at all, I was using 
`mvn dependency:tree` on the command line to track this down myself.

> HivePhoenixHandler for big-big join with predicate push down
> 
>
> Key: PHOENIX-2743
> URL: https://issues.apache.org/jira/browse/PHOENIX-2743
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.5.0, 4.6.0
> Environment: hive-1.2.1
>Reporter: JeongMin Ju
>  Labels: features, performance
> Attachments: PHOENIX-2743-1.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Phoenix supports hash join & sort-merge join, but big-big joins are not 
> processed well, so another method, like Hive's, is needed.
> I implemented a hive-phoenix-handler that can access Apache Phoenix tables on 
> HBase using HiveQL.
> hive-phoenix-handler is much faster than hive-hbase-handler because it applies 
> predicate push down.
> I am publishing the source code to GitHub for contribution and it will likely 
> be completed by next week.
> https://github.com/mini666/hive-phoenix-handler
> Please review my proposal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2841) UDF not working with sort merge join

2016-04-18 Thread Maryann Xue (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15246391#comment-15246391
 ] 

Maryann Xue commented on PHOENIX-2841:
--

Yes, [~ankit.singhal], I think that's correct. Thanks a lot for fixing the 
issue!

> UDF not working with sort merge join
> 
>
> Key: PHOENIX-2841
> URL: https://issues.apache.org/jira/browse/PHOENIX-2841
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Attachments: PHOENIX-2841.patch
>
>
> User reported the following issue on the community portal.
> https://community.hortonworks.com/questions/26693/orgapachephoenixschemafunctionnotfoundexception-er.html
> UDFs are not getting resolved whenever sort merge join is used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (PHOENIX-2839) support for hbase 1.2 codebase

2016-04-18 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales reassigned PHOENIX-2839:
---

Assignee: churro morales

> support for hbase 1.2 codebase
> --
>
> Key: PHOENIX-2839
> URL: https://issues.apache.org/jira/browse/PHOENIX-2839
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Davide Gesino
>Assignee: churro morales
>Priority: Critical
>
> It would be great if Phoenix supported the HBase 1.2 release at some point in 
> the future.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1701) Adapt guidepost selection at compile time

2016-04-18 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15246264#comment-15246264
 ] 

James Taylor commented on PHOENIX-1701:
---

Food for thought, [~churromorales].

> Adapt guidepost selection at compile time
> -
>
> Key: PHOENIX-1701
> URL: https://issues.apache.org/jira/browse/PHOENIX-1701
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>
> Currently we tweak the guidepost width for the partition size we want - for 
> example it just changed from 100 MB to 300 MB because FAST_DIFF is used by 
> default.
> Instead it might be better to collect more guideposts (maybe even as fine-grained 
> as every 10 MB) and then combine them at compile time into larger chunks.
> If we store them correctly, adjacent guideposts would be stored in order in the 
> stats table, and hence we would scan that table until we have the size we want 
> (in terms of chunk size).
> The more information we have, the better: we can combine smaller guideposts, 
> but we cannot split larger ones because we lack the information.
> Just filing as a brainstorming issue for debate.
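To make the combining step concrete, here is a small illustrative sketch (not Phoenix's actual stats API; the names and the fixed per-guidepost size are assumptions) of merging fine-grained guideposts into compile-time chunks of roughly the desired size.

{code}
import java.util.ArrayList;
import java.util.List;

public class GuidepostCombiner {
    /**
     * Combines adjacent fine-grained guideposts (e.g. one every 10 MB) into larger
     * compile-time chunk boundaries of roughly targetChunkBytes (e.g. 300 MB).
     * Guideposts are assumed to be row keys stored in ascending row-key order.
     */
    static List<byte[]> combine(List<byte[]> guideposts, long bytesPerGuidepost, long targetChunkBytes) {
        List<byte[]> chunkBoundaries = new ArrayList<>();
        long accumulated = 0;
        for (byte[] guidepost : guideposts) {
            accumulated += bytesPerGuidepost;
            if (accumulated >= targetChunkBytes) { // enough data covered: emit a chunk boundary
                chunkBoundaries.add(guidepost);
                accumulated = 0;
            }
        }
        return chunkBoundaries; // e.g. 10 MB guideposts with a 300 MB target keeps every 30th guidepost
    }
}
{code}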



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2841) UDF not working with sort merge join

2016-04-18 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-2841:
---
Attachment: PHOENIX-2841.patch

PFA, a patch to fix the problem.
[~maryannxue], could you take a look and confirm whether this is the right way to 
fix it?
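Judging from the line-length warnings in the QA report earlier in this digest, the patch appears to pass the statement's UDF parse nodes when building the column resolver for the projected table in the sort-merge-join path. A sketch of that call, with the surrounding variable names taken from (or assumed to match) that snippet:

{code}
// Sketch only: building the resolver for the projected table with the UDF parse
// nodes from the statement, so UDFs used in the join can be resolved later.
ColumnResolver resolver = FromCompiler.getResolverForProjectedTable(
        projectedTable,
        context.getConnection(),
        joinTable.getStatement().getUdfParseNodes());
{code}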

> UDF not working with sort merge join
> ---
>
> Key: PHOENIX-2841
> URL: https://issues.apache.org/jira/browse/PHOENIX-2841
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Attachments: PHOENIX-2841.patch
>
>
> User reported the following issue on the community portal.
> https://community.hortonworks.com/questions/26693/orgapachephoenixschemafunctionnotfoundexception-er.html
> UDFs are not getting resolved whenever sort merge join is used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2841) UDF not working with sort merge join

2016-04-18 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-2841:
--

 Summary: UDF not working with sort merge join
 Key: PHOENIX-2841
 URL: https://issues.apache.org/jira/browse/PHOENIX-2841
 Project: Phoenix
  Issue Type: Bug
Reporter: Ankit Singhal
Assignee: Ankit Singhal


User reported the following issue on the community portal.
https://community.hortonworks.com/questions/26693/orgapachephoenixschemafunctionnotfoundexception-er.html

UDFs are not getting resolved whenever sort merge join is used.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2743) HivePhoenixHandler for big-big join with predicate push down

2016-04-18 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15246155#comment-15246155
 ] 

Josh Elser commented on PHOENIX-2743:
-

bq. This error usually happens when the hadoop-auth version is different from 
hadoop-common

Yeah, right you are again. The hbase dependencies are still pulling in hadoop-2.5.1.

> HivePhoenixHandler for big-big join with predicate push down
> 
>
> Key: PHOENIX-2743
> URL: https://issues.apache.org/jira/browse/PHOENIX-2743
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.5.0, 4.6.0
> Environment: hive-1.2.1
>Reporter: JeongMin Ju
>  Labels: features, performance
> Attachments: PHOENIX-2743-1.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Phoenix supports hash join & sort-merge join, but big-big joins are not 
> processed well, so another method, like Hive's, is needed.
> I implemented a hive-phoenix-handler that can access Apache Phoenix tables on 
> HBase using HiveQL.
> hive-phoenix-handler is much faster than hive-hbase-handler because it applies 
> predicate push down.
> I am publishing the source code to GitHub for contribution and it will likely 
> be completed by next week.
> https://github.com/mini666/hive-phoenix-handler
> Please review my proposal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2743) HivePhoenixHandler for big-big join with predicate push down

2016-04-18 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15246123#comment-15246123
 ] 

Sergey Soldatov commented on PHOENIX-2743:
--

Interesting. This error usually happens when the hadoop-auth version is different 
from hadoop-common. For the last changes I ran the IT tests in the IDE; let me 
rerun them from Maven.

> HivePhoenixHandler for big-big join with predicate push down
> 
>
> Key: PHOENIX-2743
> URL: https://issues.apache.org/jira/browse/PHOENIX-2743
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.5.0, 4.6.0
> Environment: hive-1.2.1
>Reporter: JeongMin Ju
>  Labels: features, performance
> Attachments: PHOENIX-2743-1.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Phoenix supports hash join & sort-merge join, but big-big joins are not 
> processed well, so another method, like Hive's, is needed.
> I implemented a hive-phoenix-handler that can access Apache Phoenix tables on 
> HBase using HiveQL.
> hive-phoenix-handler is much faster than hive-hbase-handler because it applies 
> predicate push down.
> I am publishing the source code to GitHub for contribution and it will likely 
> be completed by next week.
> https://github.com/mini666/hive-phoenix-handler
> Please review my proposal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2743) HivePhoenixHandler for big-big join with predicate push down

2016-04-18 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15246149#comment-15246149
 ] 

Sergey Soldatov commented on PHOENIX-2743:
--

Got it. As for the exception - it turns out that hadoop-auth 2.5.1 comes with 
hbase. Will fix those shortly.

> HivePhoenixHandler for big-big join with predicate push down
> 
>
> Key: PHOENIX-2743
> URL: https://issues.apache.org/jira/browse/PHOENIX-2743
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.5.0, 4.6.0
> Environment: hive-1.2.1
>Reporter: JeongMin Ju
>  Labels: features, performance
> Attachments: PHOENIX-2743-1.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Phoenix supports hash join & sort-merge join, but big-big joins are not 
> processed well, so another method, like Hive's, is needed.
> I implemented a hive-phoenix-handler that can access Apache Phoenix tables on 
> HBase using HiveQL.
> hive-phoenix-handler is much faster than hive-hbase-handler because it applies 
> predicate push down.
> I am publishing the source code to GitHub for contribution and it will likely 
> be completed by next week.
> https://github.com/mini666/hive-phoenix-handler
> Please review my proposal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2743) HivePhoenixHandler for big-big join with predicate push down

2016-04-18 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15246124#comment-15246124
 ] 

Josh Elser commented on PHOENIX-2743:
-

{quote}
bq. I would assume that it's not necessary for any of them, but let me know 
if I'm wrong

Actually I added that only for the packages that are not in the dependencies in 
the root pom.xml. I'm not sure whether it's good to include them there, since they 
are used only in this package and the module has its own artifact. If we want to 
make it part of phoenix-client.jar, those dependencies can be moved to the root 
pom.xml and the version specified there.
{quote}

Ah! I'm with you now. Convention states that we should have all dependencies 
declared in dependencyManagement and just refer to dependencies by 
groupId:artifactId in sub-modules. I will clean that up too (super easy to 
change). Thanks.

> HivePhoenixHandler for big-big join with predicate push down
> 
>
> Key: PHOENIX-2743
> URL: https://issues.apache.org/jira/browse/PHOENIX-2743
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.5.0, 4.6.0
> Environment: hive-1.2.1
>Reporter: JeongMin Ju
>  Labels: features, performance
> Attachments: PHOENIX-2743-1.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Phoenix supports hash join & sort-merge join, but big-big joins are not 
> processed well, so another method, like Hive's, is needed.
> I implemented a hive-phoenix-handler that can access Apache Phoenix tables on 
> HBase using HiveQL.
> hive-phoenix-handler is much faster than hive-hbase-handler because it applies 
> predicate push down.
> I am publishing the source code to GitHub for contribution and it will likely 
> be completed by next week.
> https://github.com/mini666/hive-phoenix-handler
> Please review my proposal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2840) Fix flapping MemoryManagerTest.testWaitForMemoryAvailable unit test

2016-04-18 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15246078#comment-15246078
 ] 

James Taylor commented on PHOENIX-2840:
---

[~samarthjain] - any thoughts?

> Fix flapping MemoryManagerTest.testWaitForMemoryAvailable unit test
> ---
>
> Key: PHOENIX-2840
> URL: https://issues.apache.org/jira/browse/PHOENIX-2840
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: churro morales
>
> Looks like MemoryManagerTest.testWaitForMemoryAvailable is flapping.
> {code}
> https://builds.apache.org/job/Phoenix-4.x-HBase-1.0/443/testReport/junit/org.apache.phoenix.memory/MemoryManagerTest/testWaitForMemoryAvailable/
> {code}
> I wonder if perhaps we should change reuseForks to false for our fast unit 
> tests here in the pom.xml:
> {code}
> <plugin>
>   <groupId>org.apache.maven.plugins</groupId>
>   <artifactId>maven-surefire-plugin</artifactId>
>   <version>${maven-surefire-plugin.version}</version>
>   <configuration>
>     <forkCount>${numForkedUT}</forkCount>
>     <reuseForks>true</reuseForks>
>     <argLine>-enableassertions -Xmx2250m -XX:MaxPermSize=128m
>       -Djava.security.egd=file:/dev/./urandom
>       "-Djava.library.path=${hadoop.library.path}${path.separator}${java.library.path}"</argLine>
>     <redirectTestOutputToFile>${test.output.tofile}</redirectTestOutputToFile>
>     <shutdown>kill</shutdown>
>   </configuration>
> </plugin>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2840) Fix flapping MemoryManagerTest.testWaitForMemoryAvailable unit test

2016-04-18 Thread James Taylor (JIRA)
James Taylor created PHOENIX-2840:
-

 Summary: Fix flapping MemoryManagerTest.testWaitForMemoryAvailable 
unit test
 Key: PHOENIX-2840
 URL: https://issues.apache.org/jira/browse/PHOENIX-2840
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: churro morales


Looks like MemoryManagerTest.testWaitForMemoryAvailable is flapping.
{code}
https://builds.apache.org/job/Phoenix-4.x-HBase-1.0/443/testReport/junit/org.apache.phoenix.memory/MemoryManagerTest/testWaitForMemoryAvailable/
{code}

I wonder if perhaps we should change reuseForks to false for our fast unit 
tests here in the pom.xml:
{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>${maven-surefire-plugin.version}</version>
  <configuration>
    <forkCount>${numForkedUT}</forkCount>
    <reuseForks>true</reuseForks>
    <argLine>-enableassertions -Xmx2250m -XX:MaxPermSize=128m
      -Djava.security.egd=file:/dev/./urandom
      "-Djava.library.path=${hadoop.library.path}${path.separator}${java.library.path}"</argLine>
    <redirectTestOutputToFile>${test.output.tofile}</redirectTestOutputToFile>
    <shutdown>kill</shutdown>
  </configuration>
</plugin>
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (PHOENIX-1311) HBase namespaces surfaced in phoenix

2016-04-18 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reopened PHOENIX-1311:
---

Looks like some test failures perhaps related to your check-in. Please 
investigate:
{code}
https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1120/
{code}

> HBase namespaces surfaced in phoenix
> 
>
> Key: PHOENIX-1311
> URL: https://issues.apache.org/jira/browse/PHOENIX-1311
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: nicolas maillard
>Assignee: Ankit Singhal
>Priority: Minor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-1311.docx, PHOENIX-1311_v1.patch, 
> PHOENIX-1311_v2.patch, PHOENIX-1311_v3_rebased.patch, 
> PHOENIX-1311_v3_rebased_0.98.patch, PHOENIX-1311_v3_rebased_1.0.patch, 
> PHOENIX-1311_wip.patch, PHOENIX-1311_wip_2.patch
>
>
> HBase (HBASE-8015) has the concept of namespaces in the form of 
> myNamespace:MyTable. It would be great if Phoenix leveraged this feature to 
> provide a database-like feature on top of tables.
> Maybe to stay close to HBase it could also be a create DB:Table...
> or DB.Table, which is a more standard notation?
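For illustration, a hedged sketch of how the feature is expected to look from JDBC once namespace mapping is enabled (the phoenix.schema.isNamespaceMappingEnabled property and the CREATE SCHEMA / USE syntax as discussed for 4.8; exact details may differ, and the table and column names are assumptions).

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.Properties;

public class NamespaceMappingSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // With namespace mapping enabled, a Phoenix schema maps to an HBase namespace,
        // so MY_SCHEMA.MY_TABLE is stored as the HBase table MY_SCHEMA:MY_TABLE.
        // The same property must also be enabled on the server side.
        props.setProperty("phoenix.schema.isNamespaceMappingEnabled", "true");
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost", props);
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE SCHEMA IF NOT EXISTS MY_SCHEMA");
            stmt.execute("USE MY_SCHEMA");
            stmt.execute("CREATE TABLE MY_TABLE (PK VARCHAR PRIMARY KEY, V VARCHAR)");
        }
    }
}
{code}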



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2743) HivePhoenixHandler for big-big join with predicate push down

2016-04-18 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15246084#comment-15246084
 ] 

Josh Elser commented on PHOENIX-2743:
-

{noformat}
java.lang.RuntimeException: java.lang.NoSuchMethodError: 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.constructSecretProvider(Ljavax/servlet/ServletContext;Ljava/util/Properties;Z)Lorg/apache/hadoop/security/authentication/util/SignerSecretProvider;
Caused by: java.lang.NoSuchMethodError: 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.constructSecretProvider(Ljavax/servlet/ServletContext;Ljava/util/Properties;Z)Lorg/apache/hadoop/security/authentication/util/SignerSecretProvider;
{noformat}

Did you have all of the tests successfully running, [~sergey.soldatov]? It 
seems like there might be some lingering Hadoop version change issues. I'll 
look into it, but I just wanted to let you know :)

> HivePhoenixHandler for big-big join with predicate push down
> 
>
> Key: PHOENIX-2743
> URL: https://issues.apache.org/jira/browse/PHOENIX-2743
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.5.0, 4.6.0
> Environment: hive-1.2.1
>Reporter: JeongMin Ju
>  Labels: features, performance
> Attachments: PHOENIX-2743-1.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Phoenix supports hash join & sort-merge join, but big-big joins are not 
> processed well, so another method, like Hive's, is needed.
> I implemented a hive-phoenix-handler that can access Apache Phoenix tables on 
> HBase using HiveQL.
> hive-phoenix-handler is much faster than hive-hbase-handler because it applies 
> predicate push down.
> I am publishing the source code to GitHub for contribution and it will likely 
> be completed by next week.
> https://github.com/mini666/hive-phoenix-handler
> Please review my proposal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2840) Fix flapping MemoryManagerTest.testWaitForMemoryAvailable unit test

2016-04-18 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2840:
--
Fix Version/s: 4.8.0

> Fix flapping MemoryManagerTest.testWaitForMemoryAvailable unit test
> ---
>
> Key: PHOENIX-2840
> URL: https://issues.apache.org/jira/browse/PHOENIX-2840
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: churro morales
> Fix For: 4.8.0
>
>
> Looks like MemoryManagerTest.testWaitForMemoryAvailable is flapping.
> {code}
> https://builds.apache.org/job/Phoenix-4.x-HBase-1.0/443/testReport/junit/org.apache.phoenix.memory/MemoryManagerTest/testWaitForMemoryAvailable/
> {code}
> I wonder if perhaps we should change reuseForks to false for our fast unit 
> tests here in the pom.xml:
> {code}
> <plugin>
>   <groupId>org.apache.maven.plugins</groupId>
>   <artifactId>maven-surefire-plugin</artifactId>
>   <version>${maven-surefire-plugin.version}</version>
>   <configuration>
>     <forkCount>${numForkedUT}</forkCount>
>     <reuseForks>true</reuseForks>
>     <argLine>-enableassertions -Xmx2250m -XX:MaxPermSize=128m
>       -Djava.security.egd=file:/dev/./urandom
>       "-Djava.library.path=${hadoop.library.path}${path.separator}${java.library.path}"</argLine>
>     <redirectTestOutputToFile>${test.output.tofile}</redirectTestOutputToFile>
>     <shutdown>kill</shutdown>
>   </configuration>
> </plugin>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2743) HivePhoenixHandler for big-big join with predicate push down

2016-04-18 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15246077#comment-15246077
 ] 

Sergey Soldatov commented on PHOENIX-2743:
--

{quote}
. I would assume that it's not necessary for any of them, but let me know if 
I'm wrong
{quote}
Actually I added that only for the packages that are not in the dependencies 
in the root {{pom.xml}}. I'm not sure whether it's good to include them there, 
since they are used only in this package and the module has its own artifact. If 
we want to make it part of {{phoenix-client.jar}}, those dependencies can be 
moved to the root {{pom.xml}} and the version specified there.



> HivePhoenixHandler for big-big join with predicate push down
> 
>
> Key: PHOENIX-2743
> URL: https://issues.apache.org/jira/browse/PHOENIX-2743
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.5.0, 4.6.0
> Environment: hive-1.2.1
>Reporter: JeongMin Ju
>  Labels: features, performance
> Attachments: PHOENIX-2743-1.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Phoenix supports hash join & sort-merge join, but big-big joins are not 
> processed well, so another method, like Hive's, is needed.
> I implemented a hive-phoenix-handler that can access Apache Phoenix tables on 
> HBase using HiveQL.
> hive-phoenix-handler is much faster than hive-hbase-handler because it applies 
> predicate push down.
> I am publishing the source code to GitHub for contribution and it will likely 
> be completed by next week.
> https://github.com/mini666/hive-phoenix-handler
> Please review my proposal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2628) Ensure split when iterating through results handled correctly

2016-04-18 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15246056#comment-15246056
 ] 

James Taylor commented on PHOENIX-2628:
---

[~rajeshbabu] - looks like you broke a number of unit tests. Please fix:
{code}
Test Result (22 failures / +21)
org.apache.phoenix.end2end.QueryDatabaseMetaDataIT.testSchemaMetadataScan
org.apache.phoenix.end2end.SaltedViewIT.testSaltedUpdatableViewWithLocalIndex[transactional
 = false]
org.apache.phoenix.end2end.SaltedViewIT.testSaltedUpdatableViewWithLocalIndex[transactional
 = true]
org.apache.phoenix.end2end.SubqueryIT.testAnyAllComparisonSubquery[0]
org.apache.phoenix.end2end.SubqueryIT.testAnyAllComparisonSubquery[1]
org.apache.phoenix.end2end.SubqueryIT.testAnyAllComparisonSubquery[2]
org.apache.phoenix.end2end.SubqueryUsingSortMergeJoinIT.testAnyAllComparisonSubquery[0]
org.apache.phoenix.end2end.SubqueryUsingSortMergeJoinIT.testAnyAllComparisonSubquery[1]
org.apache.phoenix.end2end.SubqueryUsingSortMergeJoinIT.testAnyAllComparisonSubquery[2]
org.apache.phoenix.end2end.ViewIT.testNonSaltedUpdatableViewWithLocalIndex[transactional
 = false]
org.apache.phoenix.end2end.ViewIT.testNonSaltedUpdatableViewWithLocalIndex[transactional
 = true]
org.apache.phoenix.end2end.index.LocalIndexIT.testLocalIndexScanAfterRegionSplit[isNamespaceMapped
 = false]
org.apache.phoenix.end2end.index.MutableIndexIT.testSplitDuringIndexScan[localIndex
 = true , transactional = false]
org.apache.phoenix.end2end.index.MutableIndexIT.testSplitDuringIndexReverseScan[localIndex
 = true , transactional = false]
org.apache.phoenix.end2end.index.MutableIndexIT.testCoveredColumns[localIndex = 
true , transactional = false]
org.apache.phoenix.end2end.index.MutableIndexIT.testCoveredColumnUpdates[localIndex
 = true , transactional = false]
org.apache.phoenix.end2end.index.MutableIndexIT.testIndexHalfStoreFileReader[localIndex
 = true , transactional = false]
org.apache.phoenix.end2end.index.MutableIndexIT.testSplitDuringIndexScan[localIndex
 = true , transactional = true]
org.apache.phoenix.end2end.index.MutableIndexIT.testSplitDuringIndexReverseScan[localIndex
 = true , transactional = true]
org.apache.phoenix.end2end.index.MutableIndexIT.testCoveredColumns[localIndex = 
true , transactional = true]
org.apache.phoenix.end2end.index.MutableIndexIT.testCoveredColumnUpdates[localIndex
 = true , transactional = true]
org.apache.phoenix.end2end.index.MutableIndexIT.testIndexHalfStoreFileReader[localIndex
 = true , transactional = true]
{code}

> Ensure split when iterating through results handled correctly
> -
>
> Key: PHOENIX-2628
> URL: https://issues.apache.org/jira/browse/PHOENIX-2628
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2628-wip.patch, PHOENIX-2628.patch, 
> PHOENIX-2628_v7.patch, PHOENIX-2628_v8.patch, PHOENIX-2628_v9.patch
>
>
> We should start with a test case to ensure this works correctly, both for 
> scans and aggregates.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2839) support for hbase 1.2 codebase

2016-04-18 Thread Davide Gesino (JIRA)
Davide Gesino created PHOENIX-2839:
--

 Summary: support for hbase 1.2 codebase
 Key: PHOENIX-2839
 URL: https://issues.apache.org/jira/browse/PHOENIX-2839
 Project: Phoenix
  Issue Type: Improvement
Reporter: Davide Gesino
Priority: Critical


It would be great if Phoenix supported the HBase 1.2 release at some point in 
the future.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2743) HivePhoenixHandler for big-big join with predicate push down

2016-04-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15245935#comment-15245935
 ] 

ASF GitHub Bot commented on PHOENIX-2743:
-

Github user joshelser commented on the pull request:

https://github.com/apache/phoenix/pull/155#issuecomment-211454505
  
@ss77892 one more nit-pick, it seems like you removed the version for some 
of the entries in the pom but then added `${hadoop-two.version}` for others. I 
would assume that it's not necessary for any of them, but let me know if I'm 
wrong. I can also just fix this while merging it in as long as @JamesRTaylor 
thinks this is good to go too :)


> HivePhoenixHandler for big-big join with predicate push down
> 
>
> Key: PHOENIX-2743
> URL: https://issues.apache.org/jira/browse/PHOENIX-2743
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.5.0, 4.6.0
> Environment: hive-1.2.1
>Reporter: JeongMin Ju
>  Labels: features, performance
> Attachments: PHOENIX-2743-1.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Phoenix supports hash join & sort-merge join, but big-big joins are not 
> processed well, so another method, like Hive's, is needed.
> I implemented a hive-phoenix-handler that can access Apache Phoenix tables on 
> HBase using HiveQL.
> hive-phoenix-handler is much faster than hive-hbase-handler because it applies 
> predicate push down.
> I am publishing the source code to GitHub for contribution and it will likely 
> be completed by next week.
> https://github.com/mini666/hive-phoenix-handler
> Please review my proposal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-2743 Hive Storage support

2016-04-18 Thread joshelser
Github user joshelser commented on the pull request:

https://github.com/apache/phoenix/pull/155#issuecomment-211454505
  
@ss77892 one more nit-pick, it seems like you removed the version for some 
of the entries in the pom but then added `${hadoop-two.version}` for others. I 
would assume that it's not necessary for any of them, but let me know if I'm 
wrong. I can also just fix this while merging it in as long as @JamesRTaylor 
thinks this is good to go too :)


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-2743) HivePhoenixHandler for big-big join with predicate push down

2016-04-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15245945#comment-15245945
 ] 

ASF GitHub Bot commented on PHOENIX-2743:
-

Github user JamesRTaylor commented on the pull request:

https://github.com/apache/phoenix/pull/155#issuecomment-211456028
  
It's in your most capable hands, @joshelser & @SergeySoldatov. Thanks so 
much for whipping this into shape!


> HivePhoenixHandler for big-big join with predicate push down
> 
>
> Key: PHOENIX-2743
> URL: https://issues.apache.org/jira/browse/PHOENIX-2743
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.5.0, 4.6.0
> Environment: hive-1.2.1
>Reporter: JeongMin Ju
>  Labels: features, performance
> Attachments: PHOENIX-2743-1.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Phoenix supports hash join & sort-merge join, but big-big joins are not 
> processed well, so another method, like Hive's, is needed.
> I implemented a hive-phoenix-handler that can access Apache Phoenix tables on 
> HBase using HiveQL.
> hive-phoenix-handler is much faster than hive-hbase-handler because it applies 
> predicate push down.
> I am publishing the source code to GitHub for contribution and it will likely 
> be completed by next week.
> https://github.com/mini666/hive-phoenix-handler
> Please review my proposal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-2743 Hive Storage support

2016-04-18 Thread JamesRTaylor
Github user JamesRTaylor commented on the pull request:

https://github.com/apache/phoenix/pull/155#issuecomment-211456028
  
It's in your most capable hands, @joshelser & @SergeySoldatov. Thanks so 
much for whipping this into shape!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (PHOENIX-2838) Support "case insensitive" identifier for drop schema and use schema constructs

2016-04-18 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-2838:
--

 Summary: Support "case insensitive" identifier for drop schema and 
use schema constructs
 Key: PHOENIX-2838
 URL: https://issues.apache.org/jira/browse/PHOENIX-2838
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Ankit Singhal
Assignee: Ankit Singhal






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2628) Ensure split when iterating through results handled correctly

2016-04-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15245672#comment-15245672
 ] 

Hudson commented on PHOENIX-2628:
-

FAILURE: Integrated in Phoenix-master #1194 (See 
[https://builds.apache.org/job/Phoenix-master/1194/])
PHOENIX-2628 Ensure split when iterating through results handled (rajeshbabu: 
rev a31b70179cbc468cc3fe890bd615fc71e0bac5a2)
* 
phoenix-core/src/main/java/org/apache/phoenix/iterate/TableResultIteratorFactory.java
* phoenix-core/src/main/java/org/apache/phoenix/execute/ScanPlan.java
* 
phoenix-core/src/it/java/org/apache/phoenix/iterate/MockParallelIteratorFactory.java
* phoenix-core/src/main/java/org/apache/phoenix/execute/ClientAggregatePlan.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
* phoenix-core/src/main/java/org/apache/phoenix/execute/AggregatePlan.java
* phoenix-core/src/main/java/org/apache/phoenix/execute/UnnestArrayPlan.java
* 
phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/LocalIndexStoreFileScanner.java
* 
phoenix-core/src/test/java/org/apache/phoenix/query/ParallelIteratorsSplitTest.java
* 
phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
* phoenix-core/src/main/java/org/apache/phoenix/iterate/SerialIterators.java
* phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
* phoenix-core/src/main/java/org/apache/phoenix/execute/BaseQueryPlan.java
* phoenix-core/src/main/java/org/apache/phoenix/compile/QueryPlan.java
* phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
* phoenix-core/src/main/java/org/apache/phoenix/execute/ClientScanPlan.java
* 
phoenix-core/src/main/java/org/apache/phoenix/iterate/DefaultTableResultIteratorFactory.java
* 
phoenix-core/src/it/java/org/apache/phoenix/iterate/DelayedTableResultIteratorFactory.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
* phoenix-core/src/main/java/org/apache/phoenix/execute/CorrelatePlan.java
* 
phoenix-core/src/main/java/org/apache/phoenix/iterate/ParallelIteratorFactory.java
* phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java
* phoenix-core/src/main/java/org/apache/phoenix/util/ScanUtil.java
* phoenix-core/src/main/java/org/apache/phoenix/compile/TraceQueryPlan.java
* phoenix-core/src/main/java/org/apache/phoenix/iterate/TableResultIterator.java
* 
phoenix-core/src/main/java/org/apache/phoenix/iterate/ChunkedResultIterator.java
* 
phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixRecordReader.java
* 
phoenix-core/src/main/java/org/apache/phoenix/iterate/SpoolingResultIterator.java
* phoenix-core/src/main/java/org/apache/phoenix/execute/SortMergeJoinPlan.java
* 
phoenix-core/src/main/java/org/apache/phoenix/compile/MutatingParallelIteratorFactory.java
* phoenix-core/src/main/java/org/apache/phoenix/execute/HashJoinPlan.java
* phoenix-core/src/main/java/org/apache/phoenix/execute/TupleProjectionPlan.java
* phoenix-core/src/main/java/org/apache/phoenix/execute/DegenerateQueryPlan.java
* phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
* 
phoenix-core/src/main/java/org/apache/phoenix/execute/LiteralResultIterationPlan.java
* phoenix-core/src/main/java/org/apache/phoenix/iterate/ParallelIterators.java
* phoenix-core/src/main/java/org/apache/phoenix/compile/ListJarsQueryPlan.java
* phoenix-core/src/main/java/org/apache/phoenix/execute/UnionPlan.java
* 
phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/LocalIndexSplitter.java
* 
phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java


> Ensure split when iterating through results handled correctly
> -
>
> Key: PHOENIX-2628
> URL: https://issues.apache.org/jira/browse/PHOENIX-2628
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2628-wip.patch, PHOENIX-2628.patch, 
> PHOENIX-2628_v7.patch, PHOENIX-2628_v8.patch, PHOENIX-2628_v9.patch
>
>
> We should start with a test case to ensure this works correctly, both for 
> scans and aggregates.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2628) Ensure split when iterating through results handled correctly

2016-04-18 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-2628:
-
Attachment: PHOENIX-2628_v9.patch

Here is the rebased patch after handling the review comments from the pull request; 
it also adds some fixes in LocalIndexStoreFileScanner. Going to commit it.

> Ensure split when iterating through results handled correctly
> -
>
> Key: PHOENIX-2628
> URL: https://issues.apache.org/jira/browse/PHOENIX-2628
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2628-wip.patch, PHOENIX-2628.patch, 
> PHOENIX-2628_v7.patch, PHOENIX-2628_v8.patch, PHOENIX-2628_v9.patch
>
>
> We should start with a test case to ensure this works correctly, both for 
> scans and aggregates.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Phoenix for CDH5 compatibility

2016-04-18 Thread Swapna Swapna
Add the phoenix-[version]-server.jar to the classpath of every HBase region
server and the master, and remove any previous version. An easy way to do this
is to copy it into the HBase lib directory.

On Sun, Apr 17, 2016 at 1:20 AM, Viswanathan J 
wrote:

> What are the required Phoenix jars to be copied into the HBase libs?
> On Apr 17, 2016 1:17 PM, "Swapna Swapna"  wrote:
>
>> Thank you Rafa and James. That helps.
>>
>> Regards
>> Swapna
>>
>> On Sat, Apr 16, 2016 at 7:41 AM, James Taylor 
>> wrote:
>>
>>> There's also the branch that Andrew setup here:
>>> https://github.com/chiastic-security/phoenix-for-cloudera
>>>
>>>
>>> On Saturday, April 16, 2016, rafa  wrote:
>>>
 Hi Swapna,

 You can download the  official parcel from Cloudera, although it is not
 the last phoenix version.

 http://archive.cloudera.com/cloudera-labs/phoenix/parcels/latest/

 If you want to use higher versions you'll have to compile them against
 the cdh  libraries

 Regards,
 Rafa
 On 16/04/2016 12:17, "Swapna Swapna"  wrote:

> Hi,
>
>
> I was able to use phoenix-4.6.0-HBase-1 on a standalone machine, but
> when I tried to use it with CDH5, it throws the exception below. After
> doing some research, I noticed this seems to be a common problem faced by
> many others as well.
>
> Caused by: java.lang.NoSuchMethodError:
> org.apache.hadoop.hbase.client.Scan.setRaw(Z)Lorg/apache/hadoop/hbase/client/Scan;
>
> at
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildDeletedTable(MetaDataEndpointImpl.java:973)
>
> at
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.loadTable(MetaDataEndpointImpl.java:1049)
>
> at
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1223)
>
> at
> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1199)
>
> at
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:216)
>
> at
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:300)
>
> at
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execService(ClientProtos.java:31913)
>
> at
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1605)
>
>
> Please suggest where I can download the Phoenix binary/source
> (for HBase 1.0) that is compatible with CDH5.
>
> Regards
>
> Swapna
>
>
>>