Re: [ANNOUNCE] Phoenix 4.7 now supported in Amazon EMR

2016-06-09 Thread Heather, James (ELS)
That's really excellent news! Well done for persuading them! James On 10 Jun 2016, at 00:16, James Taylor <jamestay...@apache.org> wrote: Thanks to some great work over at Amazon, there's now support for Phoenix 4.7 on top of HBase 1.2 in Amazon EMR. Check it out and give it a spin. Det

Re: phoenix on non-apache hbase

2016-06-09 Thread Ankur Jain
I have updated my JIRA with new instructions: https://issues.apache.org/jira/browse/PHOENIX-2834. Please do let me know if you are able to build and use it with CDH 5.7. Thanks, Ankur Jain From: Andrew Purtell <andrew.purt...@gmail.com> Reply-To: "user@phoenix.apache.org
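For orientation, a build against a CDH HBase artifact might be invoked roughly as follows. This is a hypothetical sketch: the actual patch, repository declarations, and property names come from PHOENIX-2834, and the version values below are assumptions.

```shell
# Hypothetical: build Phoenix against CDH's HBase artifacts.
# The Cloudera Maven repository must be declared in the POM, and the
# exact property names depend on the patch attached to PHOENIX-2834.
mvn clean package -DskipTests \
    -Dhbase.version=1.2.0-cdh5.7.0 \
    -Dhadoop-two.version=2.6.0-cdh5.7.0
```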

Re: phoenix on non-apache hbase

2016-06-09 Thread Andrew Purtell
Yes, a stock client should work with a server modified for CDH, assuming both client and server versions are within the bounds specified by the backwards compatibility policy (https://phoenix.apache.org/upgrading.html): "Phoenix maintains backward compatibility across at least two minor releases to
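One way to read the policy quoted here ("at least two minor releases") is as a simple version check. The sketch below is purely illustrative; `compatible` is a hypothetical helper, not a Phoenix API.

```python
def compatible(client_version, server_version):
    """Illustrative reading of the compatibility policy: client and
    server share a major version and are within two minor releases
    of each other. Hypothetical helper, not part of Phoenix."""
    c_major, c_minor = client_version
    s_major, s_minor = server_version
    return c_major == s_major and abs(c_minor - s_minor) <= 2

# e.g. a 4.7 client against a 4.8 server
print(compatible((4, 7), (4, 8)))  # True
```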

Re: phoenix on non-apache hbase

2016-06-09 Thread Andrew Purtell
Pick the tree for your CDH 5.x version and it should all work. We are missing trees for X=6 and X=7 and I will aim to get to that soon. I did not test beyond ensuring all Phoenix unit and integration tests passed. > On Jun 9, 2016, at 7:55 PM, Benjamin Kim wrote: > > Andrew, > > Since we a

Re: phoenix on non-apache hbase

2016-06-09 Thread Koert Kuipers
Is the Phoenix client also affected by this? Or does the Phoenix server isolate the client? Is it reasonable to expect a "stock" Phoenix client to work against a custom Phoenix server for CDH 5.x (with, of course, the Phoenix client and server having the same Phoenix version)? On Thu, Jun 9, 2016 at 10:55 PM,

Re: phoenix on non-apache hbase

2016-06-09 Thread Benjamin Kim
Andrew, Since we are still on CDH 5.5.2, can I just use your custom version? Phoenix is one of the reasons that we are blocked from upgrading to CDH 5.7.1. Thus, CDH 5.7.1 is only on our test cluster. One of our developers wants to try out the Phoenix Spark plugin. Did you try it out in yours t

Re: phoenix on non-apache hbase

2016-06-09 Thread Andrew Purtell
> is cloudera's hbase 1.2.0-cdh5.7.0 that different from apache HBase 1.2.0? Yes. The Cloudera HBase in 5.6, 5.5, 5.4, ... is quite different from Apache HBase in its coprocessor and RPC internal extension APIs. We have made some ports of Apache Phoenix releases to CDH here: https://github.com

Re: phoenix on non-apache hbase

2016-06-09 Thread Benjamin Kim
This interests me too. I asked Cloudera in their community forums a while back but got no answer on this. I hope they don’t leave us out in the cold. I tried building it before, too, with the instructions here: https://issues.apache.org/jira/browse/PHOENIX-2834. I could get it to build, but I coul

Re: linkage error using Groovy

2016-06-09 Thread Josh Elser
FWIW, I've also reproduced this with Groovy 2.4.3, Oracle Java 1.7.0_79 and Apache Phoenix 4.8.0-SNAPSHOT locally. Will dig some more. Brian Jeltema wrote: Groovy 2.4.3 JDK 1.8 On Jun 8, 2016, at 11:26 AM, Josh Elser <josh.el...@gmail.com> wrote: Thanks for the info, Brian! What ver

Re: phoenix on non-apache hbase

2016-06-09 Thread Josh Elser
Koert, Apache Phoenix goes through a lot of work to provide multiple versions of Phoenix for various versions of Apache HBase (0.98, 1.1, and 1.2 presently). The builds for each of these branches are tested against those specific versions of HBase, so I doubt that there are issues between Apa

[ANNOUNCE] Phoenix 4.7 now supported in Amazon EMR

2016-06-09 Thread James Taylor
Thanks to some great work over at Amazon, there's now support for Phoenix 4.7 on top of HBase 1.2 in Amazon EMR. Check it out and give it a spin. Detailed step-by-step instructions available here: http://docs.aws.amazon.com/ElasticMapReduce/latest/ReleaseGuide/emr-phoenix.html Thanks, James
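The linked AWS guide has the authoritative step-by-step instructions; purely as a sketch, launching such a cluster from the CLI might look like the following. The release label, instance settings, and role flags here are assumptions — check the guide for current values.

```shell
# Hypothetical EMR cluster launch with Phoenix and HBase installed.
# Release label, instance type/count, and roles are placeholder
# assumptions; see the AWS release guide linked above.
aws emr create-cluster \
    --name "phoenix-test" \
    --release-label emr-4.7.0 \
    --applications Name=Phoenix Name=HBase \
    --instance-type m3.xlarge \
    --instance-count 3 \
    --use-default-roles
```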

Re: phoenix spark options not supporint query in dbtable

2016-06-09 Thread Josh Mahonin
They're effectively the same code paths. However, I'd recommend using the Data Frame API unless you have a specific need to pass in a custom Configuration object. The Data Frame API has bindings in Scala, Java and Python, so that's another advantage. The phoenix-spark docs have a PySpark example,
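For reference, a Data Source API load from PySpark looks roughly like this. It is a sketch based on the phoenix-spark documentation; the table name and ZooKeeper URL are placeholders, and it needs a Spark installation with the phoenix-spark jar on the classpath to actually run.

```python
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="phoenix-load")
sqlContext = SQLContext(sc)

# Load a Phoenix table as a DataFrame via the Data Source API.
# "TABLE1" and the zkUrl value are placeholders.
df = sqlContext.read \
    .format("org.apache.phoenix.spark") \
    .option("table", "TABLE1") \
    .option("zkUrl", "localhost:2181") \
    .load()
```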

RE: phoenix spark options not supporint query in dbtable

2016-06-09 Thread Long, Xindian
Hi, Josh: Thanks for the answer. Do you know the underlying difference between the following two ways of loading a DataFrame (using the Data Source API, or loading it directly using a Configuration object)? Is there a Java interface to use the functionality of phoenixTableAsDataFrame

Re: Table replication

2016-06-09 Thread James Taylor
Hi JM, Are you looking toward replication to support DR? If so, you can rely on HBase-level replication with a few gotchas and some operational hurdles: - When upgrading Phoenix versions, upgrade the server-side first for both the primary and secondary cluster. You can do a rolling upgrade and old

Re: Table replication

2016-06-09 Thread anil gupta
Hi Jean, Phoenix does not support replication at present. (It would be super awesome if it did!) So, if you want to replicate Phoenix tables, you will need to set up replication of all the underlying HBase tables for the corresponding Phoenix tables. I think you will need to replicate all the Pho
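Concretely, HBase-level replication means enabling replication on every table backing a Phoenix table, index tables included. A hypothetical hbase shell session might look like the following — the peer id, ZooKeeper quorum, table names, and column family name are all placeholders.

```shell
# Hypothetical: on the source cluster, register the secondary
# cluster as a replication peer and mark each backing table's
# column families for replication. All names are placeholders.
hbase shell <<'EOF'
add_peer '1', 'secondary-zk1,secondary-zk2,secondary-zk3:2181:/hbase'

# The Phoenix data table and each of its index tables must be
# marked for replication (Phoenix's default family is often '0',
# but verify with `describe` on your own tables):
alter 'MY_TABLE',     {NAME => '0', REPLICATION_SCOPE => 1}
alter 'MY_TABLE_IDX', {NAME => '0', REPLICATION_SCOPE => 1}
EOF
```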

Table replication

2016-06-09 Thread Jean-Marc Spaggiari
Hi, When Phoenix is used, what is the recommended way to do replication? Replication acts as a client on the 2nd cluster, so should we simply configure Phoenix on both clusters, so that on the destination it will take care of updating the index tables, etc.? Or should all the tables on the destination s

Re: phoenix spark options not supporint query in dbtable

2016-06-09 Thread Josh Mahonin
Hi Xindian, The phoenix-spark integration is based on the Phoenix MapReduce layer, which doesn't support aggregate functions. However, as you mentioned, both filtering and pruning predicates are pushed down to Phoenix. With an RDD or DataFrame loaded, all of Spark's various aggregation methods are
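To make the division of labor concrete: filter predicates are pushed down to Phoenix, while aggregation runs in Spark's engine. A PySpark sketch, where the table, zkUrl, and column names are placeholders and a live cluster is assumed:

```python
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="phoenix-agg")
sqlContext = SQLContext(sc)

# Load via the phoenix-spark Data Source API (placeholder names).
df = sqlContext.read.format("org.apache.phoenix.spark") \
    .option("table", "WEB_STAT") \
    .option("zkUrl", "localhost:2181") \
    .load()

# The filter predicate is pushed down to Phoenix as a scan filter...
active = df.filter(df.DOMAIN == "apache.org")

# ...but the aggregation itself is computed by Spark, since the
# underlying MapReduce layer does not support aggregate functions.
active.groupBy("HOST").count().show()
```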