Thanks for the info, Stephen. The more companies that let them know, the
more likely it is to happen. It would probably take just one company
switching to a different vendor's distro over this issue to push them over
the edge.

Cloudera has done a pretty good job of keeping the Labs version up to date,
though. They started with support for Phoenix 4.3 in CDH 5.3 almost 10
months ago and updated to Phoenix 4.5 in CDH 5.4 two months ago.
There's also the much-appreciated work that Andrew Purtell has done to
personally provide newer versions of Phoenix on top of CDH. Jean-Marc has
been helping out as well.

@Benjamin - I don't agree with your analysis:
> 1. No activity on a new port of Phoenix 4.6 or 4.7...
4.7 is not even released yet, and Cloudera put out an update just two
months ago. Hopefully the next update will be on 4.7 so it can pick up
transaction support and lots of bug fixes.
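
To give a flavor of that transaction support, here's a hedged sketch over
Phoenix's JDBC driver (the table name and ZK quorum are made up, and it
assumes transactions are enabled and the Tephra transaction manager is
running):

import java.sql.DriverManager

val conn = DriverManager.getConnection("jdbc:phoenix:zk1,zk2,zk3:2181")
val stmt = conn.createStatement()
// TRANSACTIONAL=true opts the table into transactional semantics
stmt.execute("CREATE TABLE IF NOT EXISTS TEST.TXN_DEMO (ID BIGINT PRIMARY KEY, VAL VARCHAR) TRANSACTIONAL=true")
conn.setAutoCommit(false) // group the upserts into a single transaction
stmt.execute("UPSERT INTO TEST.TXN_DEMO VALUES (1, 'a')")
stmt.execute("UPSERT INTO TEST.TXN_DEMO VALUES (2, 'b')")
conn.commit() // both rows become visible atomically
conn.close()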

> 2. In the Cloudera Community groups, I got no reply...
That's kind of what "no support" means, but come here and contribute your
own knowledge and patches - that's what an Apache community is all about.
It shouldn't be a one-way street.

> 3. In the Spark Users groups, there’s active discussion about the Spark
on HBase module
That's good - more tools in the toolbox. Will they support secondary
indexes, transactions, parallel scans, statistics collection, time-series
optimization, skip scans, the SQL data model, cost-based optimization,
multi-tenancy, etc.? For a taste of two of these, see the sketch below.
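
To make a couple of those concrete, here's a hedged sketch over Phoenix's
JDBC driver (the HOST and EVENT_TIME columns on TEST.MY_TEST are
hypothetical, as is the ZK quorum):

import java.sql.DriverManager

val conn = DriverManager.getConnection("jdbc:phoenix:zk1,zk2,zk3:2181")
val stmt = conn.createStatement()
// Secondary index: Phoenix keeps it in sync automatically on writes
stmt.execute("CREATE INDEX IF NOT EXISTS MY_TEST_TIME_IDX ON TEST.MY_TEST (EVENT_TIME)")
// Skip scan: an IN clause on a leading row-key column lets Phoenix jump
// between key ranges instead of scanning the whole table
val rs = stmt.executeQuery("SELECT * FROM TEST.MY_TEST WHERE HOST IN ('h1', 'h2')")
while (rs.next()) println(rs.getString(1))
conn.close()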

Phoenix is not competing with Spark or any Cloudera products either. It's
very complementary, IMHO.

Regards,
James


On Sun, Feb 21, 2016 at 1:37 PM, Stephen Wilcoxon <wilco...@gmail.com>
wrote:

> The company I work for is a major Cloudera customer and we told Cloudera
> we were interested in Phoenix becoming an official release (and
> up-to-date). Whether that's enough to make it happen, I have no idea...
>
> Cloudera's management additions are great (or so I'm told - I'm not much
> on the operations side) but their mucking about with code enough that
> official releases (not specific to Cloudera) won't work out-of-the-box is
> definitely a bummer.
>
> On Sun, Feb 21, 2016 at 1:41 PM, Dor Ben Dov <dor.ben-...@amdocs.com>
> wrote:
>
>> Ben,
>>
>> Thanks for the answer.
>>
>> Dor
>>
>>
>>
>> *From:* Benjamin Kim [mailto:bbuil...@gmail.com]
>> *Sent:* Sunday, 21 February 2016 21:36
>>
>> *To:* user@phoenix.apache.org
>> *Subject:* Re: Cloudera and Phoenix
>>
>>
>>
>> I don’t know if Cloudera will support Phoenix going forward. A few
>> things lead me to think this:
>>
>>    1. No activity on a new port of Phoenix 4.6 or 4.7 in Cloudera Labs,
>>    as mentioned below
>>    2. In the Cloudera Community groups, I got no reply to my question
>>    about help compiling Phoenix 4.7 for CDH
>>    3. In the Spark Users groups, there’s active discussion about the
>>    Spark on HBase module that was developed by Cloudera, which will be
>>    out in early summer.
>>
>>
>> http://blog.cloudera.com/blog/2015/08/apache-spark-comes-to-apache-hbase-with-hbase-spark-module/
>>
>>
>>
>> My bet is that Cloudera is going with the Spark solution since it’s their
>> baby, and it can natively work with HBase tables directly. So, does this
>> mean that Phoenix is a no-go for CDH going forward? I hope not.
>>
>>
>>
>> Cheers,
>>
>> Ben
>>
>>
>>
>>
>>
>> On Feb 21, 2016, at 11:15 AM, James Taylor <jamestay...@apache.org>
>> wrote:
>>
>>
>>
>> Hi Dor,
>>
>>
>>
>> Whether or not Phoenix becomes part of CDH is not under our control. It
>> *is* under your control, though (assuming you're a customer of CDH). The
>> *only* way Phoenix will transition from being in Cloudera Labs to being
>> part of the official CDH distro is if you and other customers demand it.
>>
>>
>>
>> Thanks,
>>
>> James
>>
>>
>>
>> On Sun, Feb 21, 2016 at 10:03 AM, Dor Ben Dov <dor.ben-...@amdocs.com>
>> wrote:
>>
>> Stephen
>>
>>
>>
>> Are there any plans, or does anyone see the possibility, that despite all
>> the below it will become an official release?
>>
>>
>>
>> Dor
>>
>>
>>
>> *From:* Stephen Wilcoxon [mailto:wilco...@gmail.com]
>> *Sent:* Sunday, 21 February 2016 19:37
>> *To:* user@phoenix.apache.org
>> *Subject:* Re: Cloudera and Phoenix
>>
>>
>>
>> As of a few months ago, Cloudera includes Phoenix as a "lab" (basically a
>> beta), but it is out of date.  From what I gather, official Phoenix
>> releases will not run on Cloudera without modifications (someone was doing
>> unofficial Phoenix/Cloudera releases, but I'm not sure whether they still
>> are).
>>
>>
>>
>> On Sun, Feb 21, 2016 at 6:39 AM, Dor Ben Dov <dor.ben-...@amdocs.com>
>> wrote:
>>
>> Hi All,
>>
>>
>>
>> Do we have an official Phoenix release in Cloudera? If not, are there any plans for one?
>>
>>
>>
>> Regards,
>>
>>
>>
>> Dor ben Dov
>>
>>
>>
>> *From:* Benjamin Kim [mailto:bbuil...@gmail.com]
>> *Sent:* Friday, 19 February 2016 19:41
>> *To:* user@phoenix.apache.org
>> *Subject:* Re: Spark Phoenix Plugin
>>
>>
>>
>> All,
>>
>>
>>
>> Thanks for the help. I have swapped out Cloudera’s HBase 1.0.0 for the
>> current Apache HBase 1.1.3. Also, I installed Phoenix 4.7.0, and everything
>> works fine except for the Phoenix Spark Plugin. I wonder if it’s a version
>> incompatibility issue with Spark 1.6. Has anyone tried compiling 4.7.0
>> against Spark 1.6?
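>>
>> One hedged guess at how - assuming the Phoenix pom exposes a spark.version
>> property to override, which I haven't verified - would be:
>>
>> mvn clean package -DskipTests -Dspark.version=1.6.0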
>>
>>
>>
>> Thanks,
>>
>> Ben
>>
>>
>>
>> On Feb 12, 2016, at 6:33 AM, Benjamin Kim <bbuil...@gmail.com> wrote:
>>
>>
>>
>> Anyone know when Phoenix 4.7 will be officially released? And which
>> Cloudera distribution versions will it be compatible with?
>>
>>
>>
>> Thanks,
>>
>> Ben
>>
>>
>>
>> On Feb 10, 2016, at 11:03 AM, Benjamin Kim <bbuil...@gmail.com> wrote:
>>
>>
>>
>> Hi Pierre,
>>
>>
>>
>> I am getting this error now.
>>
>>
>>
>> Error: org.apache.phoenix.exception.PhoenixIOException:
>> org.apache.hadoop.hbase.DoNotRetryIOException:
>> SYSTEM.CATALOG,,1453397732623.8af7b44f3d7609eb301ad98641ff2611.:
>> org.apache.hadoop.hbase.client.Delete.setAttribute(Ljava/lang/String;[B)Lorg/apache/hadoop/hbase/client/Delete;
>>
>>
>>
>> I even tried using sqlline.py to run some queries; it resulted in the
>> same error. I followed the installation instructions. Is there something
>> missing?
>>
>>
>>
>> Thanks,
>>
>> Ben
>>
>>
>>
>>
>>
>> On Feb 9, 2016, at 10:20 AM, Ravi Kiran <maghamraviki...@gmail.com>
>> wrote:
>>
>>
>>
>> Hi Pierre,
>>
>>
>>
>>   Try your luck building the artifacts from
>> https://github.com/chiastic-security/phoenix-for-cloudera. Hopefully that
>> helps.
>>
>>
>>
>> Regards
>>
>> Ravi
>>
>>
>>
>> On Tue, Feb 9, 2016 at 10:04 AM, Benjamin Kim <bbuil...@gmail.com> wrote:
>>
>> Hi Pierre,
>>
>>
>>
>> I found this article about how Cloudera’s version of HBase is very
>> different from Apache HBase, so Phoenix must be compiled against Cloudera’s
>> repo and versions. But I’m not having any success with it.
>>
>>
>>
>>
>> http://stackoverflow.com/questions/31849454/using-phoenix-with-cloudera-hbase-installed-from-repo
>>
>>
>>
>> There’s also a Chinese site that does the same thing.
>>
>>
>>
>> https://www.zybuluo.com/xtccc/note/205739
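>>
>> The gist of both is to point the build at Cloudera’s maven repository and
>> their artifact versions. Paraphrasing those writeups (so treat this as a
>> sketch, not a recipe), the pom gets a repository entry like:
>>
>> <repository>
>>   <id>cloudera</id>
>>   <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
>> </repository>
>>
>> and the build is then run against the matching CDH versions, e.g.:
>>
>> mvn clean package -DskipTests -Dhbase.version=1.0.0-cdh5.4.8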
>>
>>
>>
>> I keep getting errors like the ones below.
>>
>>
>>
>> [ERROR]
>> /opt/tools/phoenix/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/LocalIndexMerger.java:[110,29]
>> cannot find symbol
>>
>> [ERROR] symbol:   class Region
>>
>> [ERROR] location: class
>> org.apache.hadoop.hbase.regionserver.LocalIndexMerger
>>
>> …
>>
>>
>>
>> Have you tried this also?
>>
>>
>>
>> As a last resort, we will have to abandon Cloudera’s HBase for Apache’s
>> HBase.
>>
>>
>>
>> Thanks,
>>
>> Ben
>>
>>
>>
>>
>>
>> On Feb 8, 2016, at 11:04 PM, pierre lacave <pie...@lacave.me> wrote:
>>
>>
>>
>> Haven't met that one.
>>
>> According to SPARK-1867, the real issue is hidden.
>>
>> I'd proceed by elimination; maybe try local[*] mode first.
>>
>> https://issues.apache.org/jira/plugins/servlet/mobile#issue/SPARK-1867
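>>
>> e.g. something like this, with the jar path as a placeholder for wherever
>> your client jar lives:
>>
>> spark-shell --master "local[*]" --driver-class-path /path/to/phoenix-4.7.0-HBase-1.1-client-spark.jar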
>>
>>
>>
>> On Tue, 9 Feb 2016, 04:58 Benjamin Kim <bbuil...@gmail.com> wrote:
>>
>> Pierre,
>>
>>
>>
>> I got it to work using phoenix-4.7.0-HBase-1.0-client-spark.jar. But now
>> I get this error:
>>
>>
>>
>> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0
>> in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage
>> 0.0 (TID 3, prod-dc1-datanode151.pdc1i.gradientx.com):
>> java.lang.IllegalStateException: unread block data
>>
>>
>>
>> It happens when I do:
>>
>>
>>
>> df.show()
>>
>>
>>
>> Getting closer…
>>
>>
>>
>> Thanks,
>>
>> Ben
>>
>>
>>
>>
>>
>>
>>
>> On Feb 8, 2016, at 2:57 PM, pierre lacave <pie...@lacave.me> wrote:
>>
>>
>>
>> This is the wrong client jar; try the one named
>> phoenix-4.7.0-HBase-1.1-client-spark.jar.
>>
>>
>>
>> On Mon, 8 Feb 2016, 22:29 Benjamin Kim <bbuil...@gmail.com> wrote:
>>
>> Hi Josh,
>>
>>
>>
>> I tried again by putting the settings in spark-defaults.conf.
>>
>>
>>
>>
>> spark.driver.extraClassPath=/opt/tools/phoenix/phoenix-4.7.0-HBase-1.0-client.jar
>>
>>
>> spark.executor.extraClassPath=/opt/tools/phoenix/phoenix-4.7.0-HBase-1.0-client.jar
>>
>>
>>
>> I still get the same error using the code below.
>>
>>
>>
>> import org.apache.phoenix.spark._
>>
>> val df = sqlContext.load("org.apache.phoenix.spark", Map("table" ->
>> "TEST.MY_TEST", "zkUrl" -> "zk1,zk2,zk3:2181"))
>>
>>
>>
>> Can you tell me what else you’re doing?
>>
>>
>>
>> Thanks,
>>
>> Ben
>>
>>
>>
>>
>>
>> On Feb 8, 2016, at 1:44 PM, Josh Mahonin <jmaho...@gmail.com> wrote:
>>
>>
>>
>> Hi Ben,
>>
>>
>>
>> I'm not sure about the format of those command-line options you're
>> passing. I've had success with spark-shell just by setting the
>> 'spark.executor.extraClassPath' and 'spark.driver.extraClassPath' options
>> on the Spark config, as per the docs [1].
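>>
>> For example, something along these lines when launching spark-shell (the
>> jar path is a placeholder for your install):
>>
>> spark-shell \
>>   --conf "spark.driver.extraClassPath=/path/to/phoenix-4.7.0-HBase-1.1-client-spark.jar" \
>>   --conf "spark.executor.extraClassPath=/path/to/phoenix-4.7.0-HBase-1.1-client-spark.jar"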
>>
>>
>>
>> I'm not sure if there's anything special needed for CDH or not though. I
>> also have a docker image I've been toying with which has a working
>> Spark/Phoenix setup using the Phoenix 4.7.0 RC and Spark 1.6.0. It might be
>> a useful reference for you as well [2].
>>
>> Good luck,
>>
>>
>>
>> Josh
>>
>> [1] https://phoenix.apache.org/phoenix_spark.html
>> [2] https://github.com/jmahonin/docker-phoenix/tree/phoenix_spark
>>
>>
>>
>> On Mon, Feb 8, 2016 at 4:29 PM, Benjamin Kim <bbuil...@gmail.com> wrote:
>>
>> Hi Pierre,
>>
>>
>>
>> I tried running spark-shell with Spark 1.6.0 like this:
>>
>>
>>
>> spark-shell --master yarn-client --driver-class-path
>> /opt/tools/phoenix/phoenix-4.7.0-HBase-1.0-client.jar --driver-java-options
>> "-Dspark.executor.extraClassPath=/opt/tools/phoenix/phoenix-4.7.0-HBase-1.0-client.jar"
>>
>>
>>
>> The version of HBase is the one in CDH5.4.8, which is 1.0.0-cdh5.4.8.
>>
>>
>>
>> When I get to the line:
>>
>>
>>
>> val df = sqlContext.load("org.apache.phoenix.spark", Map("table" ->
>> "TEST.MY_TEST", "zkUrl" -> "zk1,zk2,zk3:2181"))
>>
>>
>>
>> I get this error:
>>
>>
>>
>> java.lang.NoClassDefFoundError: Could not initialize class
>> org.apache.spark.rdd.RDDOperationScope$
>>
>>
>>
>> Any ideas?
>>
>>
>>
>> Thanks,
>>
>> Ben
>>
>>
>>
>>
>>
>> On Feb 5, 2016, at 1:36 PM, pierre lacave <pie...@lacave.me> wrote:
>>
>>
>>
>> I don't know when the full release will be; RC1 just got pulled, and RC2
>> is expected soon.
>>
>>
>>
>> you can find them here
>>
>>
>>
>> https://dist.apache.org/repos/dist/dev/phoenix/
>>
>>
>>
>>
>>
>> there is a new phoenix-4.7.0-HBase-1.1-client-spark.jar, which is all you
>> need to have on the Spark classpath.
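>>
>> once it is on both the driver and executor classpaths, a minimal round
>> trip looks something like this (table name and zkUrl are placeholders,
>> and the table must already exist for the save to work):
>>
>> import org.apache.spark.sql.SaveMode
>> import org.apache.phoenix.spark._
>>
>> val df = sqlContext.load("org.apache.phoenix.spark",
>>   Map("table" -> "TEST.MY_TEST", "zkUrl" -> "zk1,zk2,zk3:2181"))
>> df.save("org.apache.phoenix.spark", SaveMode.Overwrite,
>>   Map("table" -> "TEST.MY_TEST", "zkUrl" -> "zk1,zk2,zk3:2181"))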
>>
>>
>>
>>
>> *Pierre Lacave*
>> 171 Skellig House, Custom House, Lower Mayor street, Dublin 1, Ireland
>>
>> Phone :       +353879128708
>>
>>
>>
>> On Fri, Feb 5, 2016 at 9:28 PM, Benjamin Kim <bbuil...@gmail.com> wrote:
>>
>> Hi Pierre,
>>
>>
>>
>> When will I be able to download this version?
>>
>>
>>
>> Thanks,
>>
>> Ben
>>
>>
>>
>> On Friday, February 5, 2016, pierre lacave <pie...@lacave.me> wrote:
>>
>> This was addressed in Phoenix 4.7 (currently in RC)
>>
>> https://issues.apache.org/jira/browse/PHOENIX-2503
>>
>>
>>
>>
>>
>>
>>
>>
>> *Pierre Lacave*
>> 171 Skellig House, Custom House, Lower Mayor street, Dublin 1, Ireland
>>
>> Phone :       +353879128708
>>
>>
>>
>> On Fri, Feb 5, 2016 at 6:17 PM, Benjamin Kim <bbuil...@gmail.com> wrote:
>>
>> I cannot get this plugin to work in CDH 5.4.8 using Phoenix 4.5.2 and
>> Spark 1.6. When I try to launch spark-shell, I get:
>>
>>         java.lang.RuntimeException: java.lang.RuntimeException: Unable to
>> instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
>>
>> I continue on and run the example code. When I get to the line below:
>>
>>         val df = sqlContext.load("org.apache.phoenix.spark", Map("table"
>> -> "TEST.MY_TEST", "zkUrl" -> "zookeeper1,zookeeper2,zookeeper3:2181"))
>>
>> I get this error:
>>
>>         java.lang.NoSuchMethodError:
>> com.fasterxml.jackson.module.scala.deser.BigDecimalDeserializer$.handledType()Ljava/lang/Class;
>>
>> Can someone help?
>>
>> Thanks,
>> Ben
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>
>
