Oh, yes. There is a workaround for sure. The Phoenix jar just needs to be added to the HBase classpath. Nothing complicated.
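For example, something along these lines in hbase-env.sh on each RegionServer (and a restart of HBase) should do it. The jar name and path here are just what I have on my setup - adjust them to wherever the Phoenix server jar actually lands with the Bigtop packages:

  # hbase-env.sh
  # Assumed location of the Phoenix server jar; adjust to your install.
  export HBASE_CLASSPATH="$HBASE_CLASSPATH:/usr/lib/phoenix/phoenix-server.jar"

With that on the classpath, the Phoenix coprocessors (MetaDataRegionObserver etc.) should load and the "No jar path specified" errors should go away.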
2014-09-26 16:28 GMT-04:00 Konstantin Boudnik <[email protected]>:

> I'll let Andrew or anyone else with more Phoenix experience comment on
> this, but from my standpoint - if there's a configuration workaround I am
> willing to document this in RELEASE NOTES and go with release. If the fix
> requires changes in the packages - I guess we'll have to do it.
>
> --
> Take care,
> Konstantin (Cos) Boudnik
> 2CAC 8312 4870 D885 8616 6115 220F 6980 1F27 E622
>
> Disclaimer: Opinions expressed in this email are those of the author, and
> do not necessarily represent the views of any company the author might be
> affiliated with at the moment of writing.
>
> On Fri, Sep 26, 2014 at 12:50 PM, Jean-Marc Spaggiari
> <[email protected]> wrote:
>
>> One thing. Just built 0.8.0. Went well, but when running Phoenix on top
>> of HBase got this:
>> java.io.IOException: No jar path specified for
>> org.apache.phoenix.coprocessor.MetaDataRegionObserver
>> (Many of those)
>>
>> I think we are still missing a jar somewhere in the classpath for it.
>>
>> JM
>>
>> 2014-09-26 15:11 GMT-04:00 Konstantin Boudnik <[email protected]>:
>>
>>> So, what I hear is that the stack is in a decent condition and we can
>>> do the formal source code release then?
>>>
>>> --
>>> Take care,
>>> Konstantin (Cos) Boudnik
>>> 2CAC 8312 4870 D885 8616 6115 220F 6980 1F27 E622
>>>
>>> Disclaimer: Opinions expressed in this email are those of the author,
>>> and do not necessarily represent the views of any company the author
>>> might be affiliated with at the moment of writing.
>>>
>>> On Fri, Sep 26, 2014 at 11:12 AM, Sean Mackrory <[email protected]>
>>> wrote:
>>>
>>>> My time on it has been scattered so I don't have a very good record of
>>>> everything I've verified, but I've been playing with the bits since they
>>>> were first posted and haven't run across any issues. I've played with at
>>>> least HBase, ZooKeeper, YARN / MR and HDFS. I believe I messed with Sqoop
>>>> and Hive a bit too.
>>>>
>>>> On Wed, Sep 24, 2014 at 9:15 PM, Konstantin Boudnik <[email protected]>
>>>> wrote:
>>>>
>>>> > Thanks for testing the release candidate folks!
>>>> >
>>>> > And sorry for the delays guys - this is officially the most crazy two
>>>> > weeks in my life (and still not completely over). I have done some
>>>> > testing using 0.8.0 bits - HDFS ops, YARN w/ MR, some standalone
>>>> > Spark - all looks good.
>>>> >
>>>> > I will try to update the Changes and cut the officially signed bits
>>>> > over the weekend.
>>>> >
>>>> > Cos
>>>> >
>>>> > On Wed, Sep 24, 2014 at 07:59PM, Roman Shaposhnik wrote:
>>>> > > Thanks for keeping us up-to-date on your progress. I plan to do some
>>>> > > additional testing over the weekend as well.
>>>> > >
>>>> > > Cos is also working on cutting formal release bits.
>>>> > >
>>>> > > Thanks,
>>>> > > Roman.
>>>> > >
>>>> > > On Wed, Sep 24, 2014 at 5:50 PM, Jean-Marc Spaggiari
>>>> > > <[email protected]> wrote:
>>>> > > > Built 0.8 again today on CentOS without any issue. Will run
>>>> > > > applications testing tomorrow.
>>>> > > >
>>>> > > > 2014-09-20 20:38 GMT-04:00 Roman Shaposhnik <[email protected]>:
>>>> > > >>
>>>> > > >> First of all -- thanks a million for all the testing so far!
>>>> > > >>
>>>> > > >> Also, I seem to be able to use Mahout from RC0.8 just
>>>> > > >> fine. Let's get to the bottom of this.
>>>> > > >>
>>>> > > >> Thanks,
>>>> > > >> Roman.
>>>> > > >>
>>>> > > >> On Sat, Sep 20, 2014 at 5:57 AM, jay vyas
>>>> > > >> <[email protected]> wrote:
>>>> > > >> > hold the phone on this...............
>>>> > > >> >
>>>> > > >> > I just spun up an rc08 instance, and it looks like some of the
>>>> > > >> > mahout jobs actually failed .... because the mahout version
>>>> > > >> > being built is for hadoop 1x (rather than 2x). JIRA created.
>>>> > > >> >
>>>> > > >> > https://issues.apache.org/jira/browse/BIGTOP-1453
