Agreed, it's not an ASF requirement.  Just stating my opinion and my sense of 
where others stand.

An additional motivation for, and benefit of, the Apache 20-append repository 
and release is that it provides a de facto record of which patches are actually 
required to meet the requirement you just laid out (works well with HBase and 
no data loss).  CDH goes beyond that.

What I'd recommend to someone today for production is a different story.  There 
has not been sufficient testing on the 20-append branch at this point, so I 
would personally recommend CDH3b2 to most users.

JG

> -----Original Message-----
> From: Ryan Rawson [mailto:[email protected]]
> Sent: Wednesday, December 22, 2010 3:09 AM
> To: [email protected]
> Cc: [email protected]
> Subject: Re: provide a 0.20-append tarball?
> 
> There is nothing in the ASF that requires us to depend solely on ASF
> releases, as long as we are good with the licensing guidelines.  But that is
> the legal situation, not the community situation, which is of course more
> complex.
> 
> My stand on this is the one I have always taken.  I need an HBase that works
> well, with no data loss.
> 
> -ryan
> 
> On Wed, Dec 22, 2010 at 2:55 AM, Jonathan Gray <[email protected]> wrote:
> > For CDH-specific compatibility issues beyond general stuff like whether it
> > works, I think users should be pointed to Cloudera for support.
> >
> > For security issues, they probably won't find much help here, so they would
> > need to go to Cloudera; some questions might be answerable on the HDFS lists
> > since security is in the Apache 0.22 branch.
> >
> > We also want to support HDFS 0.22 at some point, so being aware of
> > potential issues is at least somewhat relevant for the community at large,
> > even if not using CDH.
> >
> >
> > The line that I believe should be drawn, and I think most have agreed, is
> > that we should be compatible with and ship with an official Apache Hadoop
> > release.  Having something Apache to ship with was one of the motivations
> > of the 20-append branch.
> >
> > So yeah, the plan is to do an official 20-append Apache release.  Some
> > effort needs to go into it, so when is an open question.  I don't know
> > whether there's a release manager yet; I think Dhruba might have said as
> > much when we made the branch, but I don't exactly recall.  However, there
> > is one very common theme in the 20-append open JIRAs :)
> >
> > https://issues.apache.org/jira/secure/IssueNavigator.jspa?reset=true&mode=hide&sorter/order=DESC&sorter/field=priority&resolution=-1&pid=12310942&fixfor=12315103
> >
> > There are currently 15 open issues listed.  The blocker, at least, is
> > already committed to 20-append and is just open because it's not in trunk
> > yet, so the number may be off a bit.
> >
> > JG
> >
> >> -----Original Message-----
> >> From: Andrew Purtell [mailto:[email protected]]
> >> Sent: Tuesday, December 21, 2010 6:45 PM
> >> To: [email protected]
> >> Cc: [email protected]
> >> Subject: provide a 0.20-append tarball?
> >>
> >> The latest CDH3 beta includes security changes that currently HBase
> >> 0.90 and trunk don't incorporate. Of course we can help out with
> >> clear HBase issues, but for security exceptions or similar, what about 
> >> that?
> >> Do we draw a line?
> >> Where?
> >>
> >> I've looked over the CDH3B3 installation documentation but have not
> >> installed it, nor do I presently use it.
> >>
> >> If we draw a line, then as an ASF community we should have a fallback
> >> option somewhere in ASF-land for the user to try. Vanilla Hadoop is
> >> not sufficient for HBase. Therefore, I propose we make a Hadoop
> >> 0.20-append tarball available.
> >>
> >> Best regards,
> >>
> >>     - Andy
> >>
> >> Problems worthy of attack prove their worth by hitting back.
> >>   - Piet Hein (via Tom White)
> >>
> >>
> >>
> >