On 1/17/11 12:11 PM, "Doug Cutting" <[email protected]> wrote:

> We would not release this until each change in it has been reviewed by
> the community, right? Otherwise we may end up with changes in a 0.20
> release that don't get approved when they're contributed to trunk and
> cause trunk to regress. So I don't yet see the point of committing the
> mega patch since the community needs to review each individual change
> anyway, so we might wait until each is reviewed to commit it.

My take is straightforward:

Apache Hadoop hasn't had a stable, updated release in a while.

As a result, there is too much confusion for the user community. There are too many releases from too many entities, and nothing has been available from Apache for a long while now. This is a situation we need to rectify urgently!

Engaging in community review of these patches will divert the developer community's attention from 0.22 and the future. Not to mention, it will take forever and keep users hanging. Yes, the mechanics are important, but not more important than the end result.

In any case:
a) The vast majority of these patches are already on JIRA, and have been for many months now.
b) The vast majority of these patches have already been committed to trunk, i.e. 0.22.

Sure, some patches may be missing from 0.22 or JIRA; my proposal is not ideal, and I don't think anyone is pretending it is.

However, it does remedy the critical problem: the lack of a stable, updated Apache Hadoop release.

We can address backward- or forward-compatibility concerns by being clever with our release versions or names.

An appeal: let's use a bit of common sense and get the project moving forward with a release. Folks are welcome to put forward an append release, an append+security release, and so forth (I've strongly supported that), not to mention 0.22 and beyond. IMHO, more than one release is definitely better than none.

Let's get the ball rolling, please!

thanks,
Arun
