-1. I agree with Todd; we tried this policy before and the project didn't produce a usable release for two years. Its benefits are fiction and its harm is documented.
However 0.22 is (or isn't) released, no general policy is required and nobody should waste their time trying to define one. Releases, including versions, are decided by majority vote. Either the developers of the 0.22 series convince most of the PMC that the release series warrants a major version, they elect to continue development on the 0.22 series, or they fork the code and create a new project. Those are always the only outcomes, and the reasoning will be ad hoc by definition.

My opinion: version numbers are cheap. As long as 0.22 has contributors interested in pursuing that line of development, reserving a series for that work to be released is not unreasonable. Confining it to 0.22.xxx presumes it will fail, while a major version should give its maintainers sufficient flexibility to define compatibility, etc. -C

On Mon, Mar 19, 2012 at 2:56 PM, Doug Cutting <[email protected]> wrote:
> On 03/19/2012 02:47 PM, Arun C Murthy wrote:
>> This is against the Apache Hadoop release policy on major releases, i.e. only
>> features deprecated for at least one release can be removed.
>
> In many cases the reason this happened was that features were backported
> from trunk to 0.20 but not to 0.22. In other words, it's no fault of the
> folks who were working on branch 0.22. So a related policy we might add
> to prevent such situations in the future might be that if you backport
> something from branch n to n-2, then you ought to also be required to
> backport it to branch n-1, and in general to all intervening branches.
> Does that seem sensible?
>
> Doug
