On Tue, Aug 28, 2012 at 7:33 PM, Mattmann, Chris A (388J) <chris.a.mattm...@jpl.nasa.gov> wrote:
> [decided to minimize traffic and to simply put this in one thread]
>
> Hi Guys,
>
> See the recent discussion on these threads:
>
> YARN as its own Hadoop "sub project": http://s.apache.org/WW1
> Maintain a single committer list for the Hadoop project: http://s.apache.org/Owx
>
> ...and just pay attention to the Hadoop project over the last 3-4 years. It's
> operating as a single project, that's masking separate communities that
> themselves are really separate ASF projects.
>
> At the ASF, this has been a problem area called "umbrella" projects and over
> the years, all I've seen from them is wasted bandwidth, artificial barriers
> and the inventions of new ways to perform process mongering and to reduce the
> fun in developing software at this fantastic foundation.
>
> I've talked about umbrella projects enough. We've diverted conversation
> enough. Enough people have tried to act like there is some technical mumbo
> jumbo that is preventing the eventual act of higher power that I myself hope
> comes should these discussions prove unfruitful through normal means.
>
> *these. are. separate. projects.*
> *there.are.not.blocker.issues.from.spinning.out.these.projects.as.their.own.communities*
>
> In this email: http://s.apache.org/rSm
>
> And in the 2 subsequent follow ons in that thread, I've outlined a process
> that I'll copy through below for splitting these projects into their own TLPs:
>
> -----snip
> Process:
>
> 0. [DISCUSS] thread for <TLP name> in which you talk about #1 and #2 below,
>    potentially draft resolution too.
>
> 1. Decide on an initial set of *PMC* members. I urge each new TLP to adopt
>    PMC==C. See reasons I've already discussed.
>
> 2. Decide on a chair. Try not to VOTE for this explicitly, see if can be
>    discussed and consensus can be reached (just a thought experiment). VOTE
>    if necessary.
>
> 3. [VOTE] thread for <TLP name>
>
> 4. Create Project:
>    a. paste resolution from #0 to board@ or;
>    b. go to general@incubator and start new Incubator project.
>
> 5. infrastructure set up.
>    MLs moving; new UNIX groups; website setup;
>    SVN setup like this:
>
>    svn copy -m "MR TLP." https://svn.apache.org/repos/asf/hadoop/ https://svn.apache.org/repos/asf/<insert cool MR name>; or
>    svn copy -m "YARN TLP." https://svn.apache.org/repos/asf/hadoop/ https://svn.apache.org/repos/asf/<insert cool YARN name>; or
>    svn copy -m "HDFS TLP." https://svn.apache.org/repos/asf/hadoop/ https://svn.apache.org/repos/asf/<insert cool HDFS name>
>
>    After all 3 have been created run:
>
>    svn remove -m "Remove Hadoop umbrella TLP. Split into separate projects." https://svn.apache.org/repos/asf/hadoop
>
> 6. (TLPs if 4a; Incubator podling if 4b;) proceed, collaborate, operate as
>    distinct communities, and try to solve the code duplication/dependency
>    issues from there.
>
> 7. If 4b; then graduate as TLP from Incubator.
>
> -----snip
>
> So that's my proposal.
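Before my general comments below, one practical note on step 5 of the quoted
process: svn copy is a cheap server-side operation and the copied path keeps
the full Hadoop revision history, so the mechanical part of the split is
low-risk and easy to verify. A minimal sketch of that check -- the TLP path
name "mapreduce" below is purely hypothetical; the real names would come out
of the step 0 DISCUSS threads:

  # Hypothetical TLP name; substitute whatever the DISCUSS thread settles on.
  svn copy -m "MR TLP." https://svn.apache.org/repos/asf/hadoop/ https://svn.apache.org/repos/asf/mapreduce

  # History made since the copy point only:
  svn log --stop-on-copy https://svn.apache.org/repos/asf/mapreduce

  # Without --stop-on-copy, svn log follows the copy and also shows the
  # pre-split Hadoop revisions, which is the property downstream folks
  # (and future git mirrors) care about:
  svn log --limit 20 https://svn.apache.org/repos/asf/mapreduce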
+1 on the general idea of splitting the projects, predicated on fixing the issues that made the last split so painful and on resolving technicalities like dependencies, etc. Here's the perspective of a downstream producer of a distribution built on top of Hadoop.

I firmly believe that, at least with Hadoop 2.0, we've reached a point where HDFS and YARN/MapReduce as standalone, loosely coupled projects would make much more sense. The user community of Bigtop has expressed interest in being able to mix-n-match versions of MR and HDFS, and I believe this to be a very valid (and achievable!) use case. It is less clear what to do with the Hadoop 1.X code line, but my perception so far has been that it is mainly in maintenance mode and thus could be dealt with as an exceptional case.

I've heard some integration concerns on this thread, and while I appreciate them, I still believe the individual projects shouldn't be burdened by them as long as they maintain reasonable API compatibility. It is my personal opinion that the HDFS and YARN/MapReduce of Hadoop 2.0 are ready to do that. Bigtop is there to keep them honest, provided that folks are willing to help us with that mission.

Thanks,
Roman.
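P.S. To make the mix-n-match use case a little more concrete, here's roughly the
shape of check Bigtop could run. Everything below is an illustrative sketch, not
actual Bigtop code: the install locations, version numbers, and the idea of
separately installed HDFS and MapReduce/YARN trees are all hypothetical, and it
assumes the Hadoop 2.0 API/wire compatibility story holds up.

  # Hypothetical: HDFS bits installed from one release...
  export HADOOP_HDFS_HOME=/opt/hdfs-2.0.2
  $HADOOP_HDFS_HOME/bin/hdfs namenode -format
  $HADOOP_HDFS_HOME/sbin/start-dfs.sh

  # ...and MapReduce/YARN bits from a different release.
  export HADOOP_MAPRED_HOME=/opt/mapreduce-2.0.0
  $HADOOP_MAPRED_HOME/sbin/start-yarn.sh

  # Seed some input, then check that a job built against the second release
  # still runs cleanly against the first release's filesystem.
  $HADOOP_HDFS_HOME/bin/hdfs dfs -mkdir -p /input
  $HADOOP_HDFS_HOME/bin/hdfs dfs -put /etc/hosts /input/
  $HADOOP_MAPRED_HOME/bin/hadoop jar \
      $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.0.0.jar \
      wordcount /input /output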