On the '[VOTE] introduce Python as build-time and run-time dependency for
Hadoop and throughout Hadoop stack' thread, the vote for using Maven plugins
for the build passed with overwhelming acceptance (several +1s, a few 0s,
and no -1s).

The current build tasks to be 'pluginized' are:

1. cmake (HADOOP-8887)
2. protoc (HADOOP-9117)
3. saveVersion (HADOOP-8924) (though this one has received a -1)

I've tested #2 and #3 on Windows and they worked as expected. Regarding #1,
if cmake is supported (and used for Hadoop) on Windows, I believe it should
work with minimal changes.

As I mentioned in HADOOP-8924, "Writing Maven plugins is more complex than
writing scripts, I don't dispute this. The main motivation for using Maven
plugins is to keep things in the POM declarative and to hide (if necessary)
different handling for different platforms."
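
To make the trade-off concrete, below is a minimal sketch of what such a
plugin could look like, using the protoc task as an example. The ProtocMojo
class, its goal name, and its parameters are hypothetical illustrations,
not the actual HADOOP-9117 code:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

import org.apache.maven.plugin.AbstractMojo;
import org.apache.maven.plugin.MojoExecutionException;

/**
 * Hypothetical sketch of a custom Mojo that shells out to protoc.
 *
 * @goal protoc
 * @phase generate-sources
 */
public class ProtocMojo extends AbstractMojo {

  /**
   * Path to the protoc executable (hypothetical parameter).
   *
   * @parameter default-value="protoc"
   */
  private String protocCommand;

  /**
   * A .proto file to compile (hypothetical parameter).
   *
   * @parameter
   * @required
   */
  private String protoFile;

  /**
   * Output directory for generated Java sources (hypothetical parameter).
   *
   * @parameter expression="${project.build.directory}/generated-sources/proto"
   */
  private String outputDirectory;

  public void execute() throws MojoExecutionException {
    try {
      // Platform-specific handling (Windows paths, .exe suffix, etc.)
      // would live here, keeping the POM itself purely declarative.
      ProcessBuilder pb = new ProcessBuilder(
          protocCommand, "--java_out=" + outputDirectory, protoFile);
      pb.redirectErrorStream(true);
      Process p = pb.start();
      // Drain protoc's output so it cannot block on a full pipe buffer.
      BufferedReader in = new BufferedReader(
          new InputStreamReader(p.getInputStream()));
      String line;
      while ((line = in.readLine()) != null) {
        getLog().info(line);
      }
      if (p.waitFor() != 0) {
        throw new MojoExecutionException("protoc returned non-zero exit code");
      }
    } catch (IOException e) {
      throw new MojoExecutionException("Could not run protoc", e);
    } catch (InterruptedException e) {
      throw new MojoExecutionException("Interrupted while running protoc", e);
    }
  }
}

With something like this deployed to a Maven repo, the POM only needs to
declare the plugin's coordinates and configuration; all the imperative,
platform-specific logic stays out of the POM.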

In order to enable the use of custom plugins for the Hadoop build, we need
to have such plugins available in a Maven repo so Maven can download and
use them. Note that they cannot be part of the same build that builds
Hadoop itself.

While we could argue these plugins are general purpose and should be
developed in a separate standalone project, doing so would significantly
delay when we could start using them in Hadoop.

Because of this, I propose we create a Hadoop subproject 'hadoop-build' to
host custom Maven plugins (and custom Ant tasks for branch-1) for Hadoop.

Being a sub-project will allow us to release the plugins we need for the
Hadoop build independently of Hadoop releases, and to use them immediately
in the Hadoop build.

Eventually, if the Maven community picks up some of these plugins, we could
simply remove them from our hadoop-build project and change the Hadoop POMs
to use the new ones.

Looking forward to hearing what others think.

Thanks.

-- 
Alejandro
