The original release script and instructions broke the build up into 
three or so steps. When I rewrote it, I kept that same model. It’s probably 
time to re-think that.  In particular, it should probably be one big step that 
even does the maven deploy.  There’s really no harm in doing that given that 
there is still a manual step to release the deployed jars into the production 
area.

        We just need to:

a) Add an option to deploy instead of just install. If create-release is in ASF 
mode, always activate deploy.
b) Pull the maven settings.xml file (and only the maven settings file… we don’t 
want the local repo!) into the Docker build environment.
c) Consolidate the mvn steps.
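As a rough sketch of what (a)–(c) might look like together — the flag names, image name, and profile list here are assumptions for illustration, not create-release’s actual interface — the script would pick the goal, mount only settings.xml, and issue one consolidated mvn pass:

```shell
#!/usr/bin/env bash
# Sketch only: ASF_MODE, the image name, and the profile list are
# hypothetical stand-ins, not the real create-release options.
ASF_MODE=true

# (a) deploy instead of just install; in ASF mode, always deploy.
MVN_GOAL=install
if [ "${ASF_MODE}" = true ]; then
  MVN_GOAL=deploy
fi

# (b) mount only settings.xml into the container, read-only --
# not all of ~/.m2, so the local repo stays out of the build env.
DOCKER_ARGS="-v ${HOME}/.m2/settings.xml:/root/.m2/settings.xml:ro"

# (c) one consolidated pass instead of several separate mvn invocations.
MVN_CMD="mvn ${MVN_GOAL} -Pdist,src,native -DskipTests"

# Print rather than run, since this is only a sketch.
echo "docker run ${DOCKER_ARGS} hadoop-build ${MVN_CMD}"
```

The manual step of releasing the deployed jars into the production area would still sit after this, which is why deploying unconditionally is safe.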

        This has the added benefit of greatly speeding up the build by removing 
several passes.

        Probably not a small change, but I’d have to look at the code.  I’m on 
a plane tomorrow morning though.

Also:

>> 
>> Major
>> - The previously supported way of being able to use different tar-balls
>> for different sub-modules is completely broken - common and HDFS tar.gz are
>> completely empty.
>> 
> 
> Is this something people use? I figured that the sub-tarballs were a relic
> from the project split, and nowadays Hadoop is one project with one release
> tarball. I actually thought about getting rid of these extra tarballs since
> they add extra overhead to a full build.

        I’m guessing no one noticed the tar errors when running mvn -Pdist.  
Not sure when they started happening.

> >   - When did we stop putting CHANGES files into the source artifacts?
> 
> CHANGES files were removed by 
> https://issues.apache.org/jira/browse/HADOOP-11792

        To be a bit more specific about it, the maven assembly for the source 
artifact only includes things (more or less) that are tracked in the git repo.  
When CHANGES.txt was removed from the source tree, it also disappeared from the 
tar ball.  This isn’t much of an issue in practice, though, given that the 
release notes are published on the web, included in the binary tar ball, and 
can be regenerated by following the directions in BUILDING.txt.  I don’t 
remember if Hadoop uploads them into the dist area, but if not, it probably 
should.

> - $HADOOP_YARN_HOME/sbin/yarn-daemon.sh start historyserver doesn't even 
> work. Not just deprecated in favor of timelineserver as was advertised.

        This works for me on trunk, and the bash code doesn’t appear to have 
changed in a very long time, so it’s probably something local to your install.  
(I do notice that the deprecation message says “starting”, which is awkward 
when the stop command is given.)  Also: is the deprecation message even true at 
this point?
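For reference, the invocation under discussion and what I believe is its trunk replacement after the shell rewrite (command forms are from memory; check `yarn --help` on your build):

```shell
# Deprecated form from the report -- still works for me on trunk:
"${HADOOP_YARN_HOME}/sbin/yarn-daemon.sh" start historyserver

# Trunk equivalent after the shell rewrite, I believe:
yarn --daemon start historyserver
```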

>> - Cannot enable new UI in YARN because it is under a non-default
>> compilation flag. It should be on by default.
>> 
> 
> The yarn-ui profile has always been off by default, AFAIK. It's documented
> to turn it on in BUILDING.txt for release builds, and we do it in
> create-release.
> 
> IMO not a blocker. I think it's also more of a dev question (do we want to
> do this on every YARN build?) than a release one.

        -1 on making yarn-ui always build.

        For what is effectively an optional component (the old UI is still 
there), its heavy dependency requirements make it a special burden outside of 
the Docker container.  If it can be changed so that it either always downloads 
the necessary bits (regardless of the OS/chipset!) and/or doesn’t kill the 
maven build when those bits can’t be found (i.e., is truly optional), then I’d 
be less opposed.  (And, actually, quite pleased, because the Docker image build 
would then be significantly faster.)
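For context, this is the explicit activation being discussed — release builds turn the profile on by hand (per BUILDING.txt; the exact profile and flag spelling may have drifted, so treat this as a sketch):

```shell
# yarn-ui is off by default; release builds enable it explicitly:
mvn package -Pdist,yarn-ui -DskipTests
```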


