[ https://issues.apache.org/jira/browse/HDFS-11096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16157192#comment-16157192 ]

Allen Wittenauer edited comment on HDFS-11096 at 9/7/17 4:42 PM:
-----------------------------------------------------------------

{code}
set -e
{code}

I'm really not a fan of using set -e unless one absolutely must.  It makes it
hard to use failure exit codes deliberately, and it is silently suspended inside
if tests and other conditional contexts.  There are a lot of caveats when it is
in play.
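To make that caveat concrete, here is a minimal standalone sketch (not from the patch; the function name is illustrative) showing that set -e is suspended for the entire call tree of an if condition, so a failing command in there is silently ignored:

```shell
#!/usr/bin/env bash
set -e

check() {
  false                  # would abort the script if check were called directly
  reached_inside="yes"   # still runs: set -e is suspended in if-condition context
}

reached_inside="no"
branch_taken="no"
if check; then           # check returns 0 (status of its last assignment)
  branch_taken="yes"
fi
echo "reached_inside=${reached_inside} branch_taken=${branch_taken}"
```

So any helper function that relies on "a failure aborts the script" behaves differently depending on whether it was called from inside a conditional.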

{code}
set -x
{code}

Is this just temporary?

{code} 
for hostname in ${HOSTNAMES[@]}; do
    ssh -i ${ID_FILE} root@${hostname} ". /tmp/env.sh
{code}

It seems there are a few functions like this that already have implementations in
hadoop-functions.sh.  Shouldn't this just leverage that code?  [See also
HADOOP-14009.]  The bash settings in place (see above) will be an issue there, though.
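Either way, the loop needs quoting; a minimal standalone sketch (generic bash, not hadoop-functions.sh — the HOSTNAMES/ID_FILE values and the echo stand-in for the real ssh call are illustrative):

```shell
#!/usr/bin/env bash
# Quote the array expansion and each variable so hostnames or paths
# containing unexpected characters are not word-split or glob-expanded.
HOSTNAMES=("nn1.example.com" "dn1.example.com")
ID_FILE="/root/.ssh/id_rsa"

for hostname in "${HOSTNAMES[@]}"; do
  # stand-in for: ssh -i "${ID_FILE}" "root@${hostname}" ". /tmp/env.sh ..."
  echo "ssh -i ${ID_FILE} root@${hostname}"
done
```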

{code}
  cd ${HADOOP_3}
  sbin/hadoop-daemon.sh start namenode -rollingUpgrade started
{code}

If it's hadoop 3.x, shouldn't this be using non-deprecated commands?
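For reference, the non-deprecated 3.x form should be something along these lines (same working directory assumed; not verified against this patch):

{code}
  cd ${HADOOP_3}
  bin/hdfs --daemon start namenode -rollingUpgrade started
{code}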

{code}
   sudo apt-get install -y git
{code}

This is kind of an interesting one. If I'm using this code, then I'm either 
already in a git repo or I've got a source tarball.  Given that the git hash is 
encoded at build time, I think there might be an implicit requirement that git 
is already installed.  In the case of some of the other Ubuntu-isms 
(apt-install of wget), there are likely generic ways to deal with them. (e.g., 
use the installed perl/python/java).  If the intent is to just use the docker 
images that ship with Hadoop, git is pretty much a requirement for Apache 
Yetus....
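If git can't be assumed, a hedged standalone sketch of probing for it before reaching for a distro-specific install (function name is illustrative):

```shell
#!/usr/bin/env bash
# Detect git rather than unconditionally apt-get installing it.
have_git() {
  command -v git >/dev/null 2>&1
}

if have_git; then
  GIT_STATE="present"
else
  GIT_STATE="missing"   # only here would an apt-get/yum fallback make sense
fi
echo "git: ${GIT_STATE}"
```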

{code}
# Tested on an Ubuntu 16.04 host
{code}

Probably worth mentioning that HADOOP-14816 upgrades the Dockerfile to Xenial.

{code}
    mvn clean package -DskipTests -Pdist -Dtar
{code}

Shouldn't this just call create-release --docker --native so that we get 
something closer to what we ship?
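i.e., something like this from the top of the source tree (flags as named above; exact invocation per the in-tree script):

{code}
    dev-support/bin/create-release --docker --native
{code}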

{code}
  HDFS_NAMENODE_USER=root \
  HDFS_DATANODE_USER=root \
  HDFS_JOURNALNODE_USER=root \
  HDFS_ZKFC_USER=root \
{code}

*dances with glee that someone else is using this feature*





> Support rolling upgrade between 2.x and 3.x
> -------------------------------------------
>
>                 Key: HDFS-11096
>                 URL: https://issues.apache.org/jira/browse/HDFS-11096
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: rolling upgrades
>    Affects Versions: 3.0.0-alpha1
>            Reporter: Andrew Wang
>            Assignee: Sean Mackrory
>            Priority: Blocker
>         Attachments: HDFS-11096.001.patch, HDFS-11096.002.patch
>
>
> trunk has a minimum software version of 3.0.0-alpha1. This means we can't 
> rolling upgrade between branch-2 and trunk.
> This is a showstopper for large deployments. Unless there are very compelling 
> reasons to break compatibility, let's restore the ability to rolling upgrade 
> to 3.x releases.


