chad created BIGTOP-3466:
----------------------------
Summary: HDFS environment variables not overridden if started with
'hdfs' command
Key: BIGTOP-3466
URL: https://issues.apache.org/jira/browse/BIGTOP-3466
Project: Bigtop
Issue Type: Bug
Components: hadoop, Init scripts
Affects Versions: 1.5.0
Environment: CentOS 7
Reporter: chad
Hi all, thanks for your hard work!
When upgrading to Bigtop 1.5.0 I followed the
[instructions|https://hadoop.apache.org/docs/r2.10.1/hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html]
for a rolling upgrade of HDFS. These instructions have you start the namenode
daemon from the command line, like this: '[hdfs namenode -rollingUpgrade
started|https://hadoop.apache.org/docs/r2.10.1/hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html#dfsadmin_-rollingUpgrade]'
This bypasses the setting of environment variables that happens when the
namenode is started by the init script.
Specifically, /etc/init.d/hadoop-hdfs overrides and adds environment variables
here:

[ -n "${BIGTOP_DEFAULTS_DIR}" -a -r ${BIGTOP_DEFAULTS_DIR}/hadoop-hdfs-namenode ] && . ${BIGTOP_DEFAULTS_DIR}/hadoop-hdfs-namenode
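
For context, that defaults file is where site-specific overrides such as the
namenode heap live. A purely illustrative example (the exact path and values
are assumptions on my part; BIGTOP_DEFAULTS_DIR usually resolves to /etc/default):

# /etc/default/hadoop-hdfs-namenode -- illustrative contents only
# Raise the namenode heap; bin/hdfs later appends this to HADOOP_OPTS.
export HADOOP_NAMENODE_OPTS="-Xms4g -Xmx4g ${HADOOP_NAMENODE_OPTS}"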
But if the namenode is started directly with 'hdfs namenode -rollingUpgrade
started', that sourcing never happens. (In our case the default Java heap is
too small and the namenode fails to start.)
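
In the meantime, one way to work around it seems to be sourcing the defaults
file by hand before launching the namenode. A rough sketch, assuming the stock
/etc/default location and running as the hdfs user:

# Workaround sketch only.
# set -a auto-exports everything the defaults file defines, so the values
# actually reach the hdfs child process.
export BIGTOP_DEFAULTS_DIR="${BIGTOP_DEFAULTS_DIR:-/etc/default}"
set -a
[ -r "${BIGTOP_DEFAULTS_DIR}/hadoop-hdfs-namenode" ] && . "${BIGTOP_DEFAULTS_DIR}/hadoop-hdfs-namenode"
set +a
hdfs namenode -rollingUpgrade started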
Possibly the sourcing should occur in /usr/lib/hadoop-hdfs/bin/hdfs about here:
if [ "$COMMAND" = "namenode" ] ; then
CLASS='org.apache.hadoop.hdfs.server.namenode.NameNode'
#>>> -n [ "${BIGTOP_DEFAULTS_DIR}" -a -r
${BIGTOP_DEFAULTS_DIR}/hadoop-hdfs-namenode ] && .
${BIGTOP_DEFAULTS_DIR}/hadoop-hdfs-namenode
HADOOP_OPTS="$HADOOP_OPTS $HADOOP_NAMENODE_OPTS"
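
If it helps, here is a rough sketch of how that could be generalized so the
other HDFS daemons get the same treatment. The hadoop-hdfs-<command> file-name
mapping is just my guess, based on how the init scripts are named, not
something the bin/hdfs script does today:

# Sketch only: source a per-daemon Bigtop defaults file, if one exists,
# right after the command is recognized.
case "$COMMAND" in
  namenode|datanode|secondarynamenode|journalnode|zkfc)
    if [ -n "${BIGTOP_DEFAULTS_DIR}" ] && [ -r "${BIGTOP_DEFAULTS_DIR}/hadoop-hdfs-${COMMAND}" ]; then
      . "${BIGTOP_DEFAULTS_DIR}/hadoop-hdfs-${COMMAND}"
    fi
    ;;
esac

The exact shape matters less than the point: whatever the init scripts source
should also be sourced when a daemon is launched through bin/hdfs directly.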
Have a good one!
C.