[
https://issues.apache.org/jira/browse/AMBARI-8490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14232175#comment-14232175
]
Jaimin D Jetly commented on AMBARI-8490:
----------------------------------------
[~screeley]
{quote}
I'm not sure what that file is really used for, there are no references to it
in the code (if I search on gluster-env)
{quote}
Ideally, the gluster-env file should be exposed in the UI for reconfiguration, and
it should get created/updated on the machines whenever GLUSTERFS is
installed/reconfigured in a cluster. That would fix this issue.
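For illustration only, the agent-side effect of such a fix could look roughly like the sketch below (hypothetical: the function name, parameters, and template text are stand-ins, not the actual gluster-env definition):
{code:python}
import os

def write_gluster_env(conf_dir, java_home):
    # Hypothetical sketch: render a minimal gluster-env.sh so that JAVA_HOME
    # is defined even on clusters where HDFS/NameNode is not installed.
    content = (
        "# Autogenerated by Ambari; manual edits are overwritten on reconfigure.\n"
        "export JAVA_HOME=%s\n" % java_home
    )
    with open(os.path.join(conf_dir, "gluster-env.sh"), "w") as fh:
        fh.write(content)
{code}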
{quote}
I think the bigger question or real issue is, shouldn't the
/etc/hadoop/conf/hadoop-env.sh file get freshly written/updated on each cluster
install/deploy?
{quote}
hadoop-env.xml is a configuration file of the HDFS service package in the HDP-2.0.6
stack (the parent stack of the 2.1.GlusterFS stack). [link to file |
https://git-wip-us.apache.org/repos/asf/ambari/repo?p=ambari.git;a=blob;f=ambari-server/src/main/resources/stacks/HDP/2.0.6/services/HDFS/configuration/hadoop-env.xml;h=1d6618d44b8ad4b06b06154bdfc43bdc03f4f742;hb=ff850fc06b23d8ab1b62ccd10265d15c7a27d465]
Consequently, the HDP-2.0.6 stack's shared initialization scripts create the
hadoop-env.sh file only if HDFS/NameNode is part of the cluster. [code pointer |
https://git-wip-us.apache.org/repos/asf/ambari/repo?p=ambari.git;a=blob;f=ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py;h=126b8bb98272bd2ab29dc2f4edf786cc599aea05;hb=ff850fc06b23d8ab1b62ccd10265d15c7a27d465#l97]
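For reference, the effective logic of that hook is roughly the following (a simplified paraphrase, not the verbatim stack code; the real script uses Ambari's resource_management File/InlineTemplate resources and a has_namenode flag computed from the cluster layout):
{code:python}
import os

def setup_hadoop_env(hadoop_conf_dir, hadoop_env_content, has_namenode):
    # Simplified paraphrase of the HDP-2.0.6 before-ANY hook: hadoop-env.sh
    # is (re)written only when a NameNode exists in the cluster.
    if not has_namenode:
        # A GLUSTERFS-only cluster takes this branch, so the stock RPM copy of
        # hadoop-env.sh (everything commented out) is left in place and
        # JAVA_HOME is never exported -- the failure reported below.
        return
    with open(os.path.join(hadoop_conf_dir, "hadoop-env.sh"), "w") as fh:
        fh.write(hadoop_env_content)
{code}
On a 2.1.GlusterFS cluster without HDFS, has_namenode is false, which is why /etc/hadoop/conf/hadoop-env.sh stays as the commented-out stock file shown in the report below.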
> Unconfigured env file /etc/hadoop/conf/hadoop-env.sh
> ----------------------------------------------------
>
> Key: AMBARI-8490
> URL: https://issues.apache.org/jira/browse/AMBARI-8490
> Project: Ambari
> Issue Type: Bug
> Affects Versions: 1.7.0
> Environment: RHEL 6, HDP_2.1.GlusterFS stack
> Reporter: Daniel Horak
> Labels: glusterfs, hcfs
>
> I've tried to install HDP 2.1.GlusterFS on RHEL 6 via Ambari 1.7.0, and I'm
> not able to start any service because of {{Error: JAVA_HOME is not set and
> could not be found.}}
> {noformat}
> 2014-11-28 14:08:56,663 - Error while executing command 'start':
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 123, in execute
>     method(env)
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.1.GlusterFS/services/YARN/package/scripts/resourcemanager.py", line 46, in start
>     action='start'
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.1.GlusterFS/services/YARN/package/scripts/service.py", line 45, in service
>     not_if=no_op
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
>     self.env.run()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
>     self.run_action(resource, action)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
>     provider_action()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 241, in action_run
>     raise ex
> Fail: Execution of 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config /etc/hadoop/conf start resourcemanager' returned 1. Error: JAVA_HOME is not set and could not be found.{noformat}
> This is probably caused by the "unconfigured" hadoop-env.sh file in
> /etc/hadoop/conf/ (the whole file is commented out).
> {noformat}
> cat /etc/hadoop/conf/hadoop-env.sh
> # Copyright 2011 The Apache Software Foundation
> #
> # Licensed to the Apache Software Foundation (ASF) under one
> # or more contributor license agreements. See the NOTICE file
> # distributed with this work for additional information
> # regarding copyright ownership. The ASF licenses this file
> # to you under the Apache License, Version 2.0 (the
> # "License"); you may not use this file except in compliance
> # with the License. You may obtain a copy of the License at
> #
> # http://www.apache.org/licenses/LICENSE-2.0
> #
> # Unless required by applicable law or agreed to in writing, software
> # distributed under the License is distributed on an "AS IS" BASIS,
> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> # See the License for the specific language governing permissions and
> # limitations under the License.
> # Set Hadoop-specific environment variables here.
> # The only required environment variable is JAVA_HOME. All others are
> # optional. When running a distributed configuration it is best to
> # set JAVA_HOME in this file, so that it is correctly defined on
> # remote nodes.
> # The java implementation to use.
> #export JAVA_HOME=${JAVA_HOME}
> # The jsvc implementation to use. Jsvc is required to run secure datanodes.
> #export JSVC_HOME=${JSVC_HOME}
> #export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}
> # Extra Java CLASSPATH elements. Automatically insert capacity-scheduler.
> #for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
> # if [ "$HADOOP_CLASSPATH" ]; then
> # export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
> # else
> # export HADOOP_CLASSPATH=$f
> # fi
> #done
> # The maximum amount of heap to use, in MB. Default is 1000.
> #export HADOOP_HEAPSIZE=
> #export HADOOP_NAMENODE_INIT_HEAPSIZE=""
> # Extra Java runtime options. Empty by default.
> #export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"
> # Command specific options appended to HADOOP_OPTS when specified
> #export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
> #export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"
> #export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_SECONDARYNAMENODE_OPTS"
> #export HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS"
> #export HADOOP_PORTMAP_OPTS="-Xmx512m $HADOOP_PORTMAP_OPTS"
> # The following applies to multiple commands (fs, dfs, fsck, distcp etc)
> #export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
> #HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData $HADOOP_JAVA_PLATFORM_OPTS"
> # On secure datanodes, user to run the datanode as after dropping privileges
> #export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}
> # Where log files are stored. $HADOOP_HOME/logs by default.
> #export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER
> # Where log files are stored in the secure data environment.
> #export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER}
> # The directory where pid files are stored. /tmp by default.
> # NOTE: this should be set to a directory that can only be written to by
> # the user that will run the hadoop daemons. Otherwise there is the
> # potential for a symlink attack.
> #export HADOOP_PID_DIR=${HADOOP_PID_DIR}
> #export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}
> # A string representing this instance of hadoop. $USER by default.
> #export HADOOP_IDENT_STRING=$USER
> {noformat}
> {noformat}
> # rpm -qa ambari-*
> ambari-agent-1.7.0-168.x86_64
> ambari-server-1.7.0-168.noarch
> {noformat}