[ https://issues.apache.org/jira/browse/AMBARI-8525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14235592#comment-14235592 ]
Hudson commented on AMBARI-8525:
--------------------------------
SUCCESS: Integrated in Ambari-trunk-Commit #1110 (See [https://builds.apache.org/job/Ambari-trunk-Commit/1110/])
AMBARI-8525. NodeManager start failed on a host that was added (mkdir: No FileSystem for scheme: hdfs) (aonishuk) (aonishuk: http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=0fb990959f598ec04bf0c4cded9611038cc08622)
* ambari-server/src/main/resources/stacks/HDP/2.0.6/services/YARN/metainfo.xml
* ambari-server/src/main/resources/stacks/HDP/2.1/services/YARN/metainfo.xml
* ambari-server/src/main/resources/stacks/HDP/2.2/services/YARN/metainfo.xml
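
Root cause: with only NodeManager selected for the new host, no HDFS client was installed there, so the hadoop CLI on that host had no FileSystem implementation registered for the hdfs:// scheme when the start sequence tried to create directories in HDFS. The committed fix touches each stack's YARN metainfo.xml; below is a minimal sketch of the kind of component dependency stanza these files can declare. The exact stanza is an illustrative assumption, not a quote from the commit; see the linked commit for the real diff.

    <component>
      <name>NODEMANAGER</name>
      <!-- Assumed shape of the fix: a host-scoped dependency so the HDFS
           client (and with it the hdfs:// FileSystem implementation) is
           co-installed on every host that gets a NodeManager. -->
      <dependencies>
        <dependency>
          <name>HDFS/HDFS_CLIENT</name>
          <scope>host</scope>
          <auto-deploy>
            <enabled>true</enabled>
          </auto-deploy>
        </dependency>
      </dependencies>
    </component>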
> NodeManager start failed on a host that was added (mkdir: No FileSystem for scheme: hdfs)
> -----------------------------------------------------------------------------------------
>
> Key: AMBARI-8525
> URL: https://issues.apache.org/jira/browse/AMBARI-8525
> Project: Ambari
> Issue Type: Bug
> Reporter: Andrew Onischuk
> Assignee: Andrew Onischuk
> Fix For: 2.0.0
>
>
> Created a 3-node cluster (HDFS, YARN, MR2, TEZ, GANGLIA, NAGIOS, ZOOKEEPER).
> Added a 4th host through the Add Hosts Wizard.
> Selected only NodeManager on the Assign Slaves page.
> NodeManager Install went fine, but Start failed:
>
> stderr:
> 2014-06-25 02:03:54,423 - Error while executing command 'start':
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 111, in execute
>     method(env)
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/nodemanager.py", line 40, in start
>     self.configure(env) # FOR SECURITY
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/nodemanager.py", line 35, in configure
>     yarn(name="nodemanager")
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/yarn.py", line 61, in yarn
>     params.HdfsDirectory(None, action="create")
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
>     self.env.run()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
>     self.run_action(resource, action)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
>     provider_action()
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_directory.py", line 104, in action_create
>     not_if=format("su - {hdp_hdfs_user} -c 'hadoop fs -ls {dir_list_str}'")
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
>     self.env.run()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
>     self.run_action(resource, action)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
>     provider_action()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 239, in action_run
>     raise ex
> Fail: Execution of 'hadoop fs -mkdir `rpm -q hadoop | grep -q "hadoop-1" || echo "-p"` /app-logs /mapred /mapred/system /mr-history/tmp /mr-history/done && hadoop fs -chmod -R 777 /app-logs && hadoop fs -chmod 777 /mr-history/tmp && hadoop fs -chmod 1777 /mr-history/done && hadoop fs -chown mapred /mapred && hadoop fs -chown hdfs /mapred/system && hadoop fs -chown yarn:hadoop /app-logs && hadoop fs -chown mapred:hadoop /mr-history/tmp /mr-history/done' returned 1. mkdir: No FileSystem for scheme: hdfs
> mkdir: No FileSystem for scheme: hdfs
> mkdir: No FileSystem for scheme: hdfs
> mkdir: No FileSystem for scheme: hdfs
> mkdir: No FileSystem for scheme: hdfs
> stdout:
> 2014-06-25 02:03:50,084 - Directory['/etc/hadoop/conf.empty'] {'owner': 'root', 'group': 'root', 'recursive': True}
> 2014-06-25 02:03:50,088 - Link['/etc/hadoop/conf'] {'not_if': 'ls /etc/hadoop/conf', 'to': '/etc/hadoop/conf.empty'}
> 2014-06-25 02:03:50,113 - Skipping Link['/etc/hadoop/conf'] due to not_if
> 2014-06-25 02:03:50,129 - File['/etc/hadoop/conf/hadoop-env.sh'] {'content': Template('hadoop-env.sh.j2'), 'owner': 'hdfs'}
> 2014-06-25 02:03:50,130 - XmlConfig['core-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'configurations': ...}
> 2014-06-25 02:03:50,135 - Generating config: /etc/hadoop/conf/core-site.xml
> 2014-06-25 02:03:50,135 - File['/etc/hadoop/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None}
> 2014-06-25 02:03:50,136 - Writing File['/etc/hadoop/conf/core-site.xml'] because contents don't match
> 2014-06-25 02:03:50,151 - Execute['/bin/echo 0 > /selinux/enforce'] {'only_if': 'test -f /selinux/enforce'}
> 2014-06-25 02:03:50,162 - Skipping Execute['/bin/echo 0 > /selinux/enforce'] due to only_if
> 2014-06-25 02:03:50,164 - Execute['mkdir -p /usr/lib/hadoop/lib/native/Linux-i386-32; ln -sf /usr/lib/libsnappy.so /usr/lib/hadoop/lib/native/Linux-i386-32/libsnappy.so'] {}
> 2014-06-25 02:03:50,178 - Execute['mkdir -p /usr/lib/hadoop/lib/native/Linux-amd64-64; ln -sf /usr/lib64/libsnappy.so /usr/lib/hadoop/lib/native/Linux-amd64-64/libsnappy.so'] {}
> 2014-06-25 02:03:50,192 - Directory['/var/log/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True}
> 2014-06-25 02:03:50,193 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True}
> 2014-06-25 02:03:50,193 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True}
> 2014-06-25 02:03:50,198 - File['/etc/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
> 2014-06-25 02:03:50,199 - File['/etc/hadoop/conf/health_check'] {'content': Template('health_check-v2.j2'), 'owner': 'hdfs'}
> 2014-06-25 02:03:50,200 - File['/etc/hadoop/conf/log4j.properties'] {'content': '...', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
> 2014-06-25 02:03:50,204 - File['/etc/hadoop/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
> 2014-06-25 02:03:50,205 - File['/etc/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
> 2014-06-25 02:03:50,205 - File['/etc/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
> 2014-06-25 02:03:50,300 - HdfsDirectory['/app-logs'] {'security_enabled': False, 'keytab': [EMPTY], 'conf_dir': '/etc/hadoop/conf', 'hdfs_user': 'hdfs', 'kinit_path_local': '', 'recursive_chmod': True, 'owner': 'yarn', 'group': 'hadoop', 'action': ['create_delayed'], 'mode': 0777}
> 2014-06-25 02:03:50,301 - HdfsDirectory['/mapred'] {'security_enabled': False, 'keytab': [EMPTY], 'conf_dir': '/etc/hadoop/conf', 'hdfs_user': 'hdfs', 'kinit_path_local': '', 'owner': 'mapred', 'action': ['create_delayed']}
> 2014-06-25 02:03:50,301 - HdfsDirectory['/mapred/system'] {'security_enabled': False, 'keytab': [EMPTY], 'conf_dir': '/etc/hadoop/conf', 'hdfs_user': 'hdfs', 'kinit_path_local': '', 'owner': 'hdfs', 'action': ['create_delayed']}
> 2014-06-25 02:03:50,302 - HdfsDirectory['/mr-history/tmp'] {'security_enabled': False, 'keytab': [EMPTY], 'conf_dir': '/etc/hadoop/conf', 'hdfs_user': 'hdfs', 'kinit_path_local': '', 'mode': 0777, 'owner': 'mapred', 'group': 'hadoop', 'action': ['create_delayed']}
> 2014-06-25 02:03:50,302 - HdfsDirectory['/mr-history/done'] {'security_enabled': False, 'keytab': [EMPTY], 'conf_dir': '/etc/hadoop/conf', 'hdfs_user': 'hdfs', 'kinit_path_local': '', 'mode': 01777, 'owner': 'mapred', 'group': 'hadoop', 'action': ['create_delayed']}
> 2014-06-25 02:03:50,302 - HdfsDirectory['None'] {'security_enabled': False, 'keytab': [EMPTY], 'conf_dir': '/etc/hadoop/conf', 'hdfs_user': 'hdfs', 'kinit_path_local': '', 'action': ['create']}
> 2014-06-25 02:03:50,304 - Execute['hadoop fs -mkdir `rpm -q hadoop | grep -q "hadoop-1" || echo "-p"` /app-logs /mapred /mapred/system /mr-history/tmp /mr-history/done && hadoop fs -chmod -R 777 /app-logs && hadoop fs -chmod 777 /mr-history/tmp && hadoop fs -chmod 1777 /mr-history/done && hadoop fs -chown mapred /mapred && hadoop fs -chown hdfs /mapred/system && hadoop fs -chown yarn:hadoop /app-logs && hadoop fs -chown mapred:hadoop /mr-history/tmp /mr-history/done'] {'not_if': "su - hdfs -c 'hadoop fs -ls /app-logs /mapred /mapred/system /mr-history/tmp /mr-history/done'", 'user': 'hdfs'}
> 2014-06-25 02:03:54,423 - Error while executing command 'start':
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 111, in execute
>     method(env)
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/nodemanager.py", line 40, in start
>     self.configure(env) # FOR SECURITY
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/nodemanager.py", line 35, in configure
>     yarn(name="nodemanager")
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/yarn.py", line 61, in yarn
>     params.HdfsDirectory(None, action="create")
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
>     self.env.run()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
>     self.run_action(resource, action)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
>     provider_action()
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_directory.py", line 104, in action_create
>     not_if=format("su - {hdp_hdfs_user} -c 'hadoop fs -ls {dir_list_str}'")
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
>     self.env.run()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
>     self.run_action(resource, action)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
>     provider_action()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 239, in action_run
>     raise ex
> Fail: Execution of 'hadoop fs -mkdir `rpm -q hadoop | grep -q "hadoop-1" || echo "-p"` /app-logs /mapred /mapred/system /mr-history/tmp /mr-history/done && hadoop fs -chmod -R 777 /app-logs && hadoop fs -chmod 777 /mr-history/tmp && hadoop fs -chmod 1777 /mr-history/done && hadoop fs -chown mapred /mapred && hadoop fs -chown hdfs /mapred/system && hadoop fs -chown yarn:hadoop /app-logs && hadoop fs -chown mapred:hadoop /mr-history/tmp /mr-history/done' returned 1. mkdir: No FileSystem for scheme: hdfs
> mkdir: No FileSystem for scheme: hdfs
> mkdir: No FileSystem for scheme: hdfs
> mkdir: No FileSystem for scheme: hdfs
> mkdir: No FileSystem for scheme: hdfs
>
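
For reference, "No FileSystem for scheme: hdfs" is the generic Hadoop client error raised when no FileSystem implementation is registered for the hdfs:// scheme, usually because the hadoop-hdfs jars are missing from the classpath. That matches a host provisioned with YARN packages only. A minimal check on the affected host, assuming the stock hadoop CLI is on PATH:

    # Empty output here means no hadoop-hdfs jar is on the classpath,
    # so 'hadoop fs' cannot resolve hdfs:// URIs on this host.
    hadoop classpath | tr ':' '\n' | grep hadoop-hdfs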
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)