Andrew Onischuk created AMBARI-24072:
----------------------------------------
Summary: NN cannot start because it does not have permission to create a folder
Key: AMBARI-24072
URL: https://issues.apache.org/jira/browse/AMBARI-24072
Project: Ambari
Issue Type: Bug
Reporter: Andrew Onischuk
Assignee: Andrew Onischuk
Fix For: 2.7.0
Attachments: AMBARI-24072.patch
STR:
1) Install an Ambari cluster with a custom user configuration via Blueprint (BP)
Cluster: <http://172.27.14.154:8080>
Actual result: the NameNode cannot start because it does not have permission to
create the folder "/var/run/hadoop/cstm-hdfs".
It looks like some script changed the permissions of /var/run/hadoop:
[root@ctr-e138-1518143905142-357962-01-000006 ~]# ls -la /var/run/ | grep "hadoop"
drwxr-xr-x 2 cstm-ams hadoop 4096 Jun 11 01:56 ambari-metrics-monitor
drwxr-xr-x 6 root root 4096 Jun 11 01:56 hadoop
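The mode bits alone explain the failure: /var/run/hadoop is root:root with drwxr-xr-x (0755), so any non-root user, including cstm-hdfs, is denied mkdir inside it. A minimal, hypothetical reproduction of the mode-bit behaviour (using a temp directory instead of /var/run/hadoop, since changing ownership requires root; `can_create_subdir` is an illustrative helper, not Ambari code):

```python
import os
import tempfile

def can_create_subdir(parent, name="cstm-hdfs"):
    """Try to create <parent>/<name>, as 'hdfs namenode -format' does for
    its pid directory, and report whether the mkdir succeeded."""
    target = os.path.join(parent, name)
    try:
        os.mkdir(target)
    except PermissionError:
        return False
    os.rmdir(target)  # clean up so the check is repeatable
    return True

demo = tempfile.mkdtemp()  # stands in for /var/run/hadoop
os.chmod(demo, 0o555)      # r-x only: what drwxr-xr-x grants a non-owner
blocked = not can_create_subdir(demo)  # True for unprivileged users; root bypasses mode bits
os.chmod(demo, 0o775)      # group-writable, e.g. root:hadoop with 0775
assert can_create_subdir(demo)
```

Note that root ignores mode bits entirely, which is why the demo only asserts the permissive case; for the cstm-hdfs user on the cluster the 0755/0555 case is the one that fails.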
NN Logs:
stderr:
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HDFS/package/scripts/namenode.py", line 414, in <module>
    NameNode().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 353, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HDFS/package/scripts/namenode.py", line 138, in start
    upgrade_suspended=params.upgrade_suspended, env=env)
  File "/usr/lib/ambari-agent/lib/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HDFS/package/scripts/hdfs_namenode.py", line 115, in namenode
    format_namenode()
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HDFS/package/scripts/hdfs_namenode.py", line 369, in format_namenode
    logoutput=True
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
    returns=self.resource.returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'hdfs --config /usr/hdp/3.0.0.0-1469/hadoop/conf namenode -format -nonInteractive' returned 1. ######## Hortonworks #############
This is MOTD message, added for testing in qe infra
WARNING: /var/run/hadoop/cstm-hdfs does not exist. Creating.
mkdir: cannot create directory ‘/var/run/hadoop/cstm-hdfs’: Permission
denied
ERROR: Unable to create /var/run/hadoop/cstm-hdfs. Aborting.
stdout:
2018-06-11 10:00:42,196 - Stack Feature Version Info: Cluster Stack=3.0,
Command Stack=None, Command Version=3.0.0.0-1469 -> 3.0.0.0-1469
2018-06-11 10:00:42,308 - Using hadoop conf dir:
/usr/hdp/3.0.0.0-1469/hadoop/conf
2018-06-11 10:00:43,323 - Stack Feature Version Info: Cluster Stack=3.0,
Command Stack=None, Command Version=3.0.0.0-1469 -> 3.0.0.0-1469
2018-06-11 10:00:43,353 - Using hadoop conf dir:
/usr/hdp/3.0.0.0-1469/hadoop/conf
2018-06-11 10:00:43,358 - Group['cstm-users'] {}
2018-06-11 10:00:43,364 - Group['cstm-ranger'] {}
2018-06-11 10:00:43,364 - Group['cstm-zeppelin'] {}
2018-06-11 10:00:43,365 - Group['hdfs'] {}
2018-06-11 10:00:43,365 - Group['cstm-livy'] {}
2018-06-11 10:00:43,365 - Group['hadoop'] {}
2018-06-11 10:00:43,366 - Group['cstm-knox'] {}
2018-06-11 10:00:43,366 - Group['cstm-spark'] {}
2018-06-11 10:00:43,368 - User['yarn-ats'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-06-11 10:00:43,371 - User['cstm-ranger'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['cstm-ranger', 'hadoop'], 'uid': None}
2018-06-11 10:00:43,373 - User['cstm-hive'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-06-11 10:00:43,376 - User['cstm-sqoop'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-06-11 10:00:43,378 - User['cstm-ams'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-06-11 10:00:43,381 - User['cstm-yarn'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-06-11 10:00:43,384 - User['cstm-tez'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['cstm-users', 'hadoop'], 'uid': None}
2018-06-11 10:00:43,386 - User['cstm-atlas'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-06-11 10:00:43,389 - User['cstm-storm'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-06-11 10:00:43,391 - User['cstm-knox'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'cstm-knox'], 'uid': None}
2018-06-11 10:00:43,394 - User['cstm-kafka'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-06-11 10:00:43,397 - User['cstm-logsearch'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-06-11 10:00:43,399 - User['cstm-infra-solr'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-06-11 10:00:43,402 - User['cstm-hbase'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-06-11 10:00:43,404 - User['cstm-hdfs'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}
2018-06-11 10:00:43,407 - User['cstm-mr'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-06-11 10:00:43,409 - User['ambari-qa'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['cstm-users', 'hadoop'], 'uid': None}
2018-06-11 10:00:43,412 - User['cstm-zeppelin'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['cstm-zeppelin', 'hadoop'], 'uid':
None}
2018-06-11 10:00:43,414 - User['cstm-zookeeper'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-06-11 10:00:43,417 - User['cstm-livy'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['cstm-livy', 'hadoop'], 'uid': None}
2018-06-11 10:00:43,419 - User['cstm-oozie'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['cstm-users', 'hadoop'], 'uid': None}
2018-06-11 10:00:43,422 - User['cstm-spark'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'cstm-spark'], 'uid': None}
2018-06-11 10:00:43,424 - File['/var/lib/ambari-agent/tmp/changeUid.sh']
{'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-06-11 10:00:43,549 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh
ambari-qa
/tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa
0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-06-11 10:00:43,558 - Skipping
Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa
/tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa
0'] due to not_if
2018-06-11 10:00:43,558 - Directory['/tmp/hbase-hbase'] {'owner':
'cstm-hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2018-06-11 10:00:43,696 - File['/var/lib/ambari-agent/tmp/changeUid.sh']
{'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-06-11 10:00:43,819 - File['/var/lib/ambari-agent/tmp/changeUid.sh']
{'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-06-11 10:00:43,942 - call['/var/lib/ambari-agent/tmp/changeUid.sh
cstm-hbase'] {}
2018-06-11 10:00:43,953 - call returned (0, '1817')
2018-06-11 10:00:43,954 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh
cstm-hbase
/home/cstm-hbase,/tmp/cstm-hbase,/usr/bin/cstm-hbase,/var/log/cstm-hbase,/tmp/hbase-hbase
1817'] {'not_if': '(test $(id -u cstm-hbase) -gt 1000) || (false)'}
2018-06-11 10:00:43,961 - Skipping
Execute['/var/lib/ambari-agent/tmp/changeUid.sh cstm-hbase
/home/cstm-hbase,/tmp/cstm-hbase,/usr/bin/cstm-hbase,/var/log/cstm-hbase,/tmp/hbase-hbase
1817'] due to not_if
2018-06-11 10:00:43,962 - Group['hdfs'] {}
2018-06-11 10:00:43,963 - User['cstm-hdfs'] {'fetch_nonlocal_groups': True,
'groups': ['hdfs', 'hadoop', u'hdfs']}
2018-06-11 10:00:43,964 - FS Type: HDFS
2018-06-11 10:00:43,964 - Directory['/etc/hadoop'] {'mode': 0755}
2018-06-11 10:00:44,026 -
File['/usr/hdp/3.0.0.0-1469/hadoop/conf/hadoop-env.sh'] {'content':
InlineTemplate(...), 'owner': 'root', 'group': 'hadoop'}
2018-06-11 10:00:44,116 -
Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner':
'cstm-hdfs', 'group': 'hadoop', 'mode': 01777}
2018-06-11 10:00:44,225 - Execute[('setenforce', '0')] {'not_if': '(! which
getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo':
True, 'only_if': 'test -f /selinux/enforce'}
2018-06-11 10:00:44,235 - Skipping Execute[('setenforce', '0')] due to
not_if
2018-06-11 10:00:44,236 - Directory['/grid/0/log/hdfs'] {'owner': 'root',
'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2018-06-11 10:00:44,436 - Directory['/var/run/hadoop'] {'owner': 'root',
'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2018-06-11 10:00:44,595 - Directory['/tmp/hadoop-cstm-hdfs'] {'owner':
'cstm-hdfs', 'create_parents': True, 'cd_access': 'a'}
2018-06-11 10:00:44,719 -
File['/usr/hdp/3.0.0.0-1469/hadoop/conf/commons-logging.properties']
{'content': Template('commons-logging.properties.j2'), 'owner': 'root'}
2018-06-11 10:00:44,807 -
File['/usr/hdp/3.0.0.0-1469/hadoop/conf/health_check'] {'content':
Template('health_check.j2'), 'owner': 'root'}
2018-06-11 10:00:44,897 -
File['/usr/hdp/3.0.0.0-1469/hadoop/conf/log4j.properties'] {'content':
InlineTemplate(...), 'owner': 'cstm-hdfs', 'group': 'hadoop', 'mode': 0644}
2018-06-11 10:00:45,020 -
File['/usr/hdp/3.0.0.0-1469/hadoop/conf/hadoop-metrics2.properties']
{'content': InlineTemplate(...), 'owner': 'cstm-hdfs', 'group': 'hadoop'}
2018-06-11 10:00:45,112 -
File['/usr/hdp/3.0.0.0-1469/hadoop/conf/task-log4j.properties'] {'content':
StaticFile('task-log4j.properties'), 'mode': 0755}
2018-06-11 10:00:45,233 -
File['/usr/hdp/3.0.0.0-1469/hadoop/conf/configuration.xsl'] {'owner':
'cstm-hdfs', 'group': 'hadoop'}
2018-06-11 10:00:45,306 - File['/etc/hadoop/conf/topology_mappings.data']
{'owner': 'cstm-hdfs', 'content': Template('topology_mappings.data.j2'),
'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2018-06-11 10:00:45,409 - File['/etc/hadoop/conf/topology_script.py']
{'content': StaticFile('topology_script.py'), 'only_if': 'test -d
/etc/hadoop/conf', 'mode': 0755}
2018-06-11 10:00:45,545 - Skipping unlimited key JCE policy check and setup
since the Java VM is not managed by Ambari
2018-06-11 10:00:46,488 - Using hadoop conf dir:
/usr/hdp/3.0.0.0-1469/hadoop/conf
2018-06-11 10:00:46,490 - Stack Feature Version Info: Cluster Stack=3.0,
Command Stack=None, Command Version=3.0.0.0-1469 -> 3.0.0.0-1469
2018-06-11 10:00:46,614 - Using hadoop conf dir:
/usr/hdp/3.0.0.0-1469/hadoop/conf
2018-06-11 10:00:46,658 - Directory['/etc/security/limits.d'] {'owner':
'root', 'create_parents': True, 'group': 'root'}
2018-06-11 10:00:46,721 - File['/etc/security/limits.d/hdfs.conf']
{'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode':
0644}
2018-06-11 10:00:46,836 -
File['/usr/hdp/3.0.0.0-1469/hadoop/conf/hdfs_dn_jaas.conf'] {'content':
Template('hdfs_dn_jaas.conf.j2'), 'owner': 'cstm-hdfs', 'group': 'hadoop'}
2018-06-11 10:00:46,925 -
File['/usr/hdp/3.0.0.0-1469/hadoop/conf/hdfs_nn_jaas.conf'] {'content':
Template('hdfs_nn_jaas.conf.j2'), 'owner': 'cstm-hdfs', 'group': 'hadoop'}
2018-06-11 10:00:47,011 - XmlConfig['hadoop-policy.xml'] {'owner':
'cstm-hdfs', 'group': 'hadoop', 'conf_dir':
'/usr/hdp/3.0.0.0-1469/hadoop/conf', 'configuration_attributes': {},
'configurations': ...}
2018-06-11 10:00:47,026 - Generating config:
/usr/hdp/3.0.0.0-1469/hadoop/conf/hadoop-policy.xml
2018-06-11 10:00:47,026 -
File['/usr/hdp/3.0.0.0-1469/hadoop/conf/hadoop-policy.xml'] {'owner':
'cstm-hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None,
'encoding': 'UTF-8'}
2018-06-11 10:00:47,118 - XmlConfig['ssl-client.xml'] {'owner':
'cstm-hdfs', 'group': 'hadoop', 'conf_dir':
'/usr/hdp/3.0.0.0-1469/hadoop/conf', 'configuration_attributes': {},
'configurations': ...}
2018-06-11 10:00:47,132 - Generating config:
/usr/hdp/3.0.0.0-1469/hadoop/conf/ssl-client.xml
2018-06-11 10:00:47,133 -
File['/usr/hdp/3.0.0.0-1469/hadoop/conf/ssl-client.xml'] {'owner': 'cstm-hdfs',
'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding':
'UTF-8'}
2018-06-11 10:00:47,234 -
Directory['/usr/hdp/3.0.0.0-1469/hadoop/conf/secure'] {'owner': 'root',
'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2018-06-11 10:00:47,489 - XmlConfig['ssl-client.xml'] {'owner':
'cstm-hdfs', 'group': 'hadoop', 'conf_dir':
'/usr/hdp/3.0.0.0-1469/hadoop/conf/secure', 'configuration_attributes': {},
'configurations': ...}
2018-06-11 10:00:47,503 - Generating config:
/usr/hdp/3.0.0.0-1469/hadoop/conf/secure/ssl-client.xml
2018-06-11 10:00:47,504 -
File['/usr/hdp/3.0.0.0-1469/hadoop/conf/secure/ssl-client.xml'] {'owner':
'cstm-hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None,
'encoding': 'UTF-8'}
2018-06-11 10:00:47,602 - XmlConfig['ssl-server.xml'] {'owner':
'cstm-hdfs', 'group': 'hadoop', 'conf_dir':
'/usr/hdp/3.0.0.0-1469/hadoop/conf', 'configuration_attributes': {},
'configurations': ...}
2018-06-11 10:00:47,615 - Generating config:
/usr/hdp/3.0.0.0-1469/hadoop/conf/ssl-server.xml
2018-06-11 10:00:47,615 -
File['/usr/hdp/3.0.0.0-1469/hadoop/conf/ssl-server.xml'] {'owner': 'cstm-hdfs',
'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding':
'UTF-8'}
2018-06-11 10:00:47,712 - XmlConfig['hdfs-site.xml'] {'owner': 'cstm-hdfs',
'group': 'hadoop', 'conf_dir': '/usr/hdp/3.0.0.0-1469/hadoop/conf',
'configuration_attributes': {u'final':
{u'dfs.datanode.failed.volumes.tolerated': u'true', u'dfs.datanode.data.dir':
u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir':
u'true', u'dfs.webhdfs.enabled': u'true'}}, 'configurations': ...}
2018-06-11 10:00:47,725 - Generating config:
/usr/hdp/3.0.0.0-1469/hadoop/conf/hdfs-site.xml
2018-06-11 10:00:47,725 -
File['/usr/hdp/3.0.0.0-1469/hadoop/conf/hdfs-site.xml'] {'owner': 'cstm-hdfs',
'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding':
'UTF-8'}
2018-06-11 10:00:47,889 - XmlConfig['core-site.xml'] {'group': 'hadoop',
'conf_dir': '/usr/hdp/3.0.0.0-1469/hadoop/conf', 'xml_include_file': None,
'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS':
u'true'}}, 'owner': 'cstm-hdfs', 'configurations': ...}
2018-06-11 10:00:47,906 - Generating config:
/usr/hdp/3.0.0.0-1469/hadoop/conf/core-site.xml
2018-06-11 10:00:47,906 -
File['/usr/hdp/3.0.0.0-1469/hadoop/conf/core-site.xml'] {'owner': 'cstm-hdfs',
'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding':
'UTF-8'}
2018-06-11 10:00:48,038 - Writing
File['/usr/hdp/3.0.0.0-1469/hadoop/conf/core-site.xml'] because contents don't
match
2018-06-11 10:00:48,093 - File['/usr/hdp/3.0.0.0-1469/hadoop/conf/slaves']
{'content': Template('slaves.j2'), 'owner': 'root'}
2018-06-11 10:00:48,181 - Repository['HDP-3.0-repo-1'] {'append_to_file':
False, 'base_url':
'http://s3.amazonaws.com/dev.hortonworks.com/HDP/centos7/3.x/BUILDS/3.0.0.0-1469',
'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template':
'[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list
%}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif
%}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1',
'mirror_list': None}
2018-06-11 10:00:48,214 - File['/etc/yum.repos.d/ambari-hdp-1.repo']
{'content':
'[HDP-3.0-repo-1]\nname=HDP-3.0-repo-1\nbaseurl=http://s3.amazonaws.com/dev.hortonworks.com/HDP/centos7/3.x/BUILDS/3.0.0.0-1469\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-06-11 10:00:48,287 - Writing
File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2018-06-11 10:00:48,304 - Repository['HDP-3.0-GPL-repo-1']
{'append_to_file': True, 'base_url':
'http://s3.amazonaws.com/dev.hortonworks.com/HDP-GPL/centos7/3.x/BUILDS/3.0.0.0-1469',
'action': ['create'], 'components': [u'HDP-GPL', 'main'], 'repo_template':
'[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list
%}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif
%}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1',
'mirror_list': None}
2018-06-11 10:00:48,329 - File['/etc/yum.repos.d/ambari-hdp-1.repo']
{'content':
'[HDP-3.0-repo-1]\nname=HDP-3.0-repo-1\nbaseurl=http://s3.amazonaws.com/dev.hortonworks.com/HDP/centos7/3.x/BUILDS/3.0.0.0-1469\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-3.0-GPL-repo-1]\nname=HDP-3.0-GPL-repo-1\nbaseurl=http://s3.amazonaws.com/dev.hortonworks.com/HDP-GPL/centos7/3.x/BUILDS/3.0.0.0-1469\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-06-11 10:00:48,397 - Writing
File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2018-06-11 10:00:48,417 - Repository['HDP-UTILS-1.1.0.22-repo-1']
{'append_to_file': True, 'base_url':
'http://s3.amazonaws.com/dev.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos7',
'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template':
'[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list
%}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif
%}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1',
'mirror_list': None}
2018-06-11 10:00:48,446 - File['/etc/yum.repos.d/ambari-hdp-1.repo']
{'content':
'[HDP-3.0-repo-1]\nname=HDP-3.0-repo-1\nbaseurl=http://s3.amazonaws.com/dev.hortonworks.com/HDP/centos7/3.x/BUILDS/3.0.0.0-1469\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-3.0-GPL-repo-1]\nname=HDP-3.0-GPL-repo-1\nbaseurl=http://s3.amazonaws.com/dev.hortonworks.com/HDP-GPL/centos7/3.x/BUILDS/3.0.0.0-1469\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.22-repo-1]\nname=HDP-UTILS-1.1.0.22-repo-1\nbaseurl=http://s3.amazonaws.com/dev.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-06-11 10:00:48,518 - Writing
File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2018-06-11 10:00:48,540 - Stack Feature Version Info: Cluster Stack=3.0,
Command Stack=None, Command Version=3.0.0.0-1469 -> 3.0.0.0-1469
2018-06-11 10:00:48,551 - Package['lzo'] {'retry_on_repo_unavailability':
False, 'retry_count': 5}
2018-06-11 10:00:49,196 - Skipping installation of existing package lzo
2018-06-11 10:00:49,196 - Package['hadooplzo_3_0_0_0_1469']
{'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-06-11 10:00:49,402 - Skipping installation of existing package
hadooplzo_3_0_0_0_1469
2018-06-11 10:00:49,402 - Package['hadooplzo_3_0_0_0_1469-native']
{'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-06-11 10:00:49,857 - Skipping installation of existing package
hadooplzo_3_0_0_0_1469-native
2018-06-11 10:00:49,861 - Directory['/grid/0/hadoop/hdfs/namenode']
{'owner': 'cstm-hdfs', 'group': 'hadoop', 'create_parents': True, 'mode': 0755,
'cd_access': 'a'}
2018-06-11 10:00:50,128 -
Directory['/usr/lib/ambari-logsearch-logfeeder/conf'] {'create_parents': True,
'mode': 0755, 'cd_access': 'a'}
2018-06-11 10:00:50,315 - Generate Log Feeder config file:
/usr/lib/ambari-logsearch-logfeeder/conf/input.config-hdfs.json
2018-06-11 10:00:50,315 -
File['/usr/lib/ambari-logsearch-logfeeder/conf/input.config-hdfs.json']
{'content': Template('input.config-hdfs.json.j2'), 'mode': 0644}
2018-06-11 10:00:50,410 - Skipping setting up secure ZNode ACL for HFDS as
it's supported only for NameNode HA mode.
2018-06-11 10:00:50,415 - Called service start with upgrade_type: None
2018-06-11 10:00:50,415 - HDFS: Setup ranger: command retry not enabled
thus skipping if ranger admin is down !
2018-06-11 10:00:50,417 - call['ambari-python-wrap /usr/bin/hdp-select
status hadoop-client'] {'timeout': 20}
2018-06-11 10:00:50,450 - call returned (0, 'hadoop-client - 3.0.0.0-1469')
2018-06-11 10:00:50,451 - RangeradminV2: Skip ranger admin if it's down !
2018-06-11 10:00:50,493 - checked_call['/usr/bin/kinit -c
/var/lib/ambari-agent/tmp/curl_krb_cache/ranger_admin_calls_cstm-hdfs_cc_dadca887d91334850d23f3a2088dac346b6b85f0706813c3f2212147
-kt /etc/security/keytabs/nn.service.keytab
nn/[email protected] > /dev/null']
{'user': 'cstm-hdfs'}
2018-06-11 10:00:50,611 - checked_call returned (0, '######## Hortonworks
#############\nThis is MOTD message, added for testing in qe infra')
2018-06-11 10:00:50,612 - call['ambari-sudo.sh su cstm-hdfs -l -s /bin/bash
-c 'curl --location-trusted -k --negotiate -u : -b
/var/lib/ambari-agent/tmp/cookies/b6b261de-4ab4-4c87-a271-bbaa9fc306f4 -c
/var/lib/ambari-agent/tmp/cookies/b6b261de-4ab4-4c87-a271-bbaa9fc306f4 -w
'"'"'%{http_code}'"'"'
http://ctr-e138-1518143905142-357962-01-000006.hwx.site:6080/login.jsp
--connect-timeout 10 --max-time 12 -o /dev/null 1>/tmp/tmpzBm4Ow
2>/tmp/tmprJvRDV''] {'quiet': False, 'env': {'KRB5CCNAME':
'/var/lib/ambari-agent/tmp/curl_krb_cache/ranger_admin_calls_cstm-hdfs_cc_dadca887d91334850d23f3a2088dac346b6b85f0706813c3f2212147'}}
2018-06-11 10:00:50,729 - call returned (0, '######## Hortonworks
#############\nThis is MOTD message, added for testing in qe infra')
2018-06-11 10:00:50,729 - get_user_call_output returned (0, u'200', u' %
Total % Received % Xferd Average Speed Time Time Time Current\n
Dload Upload Total Spent Left Speed\n\r
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:--
0\r100 3630 100 3630 0 0 1954k 0 --:--:-- --:--:-- --:--:--
3544k')
2018-06-11 10:00:50,731 - call['/usr/bin/klist -s
/var/lib/ambari-agent/tmp/curl_krb_cache/ranger_admin_calls_cstm-hdfs_cc_dadca887d91334850d23f3a2088dac346b6b85f0706813c3f2212147']
{'user': 'cstm-hdfs'}
2018-06-11 10:00:50,843 - call returned (0, '######## Hortonworks
#############\nThis is MOTD message, added for testing in qe infra')
2018-06-11 10:00:50,844 - call['ambari-sudo.sh su cstm-hdfs -l -s /bin/bash
-c 'curl --location-trusted -k --negotiate -u : -b
/var/lib/ambari-agent/tmp/cookies/49ac670c-d187-4b4c-8c13-32bc9f8ac060 -c
/var/lib/ambari-agent/tmp/cookies/49ac670c-d187-4b4c-8c13-32bc9f8ac060
'"'"'http://ctr-e138-1518143905142-357962-01-000006.hwx.site:6080/service/public/v2/api/service?serviceName=cl1_hadoop&serviceType=hdfs&isEnabled=true'"'"'
--connect-timeout 10 --max-time 12 -X GET 1>/tmp/tmpskeV7D 2>/tmp/tmpIUYUU7'']
{'quiet': False, 'env': {'KRB5CCNAME':
'/var/lib/ambari-agent/tmp/curl_krb_cache/ranger_admin_calls_cstm-hdfs_cc_dadca887d91334850d23f3a2088dac346b6b85f0706813c3f2212147'}}
2018-06-11 10:00:50,984 - call returned (0, '######## Hortonworks
#############\nThis is MOTD message, added for testing in qe infra')
2018-06-11 10:00:50,985 - get_user_call_output returned (0,
u'[{"id":2,"guid":"92620a51-bd3f-44f1-aed6-29a7c80809ec","isEnabled":true,"createdBy":"cstm-hdfs","updatedBy":"cstm-hdfs","createTime":1528682426000,"updateTime":1528682426000,"version":1,"type":"hdfs","name":"cl1_hadoop","description":"hdfs
repo","configs":{"commonNameForCertificate":"-","dfs.secondary.namenode.kerberos.principal":"nn/[email protected]","hadoop.security.authentication":"kerberos","hadoop.security.auth_to_local":"RULE:[1:$1@$0]([email protected])s/.*/ambari-qa/\\nRULE:[1:$1@$0]([email protected])s/.*/cstm-hbase/\\nRULE:[1:$1@$0]([email protected])s/.*/cstm-hdfs/\\nRULE:[1:$1@$0]([email protected])s/.*/cstm-spark/\\nRULE:[1:$1@$0]([email protected])s/.*/cstm-zeppelin/\\nRULE:[1:$1@$0]([email protected])s/.*/yarn-ats/\\nRULE:[1:$1@$0](.*@EXAMPLE.COM)s/@.*//\\nRULE:[2:$1@$0]([email protected])s/.*/activity_analyzer/\\nRULE:[2:$1@$0]([email protected])s/.*/activity_explorer/\\nRULE:[2:$1@$0]([email protected])s/.*/cstm-ams/\\nRULE:[2:$1@$0]([email protected])s/.*/cstm-ams/\\nRULE:[2:$1@$0]([email protected])s/.*/cstm-ams/\\nRULE:[2:$1@$0]([email protected])s/.*/cstm-atlas/\\nRULE:[2:$1@$0]([email protected])s/.*/yarn-ats/\\nRULE:[2:$1@$0]([email protected])s/.*/cstm-knox/\\nRULE:[2:$1@$0]([email protected])s/.*/cstm-livy/\\nRULE:[2:$1@$0]([email protected])s/.*/cstm-hdfs/\\nRULE:[2:$1@$0]([email protected])s/.*/cstm-hbase/\\nRULE:[2:$1@$0]([email protected])s/.*/cstm-hive/\\nRULE:[2:$1@$0]([email protected])s/.*/cstm-mr/\\nRULE:[2:$1@$0]([email protected])s/.*/cstm-hdfs/\\nRULE:[2:$1@$0]([email protected])s/.*/cstm-yarn/\\nRULE:[2:$1@$0]([email protected])s/.*/cstm-hdfs/\\nRULE:[2:$1@$0]([email protected])s/.*/cstm-oozie/\\nRULE:[2:$1@$0]([email protected])s/.*/cstm-ranger/\\nRULE:[2:$1@$0]([email protected])s/.*/rangertagsync/\\nRULE:[2:$1@$0]([email protected])s/.*/rangerusersync/\\nRULE:[2:$1@$0]([email protected])s/.*/cstm-yarn/\\nRULE:[2:$1@$0]([email protected])s/.*/cstm-yarn/\\nDEFAULT","dfs.datanode.kerberos.principal":"dn/[email protected]","tag.download.auth.users":"cstm-hdfs","password":"*****","policy.download.auth.users":"cstm-hdfs","hadoop.rpc.protection":"authentication","dfs.namenode.kerberos.principal":"nn/[email 
protected]","fs.default.name":"hdfs://ctr-e138-1518143905142-357962-01-000006.hwx.site:8020","hadoop.security.authorization":"true","username":"hadoop"},"policyVersion":3,"policyUpdateTime":1528682427000,"tagVersion":1,"tagUpdateTime":1528682426000}]',
u' % Total % Received % Xferd Average Speed Time Time Time
Current\n Dload Upload Total Spent Left
Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:--
--:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- --:--:--
--:--:-- 0\n\r 0 0 0 2603 0 0 92268 0 --:--:--
--:--:-- --:--:-- 92268')
2018-06-11 10:00:50,986 - Hdfs Repository cl1_hadoop exist
2018-06-11 10:00:50,989 -
File['/usr/hdp/3.0.0.0-1469/hadoop/conf/ranger-security.xml'] {'content':
InlineTemplate(...), 'owner': 'cstm-hdfs', 'group': 'hadoop', 'mode': 0644}
2018-06-11 10:00:51,064 - Writing
File['/usr/hdp/3.0.0.0-1469/hadoop/conf/ranger-security.xml'] because contents
don't match
2018-06-11 10:00:51,124 - Directory['/etc/ranger/cl1_hadoop'] {'owner':
'cstm-hdfs', 'create_parents': True, 'group': 'hadoop', 'mode': 0775,
'cd_access': 'a'}
2018-06-11 10:00:51,306 - Directory['/etc/ranger/cl1_hadoop/policycache']
{'owner': 'cstm-hdfs', 'group': 'hadoop', 'create_parents': True, 'mode': 0775,
'cd_access': 'a'}
2018-06-11 10:00:51,515 -
File['/etc/ranger/cl1_hadoop/policycache/hdfs_cl1_hadoop.json'] {'owner':
'cstm-hdfs', 'group': 'hadoop', 'mode': 0644}
2018-06-11 10:00:51,605 - XmlConfig['ranger-hdfs-audit.xml'] {'group':
'hadoop', 'conf_dir': '/usr/hdp/3.0.0.0-1469/hadoop/conf', 'mode': 0744,
'configuration_attributes': {}, 'owner': 'cstm-hdfs', 'configurations': ...}
2018-06-11 10:00:51,617 - Generating config:
/usr/hdp/3.0.0.0-1469/hadoop/conf/ranger-hdfs-audit.xml
2018-06-11 10:00:51,618 -
File['/usr/hdp/3.0.0.0-1469/hadoop/conf/ranger-hdfs-audit.xml'] {'owner':
'cstm-hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0744,
'encoding': 'UTF-8'}
2018-06-11 10:00:51,750 - XmlConfig['ranger-hdfs-security.xml'] {'group':
'hadoop', 'conf_dir': '/usr/hdp/3.0.0.0-1469/hadoop/conf', 'mode': 0744,
'configuration_attributes': {}, 'owner': 'cstm-hdfs', 'configurations': ...}
2018-06-11 10:00:51,767 - Generating config:
/usr/hdp/3.0.0.0-1469/hadoop/conf/ranger-hdfs-security.xml
2018-06-11 10:00:51,768 -
File['/usr/hdp/3.0.0.0-1469/hadoop/conf/ranger-hdfs-security.xml'] {'owner':
'cstm-hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0744,
'encoding': 'UTF-8'}
2018-06-11 10:00:51,892 - XmlConfig['ranger-policymgr-ssl.xml'] {'group':
'hadoop', 'conf_dir': '/usr/hdp/3.0.0.0-1469/hadoop/conf', 'mode': 0744,
'configuration_attributes': {}, 'owner': 'cstm-hdfs', 'configurations': ...}
2018-06-11 10:00:51,905 - Generating config:
/usr/hdp/3.0.0.0-1469/hadoop/conf/ranger-policymgr-ssl.xml
2018-06-11 10:00:51,905 -
File['/usr/hdp/3.0.0.0-1469/hadoop/conf/ranger-policymgr-ssl.xml'] {'owner':
'cstm-hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0744,
'encoding': 'UTF-8'}
2018-06-11 10:00:52,023 -
Execute[(u'/usr/hdp/3.0.0.0-1469/ranger-hdfs-plugin/ranger_credential_helper.py',
'-l', u'/usr/hdp/3.0.0.0-1469/ranger-hdfs-plugin/install/lib/*', '-f',
'/etc/ranger/cl1_hadoop/cred.jceks', '-k', 'sslKeyStore', '-v', [PROTECTED],
'-c', '1')] {'logoutput': True, 'environment': {'JAVA_HOME':
u'/usr/lib/jvm/java-openjdk'}, 'sudo': True}
Using Java:/usr/lib/jvm/java-openjdk/bin/java
Alias sslKeyStore created successfully!
2018-06-11 10:00:53,259 -
Execute[(u'/usr/hdp/3.0.0.0-1469/ranger-hdfs-plugin/ranger_credential_helper.py',
'-l', u'/usr/hdp/3.0.0.0-1469/ranger-hdfs-plugin/install/lib/*', '-f',
'/etc/ranger/cl1_hadoop/cred.jceks', '-k', 'sslTrustStore', '-v', [PROTECTED],
'-c', '1')] {'logoutput': True, 'environment': {'JAVA_HOME':
u'/usr/lib/jvm/java-openjdk'}, 'sudo': True}
Using Java:/usr/lib/jvm/java-openjdk/bin/java
Alias sslTrustStore created successfully!
2018-06-11 10:00:54,461 - File['/etc/ranger/cl1_hadoop/cred.jceks']
{'owner': 'cstm-hdfs', 'group': 'hadoop', 'mode': 0640}
2018-06-11 10:00:54,556 - File['/etc/ranger/cl1_hadoop/.cred.jceks.crc']
{'owner': 'cstm-hdfs', 'only_if': 'test -e
/etc/ranger/cl1_hadoop/.cred.jceks.crc', 'group': 'hadoop', 'mode': 0640}
2018-06-11 10:00:54,652 - File['/etc/hadoop/conf/dfs.exclude'] {'owner':
'cstm-hdfs', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'}
2018-06-11 10:00:54,749 - call[('ls', u'/grid/0/hadoop/hdfs/namenode')] {}
2018-06-11 10:00:54,756 - call returned (0, '')
2018-06-11 10:00:54,757 - Execute['ls /grid/0/hadoop/hdfs/namenode | wc -l
| grep -q ^0$'] {}
2018-06-11 10:00:54,765 - Execute['hdfs --config
/usr/hdp/3.0.0.0-1469/hadoop/conf namenode -format -nonInteractive']
{'logoutput': True, 'path': ['/usr/hdp/3.0.0.0-1469/hadoop/bin'], 'user':
'cstm-hdfs'}
######## Hortonworks #############
This is MOTD message, added for testing in qe infra
WARNING: /var/run/hadoop/cstm-hdfs does not exist. Creating.
mkdir: cannot create directory ‘/var/run/hadoop/cstm-hdfs’: Permission
denied
ERROR: Unable to create /var/run/hadoop/cstm-hdfs. Aborting.
Command failed after 1 tries
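For context, the log above shows the agent creating the parent directory as Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', ...}, which leaves it writable only by root before the format command runs as cstm-hdfs. A possible direction for the fix, sketched against the same resource_management Directory resource visible in the log (the literal paths and the 'hadoop' group are taken from the log; this is not the actual patch and is not runnable outside an Ambari agent):

```python
# Sketch only -- this runs inside Ambari stack scripts, not standalone.
from resource_management.core.resources.system import Directory

# Option 1: make the pid prefix group-writable so 'hadoop' group members
# (cstm-hdfs is one, per the log) can create their own subdirectories.
Directory('/var/run/hadoop',      # hadoop_pid_dir_prefix in the stack scripts
          owner='root',
          group='hadoop',
          mode=0775,
          create_parents=True,
          cd_access='a')

# Option 2: pre-create the per-user pid directory the format step needs,
# so 'hdfs namenode -format' never has to mkdir it itself.
Directory('/var/run/hadoop/cstm-hdfs',   # <prefix>/<hdfs_user>
          owner='cstm-hdfs',
          group='hadoop',
          create_parents=True,
          cd_access='a')
```

Either option would remove the race between the root-owned parent and the unprivileged format command; which one AMBARI-24072.patch actually takes is not shown here.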
Artifacts:
http://testqelog.s3.amazonaws.com/qelogs/nat/107592/ambari-blueprints/split-6/nat-yc-r7-gfgs-ambari-blueprints-6/log_tree/index.html
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)