[ https://issues.apache.org/jira/browse/AMBARI-17062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15320445#comment-15320445 ]
Andrew Onischuk commented on AMBARI-17062:
------------------------------------------

When hadoop-qa ran there was a problem with imports (not related to the patch). The patch itself does not introduce new test failures; the failures seen were caused by other patches.

{noformat}
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Ambari Main ....................................... SUCCESS [4.345s]
[INFO] Apache Ambari Project POM ......................... SUCCESS [0.087s]
[INFO] Ambari Web ........................................ SUCCESS [25.179s]
[INFO] Ambari Views ...................................... SUCCESS [1.111s]
[INFO] Ambari Admin View ................................. SUCCESS [8.241s]
[INFO] ambari-metrics .................................... SUCCESS [0.287s]
[INFO] Ambari Metrics Common ............................. SUCCESS [0.566s]
[INFO] Ambari Server ..................................... SUCCESS [2:23.097s]
[INFO] Ambari Agent ...................................... SUCCESS [44.156s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 3:47.881s
[INFO] Finished at: Tue Jun 07 13:10:03 EEST 2016
[INFO] Final Memory: 97M/2876M
[INFO] ------------------------------------------------------------------------
{noformat}

> Namenode failed to start while installing a cluster from UI
> -----------------------------------------------------------
>
>                 Key: AMBARI-17062
>                 URL: https://issues.apache.org/jira/browse/AMBARI-17062
>             Project: Ambari
>          Issue Type: Bug
>            Reporter: Andrew Onischuk
>            Assignee: Andrew Onischuk
>             Fix For: 2.4.0
>
>         Attachments: AMBARI-17062.patch
>
>
> STR: Install a cluster from UI with HDFS, ZK and YARN.
> Namenode fails to start.
> The cluster is live here: <http://172.22.124.223:8080/>
>
>     Traceback (most recent call last):
>       File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 414, in <module>
>         NameNode().execute()
>       File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 257, in execute
>         method(env)
>       File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 101, in start
>         upgrade_suspended=params.upgrade_suspended, env=env)
>       File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
>         return fn(*args, **kwargs)
>       File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 155, in namenode
>         create_log_dir=True
>       File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 269, in service
>         Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
>       File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
>         self.env.run()
>       File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
>         self.run_action(resource, action)
>       File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
>         provider_action()
>       File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 273, in action_run
>         tries=self.resource.tries, try_sleep=self.resource.try_sleep)
>       File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
>         result = function(command, **kwargs)
>       File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
>         tries=tries, try_sleep=try_sleep)
>       File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
>         result = _call(command, **kwargs_copy)
>       File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 293, in _call
>         raise Fail(err_msg)
>     resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode'' returned 1.
>     ######## Hortonworks #############
>     This is MOTD message, added for testing in qe infra
>     starting namenode, logging to /grid/0/log/hdfs/hdfs/hadoop-hdfs-namenode-os-r7-vsrfou-ambari-serv-11r-1.out
>     log4j:ERROR setFile(null,true) call failed.
>     java.io.FileNotFoundException: ./nm-audit.log (Permission denied)
>         at java.io.FileOutputStream.open(Native Method)
>         at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
>         at java.io.FileOutputStream.<init>(FileOutputStream.java:142)
>         at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
>         at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
>         at org.apache.log4j.DailyRollingFileAppender.activateOptions(DailyRollingFileAppender.java:223)
>         at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
>         at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
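The stack trace shows log4j's DailyRollingFileAppender resolving its audit log file to the relative path `./nm-audit.log`, i.e. the daemon's current working directory, which the hdfs user cannot write to. A minimal sketch of the failure mode and its usual remedy follows; the appender name `DRFAAUDIT` and the `${hadoop.log.dir}` substitution are illustrative assumptions based on stock Hadoop log4j configurations, not the literal contents of AMBARI-17062.patch:

```properties
# Hypothetical sketch of the failing configuration. If the File property
# has no directory prefix (or its substitution variable is unset), log4j
# opens the file relative to the process CWD, producing
# "java.io.FileNotFoundException: ./nm-audit.log (Permission denied)"
# whenever the CWD is not writable by the hdfs user.
#
#   log4j.appender.DRFAAUDIT.File=nm-audit.log          # bad: relative path
#
# Remedy: anchor the audit log under the daemon's log directory, which
# hadoop-env exports as the hadoop.log.dir system property.
log4j.appender.DRFAAUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/nm-audit.log
log4j.appender.DRFAAUDIT.DatePattern=.yyyy-MM-dd
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %m%n
```

Because log4j evaluates `${...}` substitutions at appender activation, the absolute path only helps if the daemon actually sets `hadoop.log.dir` (hadoop-daemon.sh normally passes it via `-Dhadoop.log.dir=...`); an unset variable would collapse the path back to a relative one.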