Hi,

In Ambari 1.7.0, the pid file /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid was deprecated in favor of /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid for naming consistency. You may want to find the timeline server process with ps, kill it, delete both pid files above, and then attempt to start it again.
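If it helps, those steps look roughly like the following on the node hosting the App Timeline Server (a sketch only; the yarn-daemon.sh invocation is the one Ambari runs in your log, so adjust the paths if your layout differs):

    # find the running timeline server process, if any
    ps -ef | grep -i '[t]imelineserver'

    # stop it (substitute the pid reported by ps above)
    kill <pid>

    # remove both stale pid files
    rm -f /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid
    rm -f /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid

    # then start the App Timeline Server again from Ambari, or manually as the yarn user:
    sudo -u yarn /usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config /etc/hadoop/conf start timelineserver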
Let me know if this works.

Thanks,
Alejandro

From: guxiaobo1982 <[email protected]>
Reply-To: user <[email protected]>
Date: Monday, February 9, 2015 at 3:30 AM
To: user <[email protected]>
Subject: App timeline server can't start

Hi,

In order to use Spark with Hive, I tried to install HDP 2.1 on a single node using Ambari, but failed to start the App Timeline Server with the following errors.

stderr: /var/lib/ambari-agent/data/errors-227.txt

2015-02-09 19:26:15,636 - Error while executing command 'start':
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 123, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/application_timeline_server.py", line 42, in start
    service('timelineserver', action='start')
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/service.py", line 59, in service
    initial_wait=5
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 241, in action_run
    raise ex
Fail: Execution of 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid` >/dev/null 2>&1' returned 1.

stdout: /var/lib/ambari-agent/data/output-227.txt

2015-02-09 19:26:08,800 - Group['hadoop'] {'ignore_failures': False}
2015-02-09 19:26:08,801 - Modifying group hadoop
2015-02-09 19:26:08,838 - Group['nobody'] {'ignore_failures': False}
2015-02-09 19:26:08,838 - Modifying group nobody
2015-02-09 19:26:08,863 - Group['users'] {'ignore_failures': False}
2015-02-09 19:26:08,863 - Modifying group users
2015-02-09 19:26:08,888 - Group['nagios'] {'ignore_failures': False}
2015-02-09 19:26:08,889 - Modifying group nagios
2015-02-09 19:26:08,915 - User['hive'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-02-09 19:26:08,915 - Modifying user hive
2015-02-09 19:26:08,964 - User['oozie'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'users']}
2015-02-09 19:26:08,966 - Modifying user oozie
2015-02-09 19:26:08,984 - User['nobody'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'nobody']}
2015-02-09 19:26:08,985 - Modifying user nobody
2015-02-09 19:26:08,998 - User['nagios'] {'gid': 'nagios', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-02-09 19:26:08,999 - Modifying user nagios
2015-02-09 19:26:09,011 - User['ambari-qa'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'users']}
2015-02-09 19:26:09,011 - Modifying user ambari-qa
2015-02-09 19:26:09,025 - User['flume'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-02-09 19:26:09,026 - Modifying user flume
2015-02-09 19:26:09,041 - User['hdfs'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-02-09 19:26:09,041 - Modifying user hdfs
2015-02-09 19:26:09,056 - User['storm'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-02-09 19:26:09,056 - Modifying user storm
2015-02-09 19:26:09,071 - User['mapred'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-02-09 19:26:09,071 - Modifying user mapred
2015-02-09 19:26:09,085 - User['hbase'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-02-09 19:26:09,085 - Modifying user hbase
2015-02-09 19:26:09,099 - User['tez'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'users']}
2015-02-09 19:26:09,100 - Modifying user tez
2015-02-09 19:26:09,112 - User['zookeeper'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-02-09 19:26:09,112 - Modifying user zookeeper
2015-02-09 19:26:09,126 - User['sqoop'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-02-09 19:26:09,126 - Modifying user sqoop
2015-02-09 19:26:09,140 - User['yarn'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-02-09 19:26:09,140 - Modifying user yarn
2015-02-09 19:26:09,155 - User['hcat'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-02-09 19:26:09,156 - Modifying user hcat
2015-02-09 19:26:09,170 - File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2015-02-09 19:26:09,171 - Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 2>/dev/null'] {'not_if': 'test $(id -u ambari-qa) -gt 1000'}
2015-02-09 19:26:09,185 - Skipping Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 2>/dev/null'] due to not_if
2015-02-09 19:26:09,185 - File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2015-02-09 19:26:09,187 - Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/hadoop/hbase 2>/dev/null'] {'not_if': 'test $(id -u hbase) -gt 1000'}
2015-02-09 19:26:09,199 - Skipping Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/hadoop/hbase 2>/dev/null'] due to not_if
2015-02-09 19:26:09,200 - Directory['/etc/hadoop/conf.empty'] {'owner': 'root', 'group': 'root', 'recursive': True}
2015-02-09 19:26:09,200 - Link['/etc/hadoop/conf'] {'not_if': 'ls /etc/hadoop/conf', 'to': '/etc/hadoop/conf.empty'}
2015-02-09 19:26:09,212 - Skipping Link['/etc/hadoop/conf'] due to not_if
2015-02-09 19:26:09,221 - File['/etc/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs'}
2015-02-09 19:26:09,234 - Execute['/bin/echo 0 > /selinux/enforce'] {'only_if': 'test -f /selinux/enforce'}
2015-02-09 19:26:09,261 - Directory['/var/log/hadoop'] {'owner': 'root', 'group': 'hadoop', 'mode': 0775, 'recursive': True}
2015-02-09 19:26:09,262 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True}
2015-02-09 19:26:09,262 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True}
2015-02-09 19:26:09,266 - File['/etc/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2015-02-09 19:26:09,267 - File['/etc/hadoop/conf/health_check'] {'content': Template('health_check-v2.j2'), 'owner': 'hdfs'}
2015-02-09 19:26:09,267 - File['/etc/hadoop/conf/log4j.properties'] {'content': '...', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2015-02-09 19:26:09,271 - File['/etc/hadoop/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
2015-02-09 19:26:09,271 - File['/etc/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2015-02-09 19:26:09,271 - File['/etc/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2015-02-09 19:26:09,370 - Directory['/var/run/hadoop-yarn/yarn'] {'owner': 'yarn', 'group': 'hadoop', 'recursive': True}
2015-02-09 19:26:09,371 - Directory['/var/log/hadoop-yarn/yarn'] {'owner': 'yarn', 'group': 'hadoop', 'recursive': True}
2015-02-09 19:26:09,371 - Directory['/var/run/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'group': 'hadoop', 'recursive': True}
2015-02-09 19:26:09,371 - Directory['/var/log/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'group': 'hadoop', 'recursive': True}
2015-02-09 19:26:09,371 - Directory['/var/log/hadoop-yarn'] {'owner': 'yarn', 'ignore_failures': True, 'recursive': True}
2015-02-09 19:26:09,372 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'mode': 0644, 'configuration_attributes': ..., 'owner': 'hdfs', 'configurations': ...}
2015-02-09 19:26:09,379 - Generating config: /etc/hadoop/conf/core-site.xml
2015-02-09 19:26:09,379 - File['/etc/hadoop/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2015-02-09 19:26:09,380 - Writing File['/etc/hadoop/conf/core-site.xml'] because contents don't match
2015-02-09 19:26:09,380 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'mode': 0644, 'configuration_attributes': ..., 'owner': 'yarn', 'configurations': ...}
2015-02-09 19:26:09,386 - Generating config: /etc/hadoop/conf/mapred-site.xml
2015-02-09 19:26:09,386 - File['/etc/hadoop/conf/mapred-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2015-02-09 19:26:09,386 - Writing File['/etc/hadoop/conf/mapred-site.xml'] because contents don't match
2015-02-09 19:26:09,387 - Changing owner for /etc/hadoop/conf/mapred-site.xml from 1035 to yarn
2015-02-09 19:26:09,387 - XmlConfig['yarn-site.xml'] {'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'mode': 0644, 'configuration_attributes': ..., 'owner': 'yarn', 'configurations': ...}
2015-02-09 19:26:09,392 - Generating config: /etc/hadoop/conf/yarn-site.xml
2015-02-09 19:26:09,393 - File['/etc/hadoop/conf/yarn-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2015-02-09 19:26:09,393 - Writing File['/etc/hadoop/conf/yarn-site.xml'] because contents don't match
2015-02-09 19:26:09,393 - XmlConfig['capacity-scheduler.xml'] {'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'mode': 0644, 'configuration_attributes': ..., 'owner': 'yarn', 'configurations': ...}
2015-02-09 19:26:09,399 - Generating config: /etc/hadoop/conf/capacity-scheduler.xml
2015-02-09 19:26:09,399 - File['/etc/hadoop/conf/capacity-scheduler.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2015-02-09 19:26:09,399 - Writing File['/etc/hadoop/conf/capacity-scheduler.xml'] because contents don't match
2015-02-09 19:26:09,400 - Changing owner for /etc/hadoop/conf/capacity-scheduler.xml from 1033 to yarn
2015-02-09 19:26:09,401 - Directory['/hadoop/yarn/timeline'] {'owner': 'yarn', 'group': 'hadoop', 'recursive': True}
2015-02-09 19:26:09,401 - File['/etc/hadoop/conf/yarn.exclude'] {'owner': 'yarn', 'group': 'hadoop'}
2015-02-09 19:26:09,403 - File['/etc/security/limits.d/yarn.conf'] {'content': Template('yarn.conf.j2'), 'mode': 0644}
2015-02-09 19:26:09,404 - File['/etc/security/limits.d/mapreduce.conf'] {'content': Template('mapreduce.conf.j2'), 'mode': 0644}
2015-02-09 19:26:09,407 - File['/etc/hadoop/conf/yarn-env.sh'] {'content': InlineTemplate(...), 'owner': 'yarn', 'group': 'hadoop', 'mode': 0755}
2015-02-09 19:26:09,408 - File['/etc/hadoop/conf/mapred-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs'}
2015-02-09 19:26:09,410 - File['/etc/hadoop/conf/taskcontroller.cfg'] {'content': Template('taskcontroller.cfg.j2'), 'owner': 'hdfs'}
2015-02-09 19:26:09,410 - XmlConfig['mapred-site.xml'] {'owner': 'mapred', 'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'configuration_attributes': ..., 'configurations': ...}
2015-02-09 19:26:09,415 - Generating config: /etc/hadoop/conf/mapred-site.xml
2015-02-09 19:26:09,415 - File['/etc/hadoop/conf/mapred-site.xml'] {'owner': 'mapred', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2015-02-09 19:26:09,416 - Changing owner for /etc/hadoop/conf/mapred-site.xml from 1040 to mapred
2015-02-09 19:26:09,416 - XmlConfig['capacity-scheduler.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'configuration_attributes': ..., 'configurations': ...}
2015-02-09 19:26:09,421 - Generating config: /etc/hadoop/conf/capacity-scheduler.xml
2015-02-09 19:26:09,421 - File['/etc/hadoop/conf/capacity-scheduler.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2015-02-09 19:26:09,421 - Changing owner for /etc/hadoop/conf/capacity-scheduler.xml from 1040 to hdfs
2015-02-09 19:26:09,422 - File['/etc/hadoop/conf/ssl-client.xml.example'] {'owner': 'mapred', 'group': 'hadoop'}
2015-02-09 19:26:09,422 - File['/etc/hadoop/conf/ssl-server.xml.example'] {'owner': 'mapred', 'group': 'hadoop'}
2015-02-09 19:26:09,423 - File['/var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid'] {'action': ['delete'], 'not_if': 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid` >/dev/null 2>&1'}
2015-02-09 19:26:09,464 - Deleting File['/var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid']
2015-02-09 19:26:09,464 - Execute['ulimit -c unlimited; export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config /etc/hadoop/conf start timelineserver'] {'not_if': 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid` >/dev/null 2>&1', 'user': 'yarn'}
2015-02-09 19:26:10,556 - Execute['ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid` >/dev/null 2>&1'] {'initial_wait': 5, 'not_if': 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid` >/dev/null 2>&1', 'user': 'yarn'}
2015-02-09 19:26:15,636 - Error while executing command 'start':
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 123, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/application_timeline_server.py", line 42, in start
    service('timelineserver', action='start')
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/service.py", line 59, in service
    initial_wait=5
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 241, in action_run
    raise ex
Fail: Execution of 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid` >/dev/null 2>&1' returned 1.
