Hi,
Could you check the timeline server log located at
/var/log/hadoop-yarn/yarn/yarn-yarn-timelineserver*.log to see what caused
the failure?
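
If it helps, here is a rough sketch of what I would run on that node (the
paths are taken from the Ambari output you pasted; the exact log file name
may differ on your machine):

    # Is the timeline server ("historyserver" pid file) actually running?
    ls /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid \
      && ps -p "$(cat /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid)"

    # If not, look for the startup error in the timeline server log
    grep -iE 'error|exception|fatal' \
      /var/log/hadoop-yarn/yarn/yarn-yarn-timelineserver*.log | tail -n 50

The first command is essentially the same liveness check that returned 1 in
your Ambari output, so it should fail the same way; the grep should then show
why the daemon exited.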

On Mon, Nov 3, 2014 at 3:56 PM, guxiaobo1982 <[email protected]> wrote:

> The installed HDFS is version 2.4.0.2.1.5.0-695,
> rc11220208321e1835912fde828f1038eedb1afae
>
>
> ------------------ Original ------------------
> From: "guxiaobo1982" <[email protected]>
> Sent: Monday, Nov 3, 2014 3:48 PM
> To: "user" <[email protected]>
> Subject: timeline service installed by ambari can't start
>
> Hi,
>
> I used Ambari 1.6.1 to install HDP 2.1 as a single-node deployment, but the
> timeline service can't start with the following error:
> stderr:   /var/lib/ambari-agent/data/errors-96.txt
>
> 2014-11-03 13:28:03,199 - Error while executing command 'restart':
> Traceback (most recent call last):
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 111, in execute
>     method(env)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 212, in restart
>     self.start(env)
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/application_timeline_server.py",
>  line 42, in start
>     service('historyserver', action='start')
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/service.py",
>  line 51, in service
>     initial_wait=5
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 148, in __init__
>     self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 149, in run
>     self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 115, in run_action
>     provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
>  line 239, in action_run
>     raise ex
> Fail: Execution of 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid 
> >/dev/null 2>&1 && ps `cat 
> /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid` >/dev/null 2>&1' 
> returned 1.
>
> stdout:   /var/lib/ambari-agent/data/output-96.txt
>
> 2014-11-03 13:27:56,524 - Execute['mkdir -p /tmp/HDP-artifacts/;     curl -kf 
> -x "" --retry 10     
> http://ambari.bh.com:8080/resources//UnlimitedJCEPolicyJDK7.zip -o 
> /tmp/HDP-artifacts//UnlimitedJCEPolicyJDK7.zip'] {'environment': ..., 
> 'not_if': 'test -e /tmp/HDP-artifacts//UnlimitedJCEPolicyJDK7.zip', 
> 'ignore_failures': True, 'path': ['/bin', '/usr/bin/']}
> 2014-11-03 13:27:56,543 - Skipping Execute['mkdir -p /tmp/HDP-artifacts/;     
> curl -kf -x "" --retry 10     
> http://ambari.bh.com:8080/resources//UnlimitedJCEPolicyJDK7.zip -o 
> /tmp/HDP-artifacts//UnlimitedJCEPolicyJDK7.zip'] due to not_if
> 2014-11-03 13:27:56,618 - Directory['/etc/hadoop/conf.empty'] {'owner': 
> 'root', 'group': 'root', 'recursive': True}
> 2014-11-03 13:27:56,620 - Link['/etc/hadoop/conf'] {'not_if': 'ls 
> /etc/hadoop/conf', 'to': '/etc/hadoop/conf.empty'}
> 2014-11-03 13:27:56,634 - Skipping Link['/etc/hadoop/conf'] due to not_if
> 2014-11-03 13:27:56,644 - File['/etc/hadoop/conf/hadoop-env.sh'] {'content': 
> Template('hadoop-env.sh.j2'), 'owner': 'hdfs'}
> 2014-11-03 13:27:56,646 - XmlConfig['core-site.xml'] {'owner': 'hdfs', 
> 'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'configurations': ...}
> 2014-11-03 13:27:56,650 - Generating config: /etc/hadoop/conf/core-site.xml
> 2014-11-03 13:27:56,650 - File['/etc/hadoop/conf/core-site.xml'] {'owner': 
> 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None}
> 2014-11-03 13:27:56,651 - Writing File['/etc/hadoop/conf/core-site.xml'] 
> because contents don't match
> 2014-11-03 13:27:56,662 - Execute['/bin/echo 0 > /selinux/enforce'] 
> {'only_if': 'test -f /selinux/enforce'}
> 2014-11-03 13:27:56,683 - Execute['mkdir -p 
> /usr/lib/hadoop/lib/native/Linux-i386-32; ln -sf /usr/lib/libsnappy.so 
> /usr/lib/hadoop/lib/native/Linux-i386-32/libsnappy.so'] {}
> 2014-11-03 13:27:56,698 - Execute['mkdir -p 
> /usr/lib/hadoop/lib/native/Linux-amd64-64; ln -sf /usr/lib64/libsnappy.so 
> /usr/lib/hadoop/lib/native/Linux-amd64-64/libsnappy.so'] {}
> 2014-11-03 13:27:56,709 - Directory['/var/log/hadoop'] {'owner': 'root', 
> 'group': 'root', 'recursive': True}
> 2014-11-03 13:27:56,710 - Directory['/var/run/hadoop'] {'owner': 'root', 
> 'group': 'root', 'recursive': True}
> 2014-11-03 13:27:56,710 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 
> 'recursive': True}
> 2014-11-03 13:27:56,714 - File['/etc/hadoop/conf/commons-logging.properties'] 
> {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
> 2014-11-03 13:27:56,716 - File['/etc/hadoop/conf/health_check'] {'content': 
> Template('health_check-v2.j2'), 'owner': 'hdfs'}
> 2014-11-03 13:27:56,717 - File['/etc/hadoop/conf/log4j.properties'] 
> {'content': '...', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
> 2014-11-03 13:27:56,720 - File['/etc/hadoop/conf/hadoop-metrics2.properties'] 
> {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
> 2014-11-03 13:27:56,720 - File['/etc/hadoop/conf/task-log4j.properties'] 
> {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
> 2014-11-03 13:27:56,721 - File['/etc/hadoop/conf/configuration.xsl'] 
> {'owner': 'hdfs', 'group': 'hadoop'}
> 2014-11-03 13:27:56,803 - Execute['export 
> HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && 
> /usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config /etc/hadoop/conf stop 
> historyserver'] {'user': 'yarn'}
> 2014-11-03 13:27:56,924 - Execute['rm -f 
> /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid'] {'user': 'yarn'}
> 2014-11-03 13:27:56,955 - Directory['/var/run/hadoop-yarn/yarn'] {'owner': 
> 'yarn', 'group': 'hadoop', 'recursive': True}
> 2014-11-03 13:27:56,956 - Directory['/var/log/hadoop-yarn/yarn'] {'owner': 
> 'yarn', 'group': 'hadoop', 'recursive': True}
> 2014-11-03 13:27:56,956 - Directory['/var/run/hadoop-mapreduce/mapred'] 
> {'owner': 'mapred', 'group': 'hadoop', 'recursive': True}
> 2014-11-03 13:27:56,956 - Directory['/var/log/hadoop-mapreduce/mapred'] 
> {'owner': 'mapred', 'group': 'hadoop', 'recursive': True}
> 2014-11-03 13:27:56,956 - Directory['/hadoop/yarn/local'] {'owner': 'yarn', 
> 'ignore_failures': True, 'recursive': True}
> 2014-11-03 13:27:56,956 - Directory['/hadoop/yarn/log'] {'owner': 'yarn', 
> 'ignore_failures': True, 'recursive': True}
> 2014-11-03 13:27:56,957 - Directory['/var/log/hadoop-yarn'] {'owner': 'yarn', 
> 'ignore_failures': True, 'recursive': True}
> 2014-11-03 13:27:56,957 - XmlConfig['core-site.xml'] {'owner': 'hdfs', 
> 'group': 'hadoop', 'mode': 0644, 'conf_dir': '/etc/hadoop/conf', 
> 'configurations': ...}
> 2014-11-03 13:27:56,963 - Generating config: /etc/hadoop/conf/core-site.xml
> 2014-11-03 13:27:56,963 - File['/etc/hadoop/conf/core-site.xml'] {'owner': 
> 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644}
> 2014-11-03 13:27:56,963 - XmlConfig['mapred-site.xml'] {'owner': 'yarn', 
> 'group': 'hadoop', 'mode': 0644, 'conf_dir': '/etc/hadoop/conf', 
> 'configurations': ...}
> 2014-11-03 13:27:56,966 - Generating config: /etc/hadoop/conf/mapred-site.xml
> 2014-11-03 13:27:56,966 - File['/etc/hadoop/conf/mapred-site.xml'] {'owner': 
> 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644}
> 2014-11-03 13:27:56,967 - Writing File['/etc/hadoop/conf/mapred-site.xml'] 
> because contents don't match
> 2014-11-03 13:27:56,967 - Changing owner for /etc/hadoop/conf/mapred-site.xml 
> from 1022 to yarn
> 2014-11-03 13:27:56,967 - XmlConfig['yarn-site.xml'] {'owner': 'yarn', 
> 'group': 'hadoop', 'mode': 0644, 'conf_dir': '/etc/hadoop/conf', 
> 'configurations': ...}
> 2014-11-03 13:27:56,969 - Generating config: /etc/hadoop/conf/yarn-site.xml
> 2014-11-03 13:27:56,969 - File['/etc/hadoop/conf/yarn-site.xml'] {'owner': 
> 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644}
> 2014-11-03 13:27:56,970 - Writing File['/etc/hadoop/conf/yarn-site.xml'] 
> because contents don't match
> 2014-11-03 13:27:56,971 - XmlConfig['capacity-scheduler.xml'] {'owner': 
> 'yarn', 'group': 'hadoop', 'mode': 0644, 'conf_dir': '/etc/hadoop/conf', 
> 'configurations': ...}
> 2014-11-03 13:27:56,974 - Generating config: 
> /etc/hadoop/conf/capacity-scheduler.xml
> 2014-11-03 13:27:56,974 - File['/etc/hadoop/conf/capacity-scheduler.xml'] 
> {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 
> 0644}
> 2014-11-03 13:27:56,975 - Writing 
> File['/etc/hadoop/conf/capacity-scheduler.xml'] because contents don't match
> 2014-11-03 13:27:56,975 - Changing owner for 
> /etc/hadoop/conf/capacity-scheduler.xml from 1021 to yarn
> 2014-11-03 13:27:56,975 - Directory['/hadoop/yarn/timeline'] {'owner': 
> 'yarn', 'group': 'hadoop', 'recursive': True}
> 2014-11-03 13:27:56,975 - File['/etc/hadoop/conf/yarn.exclude'] {'owner': 
> 'yarn', 'group': 'hadoop'}
> 2014-11-03 13:27:56,977 - File['/etc/security/limits.d/yarn.conf'] 
> {'content': Template('yarn.conf.j2'), 'mode': 0644}
> 2014-11-03 13:27:56,980 - File['/etc/security/limits.d/mapreduce.conf'] 
> {'content': Template('mapreduce.conf.j2'), 'mode': 0644}
> 2014-11-03 13:27:56,982 - File['/etc/hadoop/conf/yarn-env.sh'] {'content': 
> Template('yarn-env.sh.j2'), 'owner': 'yarn', 'group': 'hadoop', 'mode': 0755}
> 2014-11-03 13:27:56,984 - File['/etc/hadoop/conf/mapred-env.sh'] {'content': 
> Template('mapred-env.sh.j2'), 'owner': 'hdfs'}
> 2014-11-03 13:27:56,985 - File['/etc/hadoop/conf/taskcontroller.cfg'] 
> {'content': Template('taskcontroller.cfg.j2'), 'owner': 'hdfs'}
> 2014-11-03 13:27:56,986 - XmlConfig['mapred-site.xml'] {'owner': 'mapred', 
> 'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'configurations': ...}
> 2014-11-03 13:27:56,988 - Generating config: /etc/hadoop/conf/mapred-site.xml
> 2014-11-03 13:27:56,988 - File['/etc/hadoop/conf/mapred-site.xml'] {'owner': 
> 'mapred', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None}
> 2014-11-03 13:27:56,988 - Changing owner for /etc/hadoop/conf/mapred-site.xml 
> from 1020 to mapred
> 2014-11-03 13:27:56,988 - XmlConfig['capacity-scheduler.xml'] {'owner': 
> 'hdfs', 'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'configurations': 
> ...}
> 2014-11-03 13:27:56,991 - Generating config: 
> /etc/hadoop/conf/capacity-scheduler.xml
> 2014-11-03 13:27:56,991 - File['/etc/hadoop/conf/capacity-scheduler.xml'] 
> {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 
> None}
> 2014-11-03 13:27:56,992 - Changing owner for 
> /etc/hadoop/conf/capacity-scheduler.xml from 1020 to hdfs
> 2014-11-03 13:27:56,992 - File['/etc/hadoop/conf/ssl-client.xml.example'] 
> {'owner': 'mapred', 'group': 'hadoop'}
> 2014-11-03 13:27:56,992 - File['/etc/hadoop/conf/ssl-server.xml.example'] 
> {'owner': 'mapred', 'group': 'hadoop'}
> 2014-11-03 13:27:56,993 - Execute['export 
> HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && 
> /usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config /etc/hadoop/conf start 
> historyserver'] {'not_if': 'ls 
> /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid >/dev/null 2>&1 && ps 
> `cat /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid` >/dev/null 2>&1', 
> 'user': 'yarn'}
> 2014-11-03 13:27:58,089 - Execute['ls 
> /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid >/dev/null 2>&1 && ps 
> `cat /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid` >/dev/null 2>&1'] 
> {'initial_wait': 5, 'not_if': 'ls 
> /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid >/dev/null 2>&1 && ps 
> `cat /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid` >/dev/null 2>&1', 
> 'user': 'yarn'}
> 2014-11-03 13:28:03,199 - Error while executing command 'restart':
> Traceback (most recent call last):
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 111, in execute
>     method(env)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 212, in restart
>     self.start(env)
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/application_timeline_server.py",
>  line 42, in start
>     service('historyserver', action='start')
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/service.py",
>  line 51, in service
>     initial_wait=5
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 148, in __init__
>     self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 149, in run
>     self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 115, in run_action
>     provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
>  line 239, in action_run
>     raise ex
> Fail: Execution of 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid 
> >/dev/null 2>&1 && ps `cat 
> /var/run/hadoop-yarn/yarn/yarn-yarn-historyserver.pid` >/dev/null 2>&1' 
> returned 1.
>
> It seems this is a known issue according to
> http://docs.hortonworks.com/HDPDocuments/Ambari-1.6.1.0/bk_releasenotes_ambari_1.6.1/content/ch_relnotes-ambari-1.6.1.0-knownissues.html
>
> I checked my environment: yarn.timeline-service.store-class is set to the
> default value of org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore.
> I can't determine which version of HDP ambari-server has installed, so I
> tried both
> org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.LeveldbTimelineStore
> and org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore, but both
> failed with the same problem. Can you help with this? Another question: how
> can I determine which version of HDP is installed?
>
> Thanks
>
>

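Regarding your second question: the version string you pasted above
(2.4.0.2.1.5.0-695) should already identify the stack; by HDP's versioning
convention the part appended to the Apache Hadoop version (2.1.5.0, build
695) appears to be the HDP build. A rough way to double-check on the node
(assuming stock HDP packages on an RPM-based OS and the default
/etc/hadoop/conf layout):

    # Apache Hadoop version plus the HDP build suffix
    hadoop version

    # HDP build as recorded by the package manager
    rpm -qa | grep '^hadoop' | sort | head

    # Which timeline store class the live configuration actually contains
    grep -A 1 'yarn.timeline-service.store-class' /etc/hadoop/conf/yarn-site.xml

The last grep also confirms which LeveldbTimelineStore class your running
yarn-site.xml ends up with, which should help map your setup onto the known
issue you linked.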

-- 
Cheers
-MJ
