Hello Sumit,
Thanks for your reply.
I am using Hadoop 1.2.0 and Ambari 1.4.4.
The datanode log file shows the following error:
STARTUP_MSG: version = 1.2.0.1.3.2.0-111
STARTUP_MSG: build = git://c64-s8/ on branch comanche-branch-1 -r 3e43bec958e627d53f02d2842f6fac24a93110a9; compiled by 'jenkins' on Mon Aug 19 18:34:32 PDT 2013
STARTUP_MSG: java = 1.6.0_31
************************************************************/
2014-02-26 23:34:12,010 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2014-02-26 23:34:12,051 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2014-02-26 23:34:12,052 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2014-02-26 23:34:12,052 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2014-02-26 23:34:12,137 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2014-02-26 23:34:12,436 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.FileNotFoundException: /hdfs/hadoop/hdfs/data/storage (Permission denied)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.isConversionNeeded(DataStorage.java:194)
at org.apache.hadoop.hdfs.server.common.Storage.checkConversionNeeded(Storage.java:689)
at org.apache.hadoop.hdfs.server.common.Storage.access$000(Storage.java:57)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:458)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:111)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:414)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
2014-02-26 23:34:12,438 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
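From the stack trace, the Permission denied on /hdfs/hadoop/hdfs/data/storage looks like the datanode process cannot read its data directory. For what it is worth, here is a rough sketch of the checks I intend to run, assuming the datanode runs as the hdfs user (group hadoop) and that /hdfs/hadoop/hdfs/data is the configured dfs.data.dir; the path is taken from the log above and the user/group are the usual defaults, so please correct me if they do not apply here:

# check current ownership of the data directory named in the error
ls -ld /hdfs/hadoop/hdfs/data /hdfs/hadoop/hdfs/data/storage

# confirm which user the datanode is actually started as (assumed: hdfs)
ps -ef | grep [d]atanode

# if the ownership is wrong, hand the directory back to hdfs:hadoop
# (assumed user:group) and restart the datanode from Ambari
chown -R hdfs:hadoop /hdfs/hadoop/hdfs/data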
Thanks,
Akshatha
------- Original Message -------
Sender : Sumit Mohanty <[email protected]>
Date : Feb 27, 2014 02:07 (GMT+09:00)
Title : Re: Error installing datanode and tasktracker
Hello All,
I am getting the following error while installing datanode and tasktracker. Any help/suggestion is appreciated.
err: /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Service[datanode]/Hdp::Exec[sleep 5; ls /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid` >/dev/null 2>&1]/Exec[sleep 5; ls /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid` >/dev/null 2>&1]/returns: change from notrun to 0 failed: sleep 5; ls /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid` >/dev/null 2>&1 returned 1 instead of one of [0] at /var/lib/ambari-agent/puppet/modules/hdp/manifests/init.pp:487
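As far as I can tell, the failing Exec is only a liveness check: it waits five seconds, then looks for the datanode pid file and verifies that the process recorded in it is still running. The same check can be repeated by hand on the node (pid-file path taken from the error above) to see whether the datanode exits right after start, in which case the datanode log should show why:

sleep 5
ls /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid    # does the pid file exist?
ps `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid`    # is that process still alive?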
Thanks,
Akshata