Hi,
I am getting the following error now after a few more changes:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = host20.uisp-hadoop.com/10.0.1.104
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.2.0.1.3.2.0-111
STARTUP_MSG: build = git://c64-s8/ on branch comanche-branch-1 -r 3e43bec958e627d53f02d2842f6fac24a93110a9; compiled by 'jenkins' on Mon Aug 19 18:34:32 PDT 2013
STARTUP_MSG: java = 1.6.0_31
************************************************************/
2014-02-27 16:34:38,127 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2014-02-27 16:34:38,168 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2014-02-27 16:34:38,169 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2014-02-27 16:34:38,169 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2014-02-27 16:34:38,254 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2014-02-27 16:34:38,554 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /hdfs/hadoop/hdfs/data: namenode namespaceID = 545079747; datanode namespaceID = 65935683
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:414)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
2014-02-27 16:34:38,554 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at host20.uisp-hadoop.com/10.0.1.104
Please let me know the fix.
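From what I could find, the namespaceID the datanode has recorded lives in the VERSION file under the data directory. Is the right fix to edit that file to match the namenode, or to wipe the data directory so the node re-registers? A rough sketch of what I mean (the dfs.data.dir path is taken from the log above; the rest is my guess, and wiping would discard any blocks stored on this node):

# Check what namespaceID this datanode has recorded
cat /hdfs/hadoop/hdfs/data/current/VERSION

# If it differs from the namenode's (545079747 in the log above), one option is to
# stop the DataNode (e.g. from Ambari), clear the data directory so it re-registers
# with the namenode, and start it again. This discards any blocks on this node.
rm -rf /hdfs/hadoop/hdfs/data/*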
Thanks,
Akshatha
------- Original Message -------
Sender : AKSHATHA SATHYANARAYAN<[email protected]> Engineer/Intelligence Platform Lab./Samsung Electronics
Date : Feb 27, 2014 09:35 (GMT+09:00)
Title : Re: Re: Re: Error installing datanode and tasktracker
The folder has write permissions for the hdfs user. It's not a fresh install; I tried to clean the datanode before installing this time, but I guess I did not clean it properly.
Is there a document on gracefully cleaning a data/master node? It would be really useful for avoiding these errors.
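For reference, this is roughly what I tried, which may well be incomplete (the data, pid and log paths are the ones from the earlier messages):

# With the DataNode/TaskTracker stopped (e.g. from Ambari):
# remove leftover HDFS block data from the previous install
rm -rf /hdfs/hadoop/hdfs/data/*

# remove the stale pid file and old log files so a fresh install starts clean
rm -f /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid
rm -f /var/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log*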
Thanks,
Akshatha
------- Original Message -------
Sender : Sumit Mohanty<[email protected]>
Date : Feb 27, 2014 09:26 (GMT+09:00)
Title : Re: Re: Error installing datanode and tasktracker
Hello Sumit,
Thanks for your reply.
I am using Hadoop 1.2.0 and Ambari 1.4.4.
The datanode log file shows the following error:
STARTUP_MSG: version = 1.2.0.1.3.2.0-111
STARTUP_MSG: build = git://c64-s8/ on branch comanche-branch-1 -r 3e43bec958e627d53f02d2842f6fac24a93110a9; compiled by 'jenkins' on Mon Aug 19 18:34:32 PDT 2013
STARTUP_MSG: java = 1.6.0_31
************************************************************/
2014-02-26 23:34:12,010 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2014-02-26 23:34:12,051 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2014-02-26 23:34:12,052 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2014-02-26 23:34:12,052 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2014-02-26 23:34:12,137 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2014-02-26 23:34:12,436 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.FileNotFoundException: /hdfs/hadoop/hdfs/data/storage (Permission denied)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.isConversionNeeded(DataStorage.java:194)
at org.apache.hadoop.hdfs.server.common.Storage.checkConversionNeeded(Storage.java:689)
at org.apache.hadoop.hdfs.server.common.Storage.access$000(Storage.java:57)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:458)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:111)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:414)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
2014-02-26 23:34:12,438 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
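My reading is that the hdfs user cannot open the storage file under the data directory. The check I ran was roughly the following (the directory path is from the log; the expected owner, group and mode are my assumptions):

# Inspect ownership and permissions of the configured dfs.data.dir
ls -ld /hdfs/hadoop/hdfs/data
ls -l /hdfs/hadoop/hdfs/data/storage

# If it is owned by root (e.g. left over from a manual mkdir), hand it back to hdfs
chown -R hdfs:hadoop /hdfs/hadoop/hdfs/data
chmod 750 /hdfs/hadoop/hdfs/data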
Thanks,
Akshatha
------- Original Message -------
Sender : Sumit Mohanty<[email protected]>
Date : Feb 27, 2014 02:07 (GMT+09:00)
Title : Re: Error installing datanode and tasktracker
Looks like the datanode start script succeeded but the datanode failed soon after. Try to see if the datanode is running by checking the process with the id in /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid. If not, then check the datanode log file at /var/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log; this should say why the datanode failed. Similarly, you can check the tasktracker log file under /var/log/hadoop as well; look for log files with tasktracker in the name. Share the errors and we can help debug it. What version of Ambari and stack are you using?
-Sumit
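Concretely, the checks would look something like this (log locations as above; exact file names on your hosts may differ):

# Is the datanode process from the pid file still alive?
cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid
ps -p `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid`

# If not, the most recent datanode log should contain the failure reason
ls -lt /var/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log | head -1
tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log

# Same idea for the tasktracker; look for log files with tasktracker in the name
find /var/log/hadoop -name '*tasktracker*'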
On Wed, Feb 26, 2014 at 3:36 AM, AKSHATHA SATHYANARAYAN <[email protected]> wrote:
Hello All,
I am getting the following error while installing datanode and tasktracker. Any help/suggestion is appreciated.
err: /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Service[datanode]/Hdp::Exec[sleep 5; ls /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid` >/dev/null 2>&1]/Exec[sleep 5; ls /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid` >/dev/null 2>&1]/returns: change from notrun to 0 failed: sleep 5; ls /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid` >/dev/null 2>&1 returned 1 instead of one of [0] at /var/lib/ambari-agent/puppet/modules/hdp/manifests/init.pp:487
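If I read the failing Exec correctly, it is only a liveness check on the pid file, so the non-zero exit presumably just means the datanode died shortly after starting. Re-running the same check by hand (copied from the error above):

sleep 5; ls /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid >/dev/null 2>&1 \
  && ps `cat /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid` >/dev/null 2>&1
echo "exit code: $?"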
Thanks,
Akshatha

