Thanks, but I know how to kill a process in Linux. That didn't answer the
question of why the command says "No DataNode to stop" instead of stopping the
DataNode:
 
$HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop datanode
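One way to check what the stop script sees, assuming the PID file was written to /var/run when starting as root (the file name hadoop-root-datanode.pid is an assumption; it follows the hadoop-<user>-datanode.pid pattern and depends on HADOOP_PID_DIR):

    cat /var/run/hadoop-root-datanode.pid        # pid recorded at start time
    kill -0 "$(cat /var/run/hadoop-root-datanode.pid)" \
      && echo "process alive" \
      || echo "no process to stop"               # kill -0 only probes; it sends no signal
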
From: Surbhi Gupta [mailto:[email protected]]
Sent: Saturday, 28 February 2015 20:16
To: [email protected]
Subject: Re: Hadoop 2.6.0 - No DataNode to stop

 

Issue jps and get the process id of the datanode.

Or issue ps -fu <userid> for the user the datanode is running as.

Then kill the process using kill -9.
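As a concrete sketch of those steps (assuming the datanode runs as user hdfs; adjust the user and the pid to your setup):

    jps                             # lists local JVMs; note the pid next to "DataNode"
    ps -fu hdfs | grep -i datanode  # alternative: all processes of user hdfs
    kill -9 <pid>                   # force-kill the pid found above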

On 28 Feb 2015 09:38, "Daniel Klinger" <[email protected]> wrote:

Hello,

 

I have used a lot of Hadoop distributions. Now I'm trying to install a pure Hadoop
on a little "cluster" for testing (2 CentOS VMs: one NameNode+DataNode, one DataNode). I
followed the instructions on the documentation site:
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html.

 

I'm starting the cluster as described in the chapter "Operating the
Hadoop Cluster" (with different users). The starting process works great: the
PID files are created in /var/run, and you can see that folders and files are
created in the DataNode and NameNode directories. I'm getting no errors in the
log files.
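For reference, the start command follows the same pattern as the stop command quoted above (this is the per-node form the ClusterSetup chapter uses):

    $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start datanode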

 

When I try to stop the cluster, all services are stopped (NameNode,
ResourceManager etc.). But when I stop the DataNodes I get the message
"No DataNode to stop". The PID file and the in_use.lock file are still there,
and if I try to start the DataNode again I get the error that the process
is already running. When I stop the DataNode as hdfs instead of root, the PID
and in_use files are removed, but I still get the message: "No DataNode to
stop".
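For context, here is a paraphrase (not the verbatim source) of what the stop branch of hadoop-daemon.sh does in Hadoop 2.x. The PID file name is built from the invoking user, since HADOOP_IDENT_STRING defaults to $USER:

    # pid=$HADOOP_PID_DIR/hadoop-$HADOOP_IDENT_STRING-datanode.pid
    if [ -f "$pid" ]; then
      if kill -0 "$(cat "$pid")" > /dev/null 2>&1; then
        kill "$(cat "$pid")"           # normal case: signal the recorded pid
      else
        echo "no datanode to stop"     # pid file exists, but pid not alive/visible
      fi
    else
      echo "no datanode to stop"       # no pid file under this user's name
    fi

So stopping as a different user than the one that started the daemon makes the script look for a different PID file (or fail the kill -0 probe); a mismatch like that would produce exactly this message.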

 

What am I doing wrong?

 

Greets

dk
