Hello,
I have used several Hadoop distributions. Now I am trying to install plain Apache Hadoop on a small test "cluster" (two CentOS VMs: one running NameNode + DataNode, one running only a DataNode). I followed the instructions in the documentation: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html

I start the cluster as described in the chapter "Operating the Hadoop Cluster" (with different users). Startup works fine: the PID files are created in /var/run, you can see that directories and files are created in the DataNode and NameNode folders, and there are no errors in the log files.

When I stop the cluster, all services shut down (NameNode, ResourceManager, etc.). But when I stop the DataNodes, I get the message "No DataNode to stop". The PID file and the in_use.lock file are still there, and if I try to start the DataNode again, I get an error that the process is already running. When I stop the DataNode as hdfs instead of root, the PID file and the in_use.lock file are removed, but I still get the message "No DataNode to stop".
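For reference, the DataNode start/stop commands I am running are essentially the ones from that page (assuming HADOOP_PREFIX and HADOOP_CONF_DIR are set as in the guide):

    # Start a DataNode (per the cluster-setup page; run on each DataNode host)
    $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR \
        --script hdfs start datanode

    # Stop it again -- this is the step that prints "No DataNode to stop"
    $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR \
        --script hdfs stop datanode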
What am I doing wrong?

Greets
dk