I am trying to install and configure Hadoop on a cluster with several machines. I followed the instructions on the Hadoop website for configuring multiple slaves exactly, and when I run start-all.sh I get no errors: both the datanode and the tasktracker are reported as running (ps awux | grep hadoop on the slave nodes shows two java processes). The log files are also empty; nothing is printed there.
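For reference, this is roughly the sequence I am running. The first two steps are exactly what I described above; the dfsadmin report at the end is just something I assume would show whether the namenode actually sees any live datanodes (I have not looked closely at its output yet):

    # on the master
    bin/start-all.sh

    # on each slave: check that the daemons came up
    ps awux | grep hadoop          # shows two java processes (datanode, tasktracker)

    # assumption: this should report the number of datanodes the namenode sees
    bin/hadoop dfsadmin -report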
Still, when I try to use bin/hadoop dfs -put, I get the following error:

    # bin/hadoop dfs -put w.txt w.txt
    put: java.io.IOException: File /user/scohen/w4.txt could only be replicated to 0 nodes, instead of 1

A file of size 0 is created on the DFS (bin/hadoop dfs -ls shows it). I couldn't find much information about this error, but I did see somewhere that it might mean there are no datanodes running. As I said, though, start-all.sh does not report any errors. Any ideas what the problem could be?

Thanks,
Jerr.
