This happened to me too. What I did was delete the files generated when the namenode was formatted. By default they are under /tmp/hadoop****; just delete the hadoop*** directory and re-format the namenode. If you specified another location in your config file, go to that location and delete the corresponding directory.
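In case it is useful, the steps look roughly like this. This is only a sketch: it assumes the default storage location under /tmp and that you run the commands from the Hadoop installation directory; the exact directory name under /tmp (usually something like /tmp/hadoop-<your username>) is an assumption, so double-check what is there before deleting anything.

    # stop all Hadoop daemons first
    bin/stop-all.sh
    # remove the storage directories left by the previous format
    # (assumed default location; check your config file if you changed it)
    rm -rf /tmp/hadoop-*
    # re-format HDFS and bring the cluster back up
    bin/hadoop namenode -format
    bin/start-all.sh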
If your last job did not end correctly, you will run into this kind of problem. Hope this helps.

Boyu Zhang
Ph.D. Student, Computer and Information Sciences Department
University of Delaware
(210) 274-2104
[email protected]
http://www.eecis.udel.edu/~bzhang

-----Original Message-----
From: Anthony.Fan [mailto:[email protected]]
Sent: Monday, July 13, 2009 6:21 AM
To: [email protected]
Subject: could only be replicated to 0 nodes, instead of 1

Hi, all,

I just started using Hadoop a few days ago. I got the error message "WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/count/count/temp1 could only be replicated to 0 nodes, instead of 1" while trying to copy data files to DFS after Hadoop was started.

I did all the settings according to the instructions in "Running_Hadoop_On_Ubuntu_Linux_(Single-Node_Cluster)", and I don't know what is wrong. During the whole process, no error message was written to the log files. Also, according to "http://localhost.localdomain:50070/dfshealth.jsp", I have one live namenode. In the browser I can even see that the first data file has been created in DFS, but its size is 0.

Things I've tried:
1. Stop Hadoop, re-format DFS, and start Hadoop again.
2. Change "localhost" to "127.0.0.1".

But neither of them works. Could anyone help me or give me a hint?

Thanks,
Anthony
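For what it is worth, this error almost always means that no datanodes have registered with the namenode, so HDFS has nowhere to place even a single replica. A quick way to confirm that, assuming you run the commands from the Hadoop installation directory and use the default log location:

    # reports configured capacity and the datanode list;
    # "Datanodes available: 0" confirms the namenode sees no datanodes
    bin/hadoop dfsadmin -report
    # the datanode log usually names the root cause, e.g. the
    # "Incompatible namespaceIDs" error left behind by a re-format
    tail -n 50 logs/hadoop-*-datanode-*.log

If the datanode log shows "Incompatible namespaceIDs", deleting the old storage directories as described above and restarting should fix it.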
