I found the problem. It was caused by a system disk error: the whole "/" filesystem had been remounted read-only. copyFromLocal uses the local /tmp directory as a buffer, but Hadoop does not know that directory is read-only, so it reported the failure as a datanode problem.
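In case anyone else hits the same symptom, a quick way to confirm it is to probe whether the client-side buffer directory is actually writable before running copyFromLocal. Below is a minimal sketch, not from my actual setup: the CheckLocalBufferDir class name is made up, and I am assuming the buffer ends up under hadoop.tmp.dir, which by default lives somewhere in /tmp.

import java.io.File;
import java.io.IOException;

public class CheckLocalBufferDir {
    public static void main(String[] args) {
        // Assumption: the DFS client buffers under hadoop.tmp.dir, which
        // defaults to a directory inside /tmp. Pass the real path as an
        // argument if yours is configured differently.
        File dir = new File(args.length > 0 ? args[0] : "/tmp");
        try {
            // A real write probe: creating a temp file fails with an
            // IOException on a filesystem that has been remounted read-only.
            File probe = File.createTempFile("dfs-buffer-probe", ".tmp", dir);
            probe.delete();
            System.out.println(dir + " is writable");
        } catch (IOException e) {
            System.err.println(dir + " is not writable (" + e.getMessage()
                    + "); this can show up as bogus bad-datanode errors during copyFromLocal");
        }
    }
}

Compile it with javac and run it with the buffer path as the argument; if the write probe fails, the problem is on the local machine, not on the datanodes.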
On Mon, Sep 27, 2010 at 10:34 AM, He Chen <[email protected]> wrote:
> Thanks, but I think that goes too far from the problem itself.
>
> On Sun, Sep 26, 2010 at 11:43 AM, Nan Zhu <[email protected]> wrote:
>
>> Have you checked the log files in the log directory?
>>
>> I always find some important information there.
>>
>> I suggest you recompile Hadoop with ant, since the mapred daemons don't
>> work either.
>>
>> Nan
>>
>> On Sun, Sep 26, 2010 at 7:29 PM, He Chen <[email protected]> wrote:
>>
>> > The problem is that every datanode may be listed in the error report.
>> > Does that mean all my datanodes are bad?
>> >
>> > One thing I forgot to mention: I cannot use start-all.sh and stop-all.sh
>> > to start and stop all the dfs and mapred processes on my cluster, but
>> > the jobtracker and namenode web interfaces still work.
>> >
>> > I think I can solve this by ssh-ing to every node, killing the current
>> > Hadoop processes, and restarting them, which should also clear the
>> > previous problem (in my opinion). But I really want to know why HDFS
>> > reports the errors above.
>> >
>> > On Sat, Sep 25, 2010 at 11:20 PM, Nan Zhu <[email protected]> wrote:
>> >
>> > > Hi Chen,
>> > >
>> > > It seems that you have a bad datanode? Maybe you should reformat them?
>> > >
>> > > Nan
>> > >
>> > > On Sun, Sep 26, 2010 at 10:42 AM, He Chen <[email protected]> wrote:
>> > >
>> > > > Hello Neil
>> > > >
>> > > > No matter how big the file is, it always reports this to me. The
>> > > > file sizes range from 10 KB to 100 MB.
>> > > >
>> > > > On Sat, Sep 25, 2010 at 6:08 PM, Neil Ghosh <[email protected]> wrote:
>> > > >
>> > > > > How big is the file? Did you try formatting the namenode and
>> > > > > datanodes?
>> > > > >
>> > > > > On Sun, Sep 26, 2010 at 2:12 AM, He Chen <[email protected]> wrote:
>> > > > >
>> > > > > > Hello everyone
>> > > > > >
>> > > > > > I cannot load a local file into HDFS. It gives the following
>> > > > > > errors:
>> > > > > >
>> > > > > > WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block
>> > > > > > blk_-236192853234282209_419415 java.io.EOFException
>> > > > > >   at java.io.DataInputStream.readFully(DataInputStream.java:197)
>> > > > > >   at java.io.DataInputStream.readLong(DataInputStream.java:416)
>> > > > > >   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2397)
>> > > > > > 10/09/25 15:38:25 WARN hdfs.DFSClient: Error Recovery for block
>> > > > > > blk_-236192853234282209_419415 bad datanode[0] 192.168.0.23:50010
>> > > > > > 10/09/25 15:38:25 WARN hdfs.DFSClient: Error Recovery for block
>> > > > > > blk_-236192853234282209_419415 in pipeline 192.168.0.23:50010,
>> > > > > > 192.168.0.39:50010: bad datanode 192.168.0.23:50010
>> > > > > >
>> > > > > > Any response will be appreciated!
>
> --
> Best Wishes!
> With best regards!
>
> --
> Chen He
> (402)613-9298
> PhD student, CSE Dept.
> Research Assistant, Holland Computing Center
> University of Nebraska-Lincoln
> Lincoln, NE 68588
