Hi!

I'm having trouble with Map/Reduce jobs failing due to HDFS errors.
I've been digging around in the logs trying to figure out what's
happening, and I see the following in the datanode logs:

2010-11-19 10:27:01,059 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in BlockReceiver.lastNodeRun: java.io.IOException: No temporary file /opera/log4/hadoop/dfs/data/tmp/blk_-8143694940938019938 for block blk_-8143694940938019938_6144372
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.finalizeBlock(FSDataset.java:1240)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.lastDataNodeRun(BlockReceiver.java:809)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:859)
        at java.lang.Thread.run(Thread.java:619)

2010-11-19 10:27:09,170 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: checkDiskError: exception: java.io.IOException: No temporary file /opera/log4/hadoop/dfs/data/tmp/blk_-8143694940938019938 for block blk_-8143694940938019938_6144372
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.finalizeBlock(FSDataset.java:1240)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.lastDataNodeRun(BlockReceiver.java:809)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:859)
        at java.lang.Thread.run(Thread.java:619)
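
In case it helps: so far my "digging" has mostly been grepping for the
block ID to follow it through the datanode log, and checking whether
the temporary file is actually on disk. Roughly like this (the log
path is just a placeholder, substitute wherever your install writes
the datanode log):

  # Follow the failing block through the datanode log
  grep 'blk_-8143694940938019938' /path/to/hadoop-datanode.log

  # Check whether the temporary file exists in the data dir's tmp/
  ls -l /opera/log4/hadoop/dfs/data/tmp/ | grep 'blk_-8143694940938019938'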

What would be the possible causes of such exceptions?

(This is on Hadoop 0.20.1)

Regards,
\EF
-- 
Erik Forsberg <forsb...@opera.com>
Developer, Opera Software - http://www.opera.com/
