This means a datanode was asked to receive a new block, but a block with that ID already exists on that datanode.

One case where I have seen this happen is when the Namenode is re-replicating blocks. Say the replication factor is increased from 3 to 20; the Namenode might ask multiple datanodes to transfer the same block to the same target datanode. The smaller the cluster, the more likely this is to happen.
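For reference, a minimal sketch of how such a replication bump is usually triggered through the FileSystem API (the path /user/foo/data and the target factor of 20 are just illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BumpReplication {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Hypothetical path. Raising replication from the default (3) to 20
            // causes the Namenode to schedule many block transfers at once,
            // which on a small cluster can target the same datanode twice.
            Path file = new Path("/user/foo/data");
            fs.setReplication(file, (short) 20);

            fs.close();
        }
    }

The shell equivalent would be something along the lines of "hadoop dfs -setrep 20 /user/foo/data".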

In what context are you seeing this?

Torsten Curdt wrote:
So we have now finally upgraded to 0.14 (more bug reports to come), but could someone please tell me why we are still seeing this:

2007-08-28 08:23:59,836 ERROR org.apache.hadoop.dfs.DataNode: DataXceiver: java.io.IOException: Block blk_-5226553429149640206 is valid, and cannot be written to.
        at org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:515)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:822)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:727)
        at java.lang.Thread.run(Thread.java:595)

...and what it actually means?

cheers
--
Torsten
