Hi list,

I am implementing datanode failover for writes in HDFS, so that HDFS can still
write a block when the first datanode in the block's pipeline fails.

The design is as follows. First, the failed datanode is identified. Second, a
new block is requested via AddBlockRequest. The HDFS AddBlockRequest API
provides an excludeNodes field, which I use to tell the namenode not to
allocate the new block on the failed datanodes. failedDatanodes holds the
identified failed datanodes, and the logs confirm they are identified
correctly. Third, the previously allocated block is abandoned and the new
block is used instead.

    // failedDatanodes is the list of identified failed datanodes.
    req := &hdfs.AddBlockRequestProto{
        Src:          proto.String(bw.src),
        ClientName:   proto.String(bw.clientName),
        ExcludeNodes: failedDatanodes, // nodes the namenode should skip
    }
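For clarity, here is the behavior I expect from excludeNodes, reduced to a toy simulation: allocation should never pick a node present in the exclude list. The types and the chooseTargets function below are simplified stand-ins I wrote for illustration, not the real hadoop_hdfs protobuf types or namenode code:

```go
package main

import "fmt"

// DatanodeInfo is a simplified stand-in for DatanodeInfoProto.
type DatanodeInfo struct {
	ID string
}

// AddBlockRequest mirrors only the fields relevant here.
type AddBlockRequest struct {
	Src          string
	ClientName   string
	ExcludeNodes []DatanodeInfo
}

// chooseTargets simulates the allocation I expect: every datanode named
// in ExcludeNodes is skipped. If real allocation still returns an excluded
// node, the exclude entries may not match the namenode's view of the node.
func chooseTargets(all []DatanodeInfo, req AddBlockRequest) []DatanodeInfo {
	excluded := make(map[string]bool)
	for _, dn := range req.ExcludeNodes {
		excluded[dn.ID] = true
	}
	var targets []DatanodeInfo
	for _, dn := range all {
		if !excluded[dn.ID] {
			targets = append(targets, dn)
		}
	}
	return targets
}

func main() {
	cluster := []DatanodeInfo{{"dn1:50010"}, {"dn2:50010"}, {"dn3:50010"}}
	req := AddBlockRequest{
		Src:          "/user/test/file",
		ClientName:   "DFSClient_example",
		ExcludeNodes: []DatanodeInfo{{"dn1:50010"}}, // the failed datanode
	}
	for _, dn := range chooseTargets(cluster, req) {
		fmt.Println(dn.ID) // dn1:50010 never appears
	}
}
```

In my real code the exclude entries are the DatanodeInfoProto values received from the namenode for the failed pipeline, which is why I expected them to match.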

But the namenode still allocates the block to the failed datanodes.
Does anyone know why? Am I missing anything here?

Thank you.
Best
Junjie Qian
