When a block is severely under-replicated at creation time, a request for block
replication should be scheduled immediately
---------------------------------------------------------------------------------------------------------------------------
Key: HADOOP-3292
URL: https://issues.apache.org/jira/browse/HADOOP-3292
Project: Hadoop Core
Issue Type: Improvement
Components: dfs
Reporter: Runping Qi
While writing a block to data nodes, if the dfs client detects a bad data node
in the write pipeline, it reconstructs the pipeline,
excluding the bad data node. As a result, when the client
finishes writing the block, the number of replicas for the block
may be lower than the intended replication factor. If the ratio of the number
of replicas to the intended replication factor falls below a
certain threshold (say 0.68), the client should immediately send a request
to the name node to replicate that block.
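A minimal sketch of the proposed client-side check, assuming a standalone helper; the class and method names are hypothetical, not the actual DFSClient API, and the 0.68 threshold is just the example value from the description:

```java
// Hypothetical sketch of the proposed check; names and the threshold
// are illustrative, not part of the real dfs client.
public class ReplicationCheck {

    // Ratio of achieved replicas to intended replication factor below
    // which the client would ask the name node to re-replicate now.
    static final double THRESHOLD = 0.68;

    /** Returns true if the block is severely under-replicated. */
    static boolean shouldRequestReplication(int actualReplicas, int intendedReplication) {
        if (intendedReplication <= 0) {
            return false; // nothing meaningful to compare against
        }
        return (double) actualReplicas / intendedReplication < THRESHOLD;
    }

    public static void main(String[] args) {
        // Example: pipeline started with replication 3, one data node failed.
        System.out.println(shouldRequestReplication(2, 3)); // 2/3 ~ 0.667 < 0.68 -> true
        System.out.println(shouldRequestReplication(3, 3)); // fully replicated -> false
    }
}
```

With this shape, the client would call the check once after closing the block and, if it returns true, fire a single replication request to the name node rather than waiting for the periodic replication monitor to notice.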