[
https://issues.apache.org/jira/browse/HDFS-7270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14179406#comment-14179406
]
Colin Patrick McCabe commented on HDFS-7270:
--------------------------------------------
bq. Deriving the right configuration for the throttler to balance between the
stability and throughput of the pipeline, however, is difficult in practice.
stability and throughput of the pipeline, however, is difficult in practice.
Cluster load varies over time, and DNs can go up and down, which can make the
configuration suboptimal and defeat its purpose.
I agree... it seems like doing something like exponential backoff would get us
a lot of the benefits of backpressure without requiring explicit configuration.
The cluster workload can also change over time, so it will be difficult for a
static configuration to be effective.
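To make the idea concrete, here is a minimal sketch of what client-side exponential backoff might look like. All names and constants here are illustrative assumptions, not actual HDFS APIs: on each congestion signal from a DN the wait doubles up to a cap, and a clean ack resets it, so no static throttle configuration is needed.

```java
// Hypothetical sketch only -- not HDFS code. Illustrates exponential backoff
// driven by congestion signals instead of a statically configured throttler.
public class BackoffSketch {
    static final long BASE_MS = 100;   // initial backoff (assumed value)
    static final long MAX_MS = 8000;   // backoff cap (assumed value)
    private long currentMs = 0;        // 0 means "no backoff in effect"

    /** Called when a DN signals congestion: grow the delay exponentially. */
    long onCongestion() {
        currentMs = (currentMs == 0) ? BASE_MS : Math.min(currentMs * 2, MAX_MS);
        return currentMs;
    }

    /** Called when a packet is acked without a congestion signal: reset. */
    void onAck() {
        currentMs = 0;
    }

    public static void main(String[] args) {
        BackoffSketch b = new BackoffSketch();
        System.out.println(b.onCongestion()); // 100
        System.out.println(b.onCongestion()); // 200
        b.onAck();
        System.out.println(b.onCongestion()); // 100 again after a clean ack
    }
}
```

Because the delay adapts to observed congestion and decays on success, it tracks changing cluster workload automatically, which is the property a static configuration cannot provide.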
> Implementing congestion control in writing pipeline
> ---------------------------------------------------
>
> Key: HDFS-7270
> URL: https://issues.apache.org/jira/browse/HDFS-7270
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: datanode
> Reporter: Haohui Mai
> Assignee: Haohui Mai
>
> When a client writes to HDFS faster than the disk bandwidth of the DNs, it
> saturates the disk bandwidth and renders the DNs unresponsive. The client only
> backs off by aborting / recovering the pipeline, which leads to failed writes
> and unnecessary pipeline recoveries.
> This jira proposes to add an explicit congestion control mechanism to the
> write pipeline.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)