[
https://issues.apache.org/jira/browse/HDFS-7270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308323#comment-14308323
]
Haohui Mai commented on HDFS-7270:
----------------------------------
bq. The problem is changing the existing protobuf Status enum record tag from
an enum to a uint32. That pretty much violates the compatibility promise that
protobufs are supposed to provide.
Let me try to understand a little more. Both enum and uint32 fields are encoded
as varints over the wire. Can you clarify what compatibility means in your
mind? Do your use cases fall into the categories mentioned by [~sureshms]?
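To illustrate the point about the wire format: in protobuf, enum, int32, and uint32 fields all use wire type 0 (varint), so the serialized bytes do not distinguish between them. Below is a minimal sketch of the varint field encoding (the field number and value are made up for illustration; this is hand-rolled, not the protobuf library):

```python
def encode_varint(value):
    """Encode a non-negative integer in protobuf base-128 varint form."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def encode_field(field_number, value):
    """Encode a varint-typed field: key = (field_number << 3) | wire_type 0."""
    key = (field_number << 3) | 0
    return encode_varint(key) + encode_varint(value)

# A hypothetical `Status` enum value and a uint32 with the same field
# number and numeric value serialize to identical bytes, because enum
# and uint32 share wire type 0.
print(encode_field(3, 1))  # b'\x18\x01' either way
```

So at the encoding level the change is invisible; any incompatibility would have to come from how the generated code interprets unknown or out-of-range values, not from the bytes on the wire.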
> Add congestion signaling capability to DataNode write protocol
> --------------------------------------------------------------
>
> Key: HDFS-7270
> URL: https://issues.apache.org/jira/browse/HDFS-7270
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: datanode
> Reporter: Haohui Mai
> Assignee: Haohui Mai
> Attachments: HDFS-7270.000.patch, HDFS-7270.001.patch,
> HDFS-7270.002.patch, HDFS-7270.003.patch, HDFS-7270.004.patch
>
>
> When a client writes to HDFS faster than the disk bandwidth of the DNs, it
> saturates the disk bandwidth and makes the DNs unresponsive. The client's only
> way to back off is aborting / recovering the pipeline, which leads to failed
> writes and unnecessary pipeline recoveries.
> This JIRA proposes to add explicit congestion control mechanisms to the write
> pipeline.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)