[
https://issues.apache.org/jira/browse/HDFS-16093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17371298#comment-17371298
]
Stephen O'Donnell commented on HDFS-16093:
------------------------------------------
I'm not sure if we can simply remove them. There is also a distinction between
DECOMMISSIONING and DECOMMISSIONED.
It is possible for all 3 replicas of a block to be on DECOMMISSIONING hosts, in
which case the block can only be read if those hosts are returned.
For DECOMMISSIONED hosts which are alive and not stale, I think they can be
used for reads in some circumstances. I recall seeing comments in the code
suggesting DECOMMISSIONED replicas can be used as a "last resort".
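That "last resort" idea could be approximated on the read path as in the sketch
below, rather than dropping the replicas outright: reorder the DatanodeInfo[]
locations of a block so DECOMMISSIONING and DECOMMISSIONED replicas are only
tried after the healthy ones. This is a minimal illustration, not the actual
NameNode sorting logic; the class and method names are hypothetical, while
DatanodeInfo.isDecommissioned() and isDecommissionInProgress() are existing
HDFS accessors.
{code:java}
import java.util.Arrays;
import java.util.Comparator;

import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

// Hypothetical helper, for illustration only: keeps decommissioning and
// decommissioned replicas readable, but only as a last resort.
public class DecommissionAwareSorter {

  /** Rank: healthy replicas first, then DECOMMISSIONING, then DECOMMISSIONED. */
  private static int rank(DatanodeInfo dn) {
    if (dn.isDecommissioned()) {
      return 2;
    } else if (dn.isDecommissionInProgress()) {
      return 1;
    }
    return 0;
  }

  /**
   * Reorders a block's locations in place so that datanodes under
   * decommission are only tried after all normal replicas.
   */
  public static void deprioritizeDecommissioning(DatanodeInfo[] locations) {
    Arrays.sort(locations,
        Comparator.comparingInt(DecommissionAwareSorter::rank));
  }
}
{code}
With an ordering like this the client still avoids the extra disk I/O on
decommissioning hosts in the common case, but can fall back to them when they
hold the only live replicas.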
> DataNodes under decommission will still be returned to the client via
> getLocatedBlocks, so the client may request decommissioning datanodes to read,
> which will cause heavy contention on disk I/O.
> -----------------------------------------------------------------------------
>
> Key: HDFS-16093
> URL: https://issues.apache.org/jira/browse/HDFS-16093
> Project: Hadoop HDFS
> Issue Type: Improvement
> Affects Versions: 3.3.1
> Reporter: Daniel Ma
> Priority: Critical
>
> DataNodes under decommission will still be returned to the client via
> getLocatedBlocks, so the client may request decommissioning datanodes to read,
> which will cause heavy contention on disk I/O.
> Therefore, datanodes under decommission should be removed from the return
> list of the getLocatedBlocks API.
> !image-2021-06-29-10-50-44-739.png!