[ https://issues.apache.org/jira/browse/HDFS-16275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18040104#comment-18040104 ]

ASF GitHub Bot commented on HDFS-16275:
---------------------------------------

github-actions[bot] commented on PR #3556:
URL: https://github.com/apache/hadoop/pull/3556#issuecomment-3567203559

   We're closing this stale PR because it has been open for 100 days with no 
activity. This isn't a judgement on the merit of the PR in any way; it's just a 
way of keeping the PR queue manageable.
   If you feel this was a mistake, or you would like to continue working on it, 
please feel free to re-open it and ask a committer to remove the stale tag and 
review it again.
   Thanks to all for your contributions.




> [HDFS] Enable considerLoad for localWrite
> -----------------------------------------
>
>                 Key: HDFS-16275
>                 URL: https://issues.apache.org/jira/browse/HDFS-16275
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Janus Chow
>            Assignee: Janus Chow
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, when a client runs on the same machine as a datanode, it writes 
> to that local datanode regardless of the datanode's load, i.e. its 
> xceiverCount.
> In our production cluster, the DataNode and NodeManager run on the same 
> servers, so when heavy jobs run on a labeled queue, the corresponding 
> datanodes end up with higher xceiverCounts than the others. When other 
> clients then try to write, a "could only be replicated to 0 nodes" 
> exception is thrown.
> This ticket enables considerLoad for local writes so the local datanode 
> does not become a hot spot; a hedged sketch of the proposed check follows 
> this quoted description.
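
For illustration, below is a minimal Java sketch of the load guard the ticket 
proposes. It is not the actual Hadoop patch (the PR above was closed 
unmerged); the class and method names are invented, and plain integer loads 
stand in for datanodes. The 2.0 factor mirrors the existing 
dfs.namenode.redundancy.considerLoad.factor setting used by the non-local 
placement path.

    import java.util.List;

    /**
     * Illustrative sketch of the check HDFS-16275 proposes: skip the local
     * datanode when its xceiver count exceeds considerLoadFactor times the
     * cluster average, instead of always writing locally. All names here
     * are hypothetical; in HDFS the load would be the datanode's
     * xceiverCount and the average would come from cluster statistics.
     */
    public class LocalWriteLoadCheck {

        /** True if the node's load exceeds factor * cluster average. */
        static boolean isOverloaded(int nodeLoad, double clusterAvg,
                                    double considerLoadFactor) {
            return nodeLoad > considerLoadFactor * clusterAvg;
        }

        /**
         * Prefer the local node unless it is overloaded; otherwise fall
         * back to the least-loaded remote node. Loads stand in for nodes
         * to keep the sketch self-contained.
         */
        static int chooseTargetLoad(int localLoad, List<Integer> remoteLoads,
                                    double factor) {
            double clusterAvg = remoteLoads.stream()
                    .mapToInt(Integer::intValue).average().orElse(localLoad);
            if (!isOverloaded(localLoad, clusterAvg, factor)) {
                return localLoad;          // local write is acceptable
            }
            return remoteLoads.stream().min(Integer::compare).orElse(localLoad);
        }

        public static void main(String[] args) {
            // Local datanode is hot, e.g. a co-located NodeManager is busy.
            int localLoad = 35;
            List<Integer> remoteLoads = List.of(8, 10, 12);
            // Average remote load is 10; 35 > 2.0 * 10, so fall back: prints 8.
            System.out.println(chooseTargetLoad(localLoad, remoteLoads, 2.0));
        }
    }

With such a check applied to the local-write path, an overloaded local 
datanode is passed over instead of contributing to the "could only be 
replicated to 0 nodes" failure described above.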


