[
https://issues.apache.org/jira/browse/HDFS-10335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15263428#comment-15263428
]
Mingliang Liu commented on HDFS-10335:
--------------------------------------
Failing tests are unrelated. Specifically,
{{hadoop.hdfs.TestRollingUpgradeRollback}} fails because of a port already in
use, and {{hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl}} is a
known bug tracked by [HDFS-10260].
We did not add a new test, as the code path is covered by existing tests. We
manually tested the patch, and the Mover was ~60x faster than before, though
this is not a general case since all of its ARCHIVE datanodes were newly added
to the same rack.
> Mover$Processor#chooseTarget() always chooses the first matching target
> storage group
> -------------------------------------------------------------------------------------
>
> Key: HDFS-10335
> URL: https://issues.apache.org/jira/browse/HDFS-10335
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: balancer & mover
> Affects Versions: 2.8.0
> Reporter: Mingliang Liu
> Assignee: Mingliang Liu
> Priority: Critical
> Attachments: HDFS-10335.000.patch, HDFS-10335.000.patch
>
>
> Currently
> {{org.apache.hadoop.hdfs.server.mover.Mover$Processor#chooseTarget()}} always
> chooses the first matching target datanode from the candidate list. This may
> make the mover schedule a lot of tasks to a few of the datanodes (the first
> several datanodes of the candidate list). The overall performance suffers
> significantly from this because of saturated network/disk usage.
> Specifically, if {{dfs.datanode.balance.max.concurrent.moves}} is set, the
> scheduled move tasks will be queued on a few of the storage groups, regardless
> of other available storage groups. We need an algorithm that can distribute
> the move tasks approximately evenly across all the candidate target storage
> groups.
> Thanks [~szetszwo] for offline discussion.
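To illustrate the skew the description talks about, here is a minimal, hypothetical sketch (not the actual Mover code or patch): always taking the first matching candidate piles every move onto one target, while choosing a uniformly random matching candidate spreads the tasks across all targets. The names {{chooseFirst}} and {{chooseRandomized}} are illustrative only.

```java
import java.util.*;

// Hypothetical simulation of the HDFS-10335 skew: 1000 move tasks are
// scheduled against 4 candidate targets (all assumed to "match").
public class ChooseTargetSketch {
    static final Random RAND = new Random();

    // Mimics the buggy behavior: the first matching candidate always wins.
    static int chooseFirst(List<Integer> candidates) {
        return candidates.get(0);
    }

    // One even-distribution strategy: pick a matching candidate uniformly
    // at random, so each target is equally likely to receive the task.
    static int chooseRandomized(List<Integer> candidates) {
        return candidates.get(RAND.nextInt(candidates.size()));
    }

    public static void main(String[] args) {
        List<Integer> targets = Arrays.asList(0, 1, 2, 3);
        Set<Integer> firstPicks = new HashSet<>();
        Set<Integer> randomPicks = new HashSet<>();
        for (int i = 0; i < 1000; i++) {
            firstPicks.add(chooseFirst(targets));
            randomPicks.add(chooseRandomized(targets));
        }
        // The first-match policy uses a single target; randomization
        // distributes the 1000 tasks over all four.
        System.out.println("first-match targets used: " + firstPicks.size());
        System.out.println("randomized targets used: " + randomPicks.size());
    }
}
```

With 1000 draws over 4 targets, the chance that the randomized policy misses any target is negligible, so it reports all 4 targets in use while the first-match policy reports 1.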
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)