[
https://issues.apache.org/jira/browse/HDFS-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14319503#comment-14319503
]
Yongjun Zhang commented on HDFS-6133:
-------------------------------------
Another factor is how many blocks a file has, since the computation happens
once for each block of the file. Assuming a file has 1k blocks on average,
the computation is 10M to write one file. If we construct a hash set instead,
the computation complexity is 40k. So the difference is 40k vs 10M.
I think we should consider optimizing. Hi [~szetszwo], what do you think?
Thanks.
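As a rough illustration (not the HDFS-6133 patch itself), the sketch below contrasts the two per-block lookup strategies being compared: a linear scan over an excluded collection versus a HashSet lookup. The excluded-entry count of 10k is a hypothetical chosen so that 1k blocks times 10k comparisons reproduces the ~10M figure; the exact constants behind the 40k estimate are not spelled out in the comment. All class and method names here are made up for the example.
{code:java}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ExcludeLookupSketch {
  public static void main(String[] args) {
    int excludedCount = 10_000;   // hypothetical number of excluded block IDs
    int blocksPerFile = 1_000;    // "on average a file has 1k blocks"

    List<Long> excludedList = new ArrayList<>();
    for (long i = 0; i < excludedCount; i++) {
      excludedList.add(i);
    }
    // One-time construction of the hash set from the same entries.
    Set<Long> excludedSet = new HashSet<>(excludedList);

    // Linear scan: up to excludedCount comparisons for every block,
    // i.e. roughly 1k * 10k = 10M comparisons for one file.
    long listComparisons = 0;
    for (int b = 0; b < blocksPerFile; b++) {
      long blockId = dummyBlockId(b);
      listComparisons += excludedList.size();
      boolean skip = excludedList.contains(blockId);
    }

    // Hash set: one expected lookup per block, i.e. ~1k lookups for one file.
    long setLookups = 0;
    for (int b = 0; b < blocksPerFile; b++) {
      long blockId = dummyBlockId(b);
      setLookups++;
      boolean skip = excludedSet.contains(blockId);
    }

    System.out.println("list comparisons ~ " + listComparisons);
    System.out.println("hash lookups     ~ " + setLookups);
  }

  private static long dummyBlockId(int b) {
    return 1_000_000L + b;  // dummy IDs that are not in the excluded collection
  }
}
{code}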
> Make Balancer support exclude specified path
> --------------------------------------------
>
> Key: HDFS-6133
> URL: https://issues.apache.org/jira/browse/HDFS-6133
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: balancer & mover, datanode
> Reporter: zhaoyunjiong
> Assignee: zhaoyunjiong
> Fix For: 2.7.0
>
> Attachments: HDFS-6133-1.patch, HDFS-6133-10.patch,
> HDFS-6133-11.patch, HDFS-6133-2.patch, HDFS-6133-3.patch, HDFS-6133-4.patch,
> HDFS-6133-5.patch, HDFS-6133-6.patch, HDFS-6133-7.patch, HDFS-6133-8.patch,
> HDFS-6133-9.patch, HDFS-6133.patch
>
>
> Currently, running Balancer will destroy the RegionServer's data locality.
> If getBlocks could exclude blocks belonging to files with a specific path
> prefix, like "/hbase", then we could run Balancer without destroying the
> RegionServer's data locality.
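For illustration only, here is a hedged sketch of the path-prefix exclusion the description asks for: a check that a block's owning file falls under an excluded prefix such as "/hbase". The class and method names (PathPrefixExcludeSketch, isExcluded) are hypothetical and do not reflect the API of the attached patches.
{code:java}
import java.util.Arrays;
import java.util.List;

public class PathPrefixExcludeSketch {
  private final List<String> excludedPrefixes;

  public PathPrefixExcludeSketch(List<String> excludedPrefixes) {
    this.excludedPrefixes = excludedPrefixes;
  }

  /** Returns true if the file owning a block lives under an excluded prefix. */
  public boolean isExcluded(String filePath) {
    for (String prefix : excludedPrefixes) {
      if (filePath.equals(prefix) || filePath.startsWith(prefix + "/")) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    PathPrefixExcludeSketch filter =
        new PathPrefixExcludeSketch(Arrays.asList("/hbase"));
    System.out.println(filter.isExcluded("/hbase/data/ns/table/region/cf/file")); // true
    System.out.println(filter.isExcluded("/user/alice/dataset.parquet"));          // false
  }
}
{code}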