[
https://issues.apache.org/jira/browse/HDFS-11284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15790521#comment-15790521
]
Yuanbo Liu edited comment on HDFS-11284 at 1/1/17 4:30 AM:
-----------------------------------------------------------
[~umamaheswararao] Thanks for your response.
* The #2 issue seems to have been addressed in HDFS-11248 by [~rakeshr]. Sorry,
my branch didn't contain that patch at the time, but I'd like to explain it a bit.
Say we have these datanodes:
{code}
A => {disk, ssd}, B => {disk, archive}, C => {disk, archive}, D => {ssd, archive}
{code}
If we set the file's storage policy to "COLD", the target assignment would be
{A => B, B => B, C => C}, because {{chooseTargetTypeInSameNode}} doesn't have an
exclude list to avoid choosing the same node twice (see the first sketch after this list).
* The #3 issue still exists. Say we have these datanodes in our cluster:
{code}
A => {disk, ssd}, B => {disk, archive}, C => {disk, archive}, D => {disk, archive}
{code}
Then we run these steps in our cluster:
1. Create a file and set its replication to 4.
2. Set the file's storage policy to "COLD".
3. Change the replication to 3 and immediately satisfy the storage policy.
4. In the end, not all of the file's blocks are moved correctly.
I have a thought on this issue: we can avoid assigning movement tasks while the
file is under-replicated, and we need to add retry logic for the case where a
movement fails (see the second sketch after this list).
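A minimal sketch of the exclude-list idea for #2 (hypothetical names, not the actual {{chooseTargetTypeInSameNode}} signature): once a datanode has been picked as a target, it goes into an exclude set so it cannot be chosen again for another replica of the same file.
{code}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class TargetChooserSketch {
  static class StorageRef {
    final String datanode;     // e.g. "B"
    final String storageType;  // e.g. "ARCHIVE"
    StorageRef(String datanode, String storageType) {
      this.datanode = datanode;
      this.storageType = storageType;
    }
  }

  /**
   * Choose one target per source replica, skipping datanodes that were
   * already used, so we never end up with {A => B, B => B, C => C}.
   */
  List<StorageRef> chooseTargets(List<String> sourceNodes,
                                 List<StorageRef> candidates,
                                 String expectedType) {
    List<StorageRef> chosen = new ArrayList<>();
    Set<String> excludedNodes = new HashSet<>();
    for (String source : sourceNodes) {
      for (StorageRef candidate : candidates) {
        if (expectedType.equals(candidate.storageType)
            && !excludedNodes.contains(candidate.datanode)) {
          chosen.add(candidate);
          excludedNodes.add(candidate.datanode);  // don't reuse this node
          break;
        }
      }
    }
    return chosen;
  }
}
{code}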
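And a rough sketch of the proposal for #3 (again hypothetical types and method names, not the real SPS API): skip scheduling block movement while the live replica count doesn't match the replication factor, and retry a bounded number of times when a movement fails.
{code}
class MovementSchedulerSketch {
  interface FileView {
    short getReplicationFactor();
    int getLiveReplicaCount();
  }

  interface Mover {
    /** @return true if all block movements for the file succeeded. */
    boolean moveBlocks(String path);
  }

  private static final int MAX_RETRIES = 3;

  /**
   * Don't assign a movement task while the replica count doesn't match the
   * replication factor; otherwise attempt the movement and retry on failure.
   */
  boolean scheduleBlockMovement(String path, FileView file, Mover mover) {
    if (file.getLiveReplicaCount() != file.getReplicationFactor()) {
      // Replication is still adjusting (e.g. just changed from 4 to 3);
      // requeue the file instead of moving blocks now.
      return false;
    }
    for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
      if (mover.moveBlocks(path)) {
        return true;
      }
    }
    return false;  // give up after MAX_RETRIES failed attempts
  }
}
{code}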
> [SPS]: Avoid running SPS under safemode and fix issues in target node
> choosing.
> -------------------------------------------------------------------------------
>
> Key: HDFS-11284
> URL: https://issues.apache.org/jira/browse/HDFS-11284
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: datanode, namenode
> Reporter: Yuanbo Liu
> Assignee: Yuanbo Liu
>
> Recently I've found that in some conditions SPS is not stable:
> * SPS runs under safe mode.
> * There are overlapping nodes among the chosen target nodes.
> * The real replication count of a block doesn't match the replication factor.
> For example, the real replication is 2 while the replication factor is 3.