[ https://issues.apache.org/jira/browse/HDFS-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15800624#comment-15800624 ]
Yuanbo Liu commented on HDFS-11293:
-----------------------------------
[~umamaheswararao] / [~rakeshr] I'm tagging you here because this situation
always makes SPS unstable, even without my persistence code. I don't think
the issue is caused by SPS; it's a more general problem. If you have any
thoughts about this JIRA, please let me know. Thanks in advance!
> FsDatasetImpl throws ReplicaAlreadyExistsException in a wrong situation
> -----------------------------------------------------------------------
>
> Key: HDFS-11293
> URL: https://issues.apache.org/jira/browse/HDFS-11293
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Yuanbo Liu
> Assignee: Yuanbo Liu
> Priority: Critical
>
> In {{FsDatasetImpl#createTemporary}}, we use {{volumeMap}} to look up the
> replica info by block pool id. But consider this situation:
> {code}
> datanode A => {DISK, SSD}, datanode B => {DISK, ARCHIVE}.
> 1. the same block replica exists in A[DISK] and B[DISK].
> 2. the block pool id of datanode A and datanode B are the same.
> {code}
> Now we change the file's storage policy and start moving the block replicas
> around the cluster. Very likely we have to move the block from B[DISK] to
> A[SSD]; at this point, datanode A throws ReplicaAlreadyExistsException,
> which is not correct behavior (see the sketch below).
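> For context, here is a minimal sketch of the lookup path in
> {{FsDatasetImpl#createTemporary}}. It is simplified, not the exact source
> (the shape follows the branch-2-era code, and the generation-stamp/recovery
> branch is trimmed), but it shows why the existing replica on A[DISK] rejects
> a transfer that targets A[SSD]:
> {code}
> // Simplified sketch -- not the exact source.
> public synchronized ReplicaHandler createTemporary(
>     StorageType storageType, ExtendedBlock b) throws IOException {
>   // volumeMap is keyed only by (block pool id, block id);
>   // the requested storage type is not part of the lookup.
>   ReplicaInfo replicaInfo =
>       volumeMap.get(b.getBlockPoolId(), b.getBlockId());
>   if (replicaInfo != null) {
>     // A replica on *any* storage of this datanode (e.g. A[DISK])
>     // lands here, so a transfer aimed at A[SSD] is rejected.
>     throw new ReplicaAlreadyExistsException("Block " + b
>         + " already exists in state " + replicaInfo.getState()
>         + " and thus cannot be created.");
>   }
>   // ... otherwise pick a volume of the requested storageType
>   // and create the temporary replica there.
> }
> {code}
> Because the lookup ignores the storage type, the datanode cannot tell a
> genuine duplicate from an intra-node move to a different storage medium.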