[
https://issues.apache.org/jira/browse/HDFS-11716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15990152#comment-15990152
]
Weiwei Yang commented on HDFS-11716:
------------------------------------
Found one more issue while testing the delete API: {{Dispatcher#handleDeleteContainer}}
only checks whether the container is open when the force option is used
{code}
if (forceDelete) {
  if (this.containerManager.isOpen(pipeline.getContainerName())) {
    throw new StorageContainerException("Attempting to force delete "
        + "an open container.", UNCLOSED_CONTAINER_IO);
  }
}
{code}
See more in HDFS-11581. That check seems incorrect, because if a container is
{{open}} and we try to delete it without the force option, the delete appears
to succeed. Cc [~yuanbo], please take a look.
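Just to illustrate the point, a minimal sketch of how the guard could look if
an open container is never allowed to be deleted, whether or not force is set.
It reuses the names from the snippet above ({{containerManager}}, {{pipeline}},
{{UNCLOSED_CONTAINER_IO}}) and shows only one possible reading, not the actual
fix from HDFS-11581:
{code}
// Hypothetical sketch, for discussion only: reject deletion of an open
// container regardless of the force option, instead of only when force is set.
if (this.containerManager.isOpen(pipeline.getContainerName())) {
  throw new StorageContainerException("Attempting to delete an open "
      + "container. Close the container before deleting it.",
      UNCLOSED_CONTAINER_IO);
}
{code}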
> Ozone: SCM: CLI: Revisit delete container API
> ---------------------------------------------
>
> Key: HDFS-11716
> URL: https://issues.apache.org/jira/browse/HDFS-11716
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Reporter: Weiwei Yang
> Assignee: Weiwei Yang
>
> The current delete container API can run into an inconsistent state. SCM
> maintains a mapping of containers to nodes, while the datanode maintains the
> actual container data. When a container is deleted, we need to make sure the
> container db is removed and the mapping in SCM is updated as well. What if
> the datanode fails to remove the data for a container, do we still update
> the mapping? We need to revisit the implementation and get these issues
> addressed. See more discussion
> [here|https://issues.apache.org/jira/browse/HDFS-11675?focusedCommentId=15987798&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15987798].
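A rough sketch of the ordering the quoted description argues for: remove the
container data on the datanode first, and only drop SCM's container-to-node
mapping once that succeeds. The helpers {{deleteOnDatanode}} and
{{removeContainerMapping}} are made up for illustration, not existing SCM APIs:
{code}
// Hypothetical ordering, for discussion only: a datanode-side failure leaves
// the SCM mapping intact so the delete can be retried later.
void deleteContainer(String containerName, boolean force) throws IOException {
  // deleteOnDatanode / removeContainerMapping are hypothetical helpers.
  if (!deleteOnDatanode(containerName, force)) {
    throw new IOException("Failed to delete container " + containerName
        + " on the datanode; SCM mapping left unchanged for retry.");
  }
  removeContainerMapping(containerName);
}
{code}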