[ https://issues.apache.org/jira/browse/HDFS-11946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16042098#comment-16042098 ]
Tsz Wo Nicholas Sze commented on HDFS-11946:
--------------------------------------------
Here is an example. The first datanode 127.0.0.1:58976 created container
f3972a31-3587-4baf-b1dd-eb3d41d5aad2, but the other two datanodes,
127.0.0.1:58966 and 127.0.0.1:58971, failed with "container already exists
on disk". The container path appears to be derived only from the container
name, independent of any datanode information.
- container path:
/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/MiniOzoneClusteraf64005d-677e-4bb8-a54b-c03c94896214/5d170ac6-dbc3-41e9-aa86-dc9d1416453b/scm/repository/f3972a31-3587-4baf-b1dd-eb3d41d5aad2.container
{code}
2017-06-08 10:18:42,712 [StateMachineUpdater-127.0.0.1:58976] INFO - Created of a new container. File: /Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/MiniOzoneClusteraf64005d-677e-4bb8-a54b-c03c94896214/5d170ac6-dbc3-41e9-aa86-dc9d1416453b/scm/repository/f3972a31-3587-4baf-b1dd-eb3d41d5aad2.container
2017-06-08 10:18:42,736 [StateMachineUpdater-127.0.0.1:58966] ERROR - container already exists on disk. File: /Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/MiniOzoneClusteraf64005d-677e-4bb8-a54b-c03c94896214/5d170ac6-dbc3-41e9-aa86-dc9d1416453b/scm/repository/f3972a31-3587-4baf-b1dd-eb3d41d5aad2.container
2017-06-08 10:18:42,736 [StateMachineUpdater-127.0.0.1:58971] ERROR - container already exists on disk. File: /Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/MiniOzoneClusteraf64005d-677e-4bb8-a54b-c03c94896214/5d170ac6-dbc3-41e9-aa86-dc9d1416453b/scm/repository/f3972a31-3587-4baf-b1dd-eb3d41d5aad2.container
{code}
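For illustration, here is a minimal sketch of the kind of path construction
that would produce this collision; the class and method names below are
hypothetical, not the actual Ozone code. The point is that the file name is
derived only from the container name, so every datanode sharing the same
repository root, as the MiniOzoneCluster datanodes apparently do, maps the
container to the same file. The second overload sketches one possible fix:
include a per-datanode component in the path.
{code}
import java.io.File;

// Hypothetical illustration only; the real logic lives in
// ContainerUtils/ContainerManagerImpl.
class ContainerPathSketch {
  // Colliding scheme: the path depends only on the container name,
  // so all datanodes sharing repositoryRoot get the same file.
  static File containerFile(File repositoryRoot, String containerName) {
    return new File(repositoryRoot, containerName + ".container");
  }

  // Possible fix: add a per-datanode component (e.g. the datanode
  // uuid) so container paths of different datanodes never overlap.
  static File containerFile(File repositoryRoot, String datanodeUuid,
      String containerName) {
    return new File(new File(repositoryRoot, datanodeUuid),
        containerName + ".container");
  }
}
{code}
Each failing datanode then aborts the create; the stack trace from
127.0.0.1:58971: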
{code}
2017-06-08 10:18:42,738 [StateMachineUpdater-127.0.0.1:58971] ERROR - Creation of container failed. Name: f3972a31-3587-4baf-b1dd-eb3d41d5aad2, we might need to cleanup partially created artifacts.
org.apache.hadoop.fs.FileAlreadyExistsException: container already exists on disk.
    at org.apache.hadoop.ozone.container.common.helpers.ContainerUtils.verifyIsNewContainer(ContainerUtils.java:198)
    at org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl.writeContainerInfo(ContainerManagerImpl.java:325)
    at org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl.createContainer(ContainerManagerImpl.java:263)
    at org.apache.hadoop.ozone.container.common.impl.Dispatcher.handleCreateContainer(Dispatcher.java:395)
    at org.apache.hadoop.ozone.container.common.impl.Dispatcher.containerProcessHandler(Dispatcher.java:156)
    at org.apache.hadoop.ozone.container.common.impl.Dispatcher.dispatch(Dispatcher.java:103)
    at org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatch(ContainerStateMachine.java:94)
    at org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:81)
    at org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:913)
    at org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:142)
    at java.lang.Thread.run(Thread.java:748)
{code}
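From the trace, the exception is raised by
ContainerUtils.verifyIsNewContainer (ContainerUtils.java:198). A minimal
sketch of that kind of pre-create check, assuming it does nothing more than
reject a file that already exists on disk (the actual implementation may
differ):
{code}
import java.io.File;
import org.apache.hadoop.fs.FileAlreadyExistsException;

class VerifySketch {
  // Sketch of the pre-create check (hypothetical, not the actual
  // ContainerUtils code): fail if the container file is already on
  // disk. With per-datanode paths this check would only fire on a
  // genuine duplicate create within one datanode.
  static void verifyIsNewContainer(File containerFile)
      throws FileAlreadyExistsException {
    if (containerFile.exists()) {
      throw new FileAlreadyExistsException(
          "container already exists on disk. File: " + containerFile);
    }
  }
}
{code}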
> Ozone: Containers in different datanodes are mapped to the same location
> ------------------------------------------------------------------------
>
> Key: HDFS-11946
> URL: https://issues.apache.org/jira/browse/HDFS-11946
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Reporter: Tsz Wo Nicholas Sze
> Assignee: Anu Engineer
>
> This is a problem in unit tests. Containers with the same container name in
> different datanodes are mapped to the same local path. As a result, the
> first datanode succeeds in creating the container file, but the remaining
> datanodes fail to create it with FileAlreadyExistsException.