sodonnel opened a new pull request, #5300:
URL: https://github.com/apache/ozone/pull/5300
## What changes were proposed in this pull request?
The reported issue involved a datanode with two disks:
```
/dev/nvme2n1 985G 935G 0 100% /ozone-data
/dev/nvme4n1 985G 453G 482G 49% /ozone-data1
```
One disk is full while the other has plenty of free space, yet the create container call still failed with this stack:
```
2020-02-13 10:58:01,097 ERROR org.apache.hadoop.ozone.container.keyvalue.helpers.KeyValueContainerUtil: Unable to create directory for metadata storage. Path: /ozone-data/hdds/cf793a84-8529-4897-8bf9-f18ec79e8ad6/current/containerDir1/536/metadata
2020-02-13 10:58:01,097 INFO org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler: Operation: CreateContainer , Trace ID: , Message: Container creation failed. Unable to create directory for metadata storage. Path: /ozone-data/hdds/cf793a84-8529-4897-8bf9-f18ec79e8ad6/current/containerDir1/536/metadata , Result: CONTAINER_INTERNAL_ERROR , StorageContainerException Occurred.
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: Container creation failed. Unable to create directory for metadata storage. Path: /ozone-data/hdds/cf793a84-8529-4897-8bf9-f18ec79e8ad6/current/containerDir1/536/metadata
	at org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.create(KeyValueContainer.java:177)
	at org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleCreateContainer(KeyValueHandler.java:244)
	at org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handle(KeyValueHandler.java:164)
	at org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.createContainer(HddsDispatcher.java:412)
	at org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatchRequest(HddsDispatcher.java:248)
	at org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:162)
	at org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:396)
	at org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.runCommand(ContainerStateMachine.java:406)
	at org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$handleWriteChunk$2(ContainerStateMachine.java:441)
	at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Unable to create directory for metadata storage. Path: /ozone-data/hdds/cf793a84-8529-4897-8bf9-f18ec79e8ad6/current/containerDir1/536/metadata
	at org.apache.hadoop.ozone.container.keyvalue.helpers.KeyValueContainerUtil.createContainerMetaData(KeyValueContainerUtil.java:73)
	at org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.create(KeyValueContainer.java:142)
	... 12 more
```
The reason is that the volume choosing policy filters out disks it knows are
full, but if the reported free space lags or is mis-calculated, a disk can be
full before the policy learns of it, or it can fill up in the window between
being chosen and being used.
Further, if the chosen disk is bad, the resulting IO error is not caught, so
the container creation fails outright.
This change lets the other disks be tried: the failed disk is removed from the
selection set and the creation is retried until it succeeds or there are no
disks left to try.
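To illustrate the approach, here is a minimal sketch of such a retry loop. It is not the actual Ozone code; the `Volume` and `VolumeChoosingPolicy` types and the `createOnVolume` step are placeholders standing in for the real HDDS volume classes and for `KeyValueContainer.create()`.
```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class RetryingVolumeCreate {

  /** Placeholder for an HDDS data volume. */
  interface Volume {
    String path();
  }

  /** Placeholder for the pluggable volume choosing policy. */
  interface VolumeChoosingPolicy {
    Volume choose(List<Volume> candidates, long containerSize) throws IOException;
  }

  /**
   * Try each eligible volume in turn. If creation fails on the chosen
   * volume (full disk, bad disk, any IOException), exclude that volume
   * from the candidate set and retry until no volumes remain.
   */
  static Volume createWithRetry(List<Volume> volumes,
      VolumeChoosingPolicy policy, long containerSize) throws IOException {
    List<Volume> candidates = new ArrayList<>(volumes);
    while (!candidates.isEmpty()) {
      Volume chosen = policy.choose(candidates, containerSize);
      try {
        createOnVolume(chosen); // hypothetical: mkdirs + metadata setup
        return chosen;
      } catch (IOException e) {
        // The chosen disk filled up or is bad; drop it and try the rest.
        candidates.remove(chosen);
      }
    }
    throw new IOException("Container creation failed on all volumes");
  }

  /** Hypothetical creation step standing in for KeyValueContainer.create(). */
  static void createOnVolume(Volume v) throws IOException {
    // ... create <volume>/hdds/<clusterId>/current/.../<id>/metadata here
  }
}
```
Note that excluding a volume here only affects the current create attempt; whether the disk should be marked failed for future operations is a separate decision.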
## What is the link to the Apache JIRA
https://issues.apache.org/jira/browse/HDDS-3009
## How was this patch tested?
New unit test to reproduce and validate the fix.
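For reference, a test of this behaviour can take roughly the following shape. This is a self-contained illustration rather than the PR's actual test, and `FakeVolume` and the inlined retry loop are hypothetical stand-ins:
```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;

class VolumeRetryTest {

  /** Fake volume that can be marked as failing, simulating a full or bad disk. */
  static final class FakeVolume {
    final String path;
    final boolean failing;
    FakeVolume(String path, boolean failing) {
      this.path = path;
      this.failing = failing;
    }
  }

  /** Same retry idea as above, with a trivial first-candidate choosing policy. */
  static FakeVolume createWithRetry(List<FakeVolume> volumes) throws IOException {
    List<FakeVolume> candidates = new ArrayList<>(volumes);
    while (!candidates.isEmpty()) {
      FakeVolume chosen = candidates.get(0);
      if (!chosen.failing) {
        return chosen;
      }
      candidates.remove(chosen); // exclude the failed disk, try the rest
    }
    throw new IOException("Container creation failed on all volumes");
  }

  @Test
  void fallsBackToHealthyVolume() throws IOException {
    FakeVolume full = new FakeVolume("/ozone-data", true);
    FakeVolume healthy = new FakeVolume("/ozone-data1", false);
    assertEquals("/ozone-data1", createWithRetry(List.of(full, healthy)).path);
  }
}
```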
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.