[ 
https://issues.apache.org/jira/browse/HDDS-1680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16863050#comment-16863050
 ] 

Mukul Kumar Singh commented on HDDS-1680:
-----------------------------------------

Thanks [~elek]. Adding another one to this list:
https://builds.apache.org/job/hadoop-multibranch/job/PR-960/1/testReport/org.apache.hadoop.ozone.container.common.impl/TestHddsDispatcher/testContainerCloseActionWhenFull/


> Create missing parent directories during the creation of HddsVolume dirs
> ------------------------------------------------------------------------
>
>                 Key: HDDS-1680
>                 URL: https://issues.apache.org/jira/browse/HDDS-1680
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>            Reporter: Elek, Marton
>            Assignee: Elek, Marton
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> I started to run all the unit tests continuously (in Kubernetes, with an Argo workflow).
> So far I have seen the following failures (number of failures / unit test name):
> {code}
>       1 org.apache.hadoop.fs.ozone.contract.ITestOzoneContractMkdir
>       1 org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRename
>       3 org.apache.hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
>      31 org.apache.hadoop.ozone.container.common.TestDatanodeStateMachine
>      31 org.apache.hadoop.ozone.container.common.volume.TestVolumeSet
>       1 org.apache.hadoop.ozone.freon.TestDataValidateWithSafeByteOperations
> {code}
> TestVolumeSet also fails locally:
> {code}
> 2019-06-13 14:23:18,637 ERROR volume.VolumeSet (VolumeSet.java:initializeVolumeSet(184)) - Failed to parse the storage location: /home/elek/projects/hadoop/hadoop-hdds/container-service/target/test-dir/dfs
> java.io.IOException: Cannot create directory /home/elek/projects/hadoop/hadoop-hdds/container-service/target/test-dir/dfs/hdds
>       at org.apache.hadoop.ozone.container.common.volume.HddsVolume.initialize(HddsVolume.java:208)
>       at org.apache.hadoop.ozone.container.common.volume.HddsVolume.<init>(HddsVolume.java:179)
>       at org.apache.hadoop.ozone.container.common.volume.HddsVolume.<init>(HddsVolume.java:72)
>       at org.apache.hadoop.ozone.container.common.volume.HddsVolume$Builder.build(HddsVolume.java:156)
>       at org.apache.hadoop.ozone.container.common.volume.VolumeSet.createVolume(VolumeSet.java:311)
>       at org.apache.hadoop.ozone.container.common.volume.VolumeSet.initializeVolumeSet(VolumeSet.java:165)
>       at org.apache.hadoop.ozone.container.common.volume.VolumeSet.<init>(VolumeSet.java:130)
>       at org.apache.hadoop.ozone.container.common.volume.VolumeSet.<init>(VolumeSet.java:109)
>       at org.apache.hadoop.ozone.container.common.volume.TestVolumeSet.testFailVolumes(TestVolumeSet.java:232)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:498)
>       at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>       at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>       at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>       at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>       at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>       at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>       at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
> The problem here is that the parent directory of the volume dir is missing. I propose using hddsRootDir.mkdirs() instead of hddsRootDir.mkdir(), so that any missing parent directories are created as well.
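>
> A minimal, self-contained sketch of the difference (the class name and path below are illustrative, not the actual HddsVolume code; only the mkdir()/mkdirs() calls mirror the proposal):
> {code}
> import java.io.File;
>
> public class MkdirsSketch {
>   public static void main(String[] args) {
>     // Illustrative volume root; in HddsVolume the real path comes from the
>     // configured storage location (e.g. .../target/test-dir/dfs/hdds).
>     File hddsRootDir = new File("/tmp/mkdirs-sketch/test-dir/dfs/hdds");
>
>     // mkdir() creates only the leaf directory and returns false when the
>     // parent (.../test-dir/dfs) does not exist yet -- the failure shown in
>     // the stack trace above.
>     System.out.println("mkdir():  " + hddsRootDir.mkdir());
>
>     // mkdirs() also creates any missing parent directories, so this call
>     // succeeds even on a freshly cleaned target directory.
>     System.out.println("mkdirs(): " + hddsRootDir.mkdirs());
>   }
> }
> {code}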



