[
https://issues.apache.org/jira/browse/HDFS-11251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15751703#comment-15751703
]
Jason Lowe commented on HDFS-11251:
-----------------------------------
The test failed with this stacktrace:
{noformat}
org.apache.hadoop.conf.ReconfigurationException: Could not change property dfs.datanode.data.dir from '[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1,[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data2,[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data4' to '[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data1,[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data2,[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data3,[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data4'
	at org.apache.hadoop.hdfs.server.datanode.DataNode.refreshVolumes(DataNode.java:777)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.reconfigurePropertyImpl(DataNode.java:532)
	at org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.addVolumes(TestDataNodeHotSwapVolumes.java:310)
	at org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.testAddVolumesDuringWrite(TestDataNodeHotSwapVolumes.java:404)
{noformat}
In the test output I found a ConcurrentModificationException (CME) which appears to be the root cause. If so, it would be nice if ReconfigurationException relayed the exception that caused the failure instead of dropping it.
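As a sketch of the chaining being suggested: the snippet below uses a stand-in ReconfigException class (a hypothetical name, not Hadoop's real org.apache.hadoop.conf.ReconfigurationException, which may or may not already expose a cause-taking constructor) to show how attaching ExecutionException#getCause() lets callers see the underlying CME rather than just the property-change message.

```java
import java.util.ConcurrentModificationException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CauseRelayDemo {
    // Stand-in for ReconfigurationException; illustrative only.
    static class ReconfigException extends Exception {
        ReconfigException(String msg, Throwable cause) { super(msg, cause); }
    }

    // Runs a task that fails with a CME and returns the cause that a
    // properly chained ReconfigException would carry.
    static Throwable demo() throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<?> f = pool.submit(() -> {
                throw new ConcurrentModificationException("simulated");
            });
            f.get();
            return null; // unreachable: the task always throws
        } catch (ExecutionException e) {
            // Attach e.getCause() instead of discarding it, so callers
            // of the reconfigure path can see the underlying CME.
            ReconfigException wrapped =
                new ReconfigException("Could not change property", e.getCause());
            return wrapped.getCause();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("relayed cause = " + demo());
    }
}
```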
{noformat}
2016-12-15 00:33:21,848 [pool-239-thread-2] INFO impl.FsDatasetImpl (FsVolumeList.java:addVolume(320)) - Added new volume: DS-6c2d1743-ee6f-4011-8042-b47d45d5279b
2016-12-15 00:33:21,848 [pool-239-thread-2] INFO impl.FsDatasetImpl (FsDatasetImpl.java:addVolume(494)) - Added volume - [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data4, StorageType: DISK
2016-12-15 00:33:21,851 [Thread-1888] ERROR datanode.DataNode (DataNode.java:refreshVolumes(764)) - Failed to add volume: [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/4/dfs/data/data3
java.util.concurrent.ExecutionException: java.util.ConcurrentModificationException
	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.util.concurrent.FutureTask.get(FutureTask.java:192)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.refreshVolumes(DataNode.java:750)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.reconfigurePropertyImpl(DataNode.java:532)
	at org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.addVolumes(TestDataNodeHotSwapVolumes.java:310)
	at org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.testAddVolumesDuringWrite(TestDataNodeHotSwapVolumes.java:404)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: java.util.ConcurrentModificationException
	at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:901)
	at java.util.ArrayList$Itr.next(ArrayList.java:851)
	at org.apache.hadoop.hdfs.server.common.Storage.containsStorageDir(Storage.java:999)
	at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadBpStorageDirectories(BlockPoolSliceStorage.java:220)
	at org.apache.hadoop.hdfs.server.datanode.DataStorage.prepareVolume(DataStorage.java:332)
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.addVolume(FsDatasetImpl.java:455)
	at org.apache.hadoop.hdfs.server.datanode.DataNode$2.call(DataNode.java:737)
	at org.apache.hadoop.hdfs.server.datanode.DataNode$2.call(DataNode.java:733)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
{noformat}
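The Caused-by frames above show an ArrayList iterator failing fast inside Storage.containsStorageDir while a refreshVolumes worker thread mutates the storage-dir list. Below is a minimal, deterministic single-threaded sketch of that fail-fast behavior, plus one generic remedy (a copy-on-write list; illustrative only, not necessarily the right fix for DataStorage, where synchronization may be preferable). All names here are made up for the demo.

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CmeDemo {
    // Deterministic version of the comodification behind the trace:
    // structurally modifying an ArrayList while iterating it makes the
    // iterator's checkForComodification throw a CME on the next step.
    static boolean failsFast() {
        List<String> dirs = new ArrayList<>(List.of("data1", "data2"));
        try {
            for (String d : dirs) {
                dirs.add("data3"); // structural change during iteration
            }
            return false;
        } catch (ConcurrentModificationException e) {
            return true; // fail-fast iterator detected the change
        }
    }

    // Generic remedy: a copy-on-write list iterates over a snapshot, so
    // concurrent adds cannot invalidate an in-flight iterator.
    static int snapshotIteration() {
        List<String> dirs = new CopyOnWriteArrayList<>(List.of("data1", "data2"));
        for (String d : dirs) {
            dirs.add("data3"); // no CME; the iterator sees the old snapshot
        }
        return dirs.size(); // 2 originals + 2 adds = 4
    }

    public static void main(String[] args) {
        System.out.println("fail-fast CME: " + failsFast());
        System.out.println("size after snapshot iteration: " + snapshotIteration());
    }
}
```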
> ConcurrentModificationException during DataNode#refreshVolumes
> --------------------------------------------------------------
>
> Key: HDFS-11251
> URL: https://issues.apache.org/jira/browse/HDFS-11251
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 3.0.0-alpha2
> Reporter: Jason Lowe
>
> The testAddVolumesDuringWrite case failed with a ReconfigurationException
> which appears to have been caused by a ConcurrentModificationException.
> Stacktrace details to follow.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]