[ 
https://issues.apache.org/jira/browse/HDFS-14986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Wu updated HDFS-14986:
---------------------------
    Description: 
Running du across lots of disks is very expensive. We applied the patch from 
HDFS-14313 to get the used space from the ReplicaInfo objects in memory. 
However, the new du refresh threads throw the following exception:
{code:java}
2019-11-08 18:07:13,858 ERROR [refreshUsed-/home/vipshop/hard_disk/7/dfs/dn/current/BP-1203969992-XXXX-1450855658517] org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed: ReplicaCachingGetSpaceUsed refresh error
java.util.ConcurrentModificationException: Tree has been modified outside of iterator
    at org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.checkForModification(FoldedTreeSet.java:311)
    at org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.hasNext(FoldedTreeSet.java:256)
    at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
    at java.util.HashSet.<init>(HashSet.java:120)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.deepCopyReplica(FsDatasetImpl.java:1052)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed.refresh(ReplicaCachingGetSpaceUsed.java:73)
    at org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:178)
    at java.lang.Thread.run(Thread.java:748)
{code}
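
The trace shows the ReplicaCachingGetSpaceUsed refresh thread calling FsDatasetImpl.deepCopyReplica, which builds a HashSet from the per-block-pool replica set while other DataNode threads are still adding and removing replicas; since the FoldedTreeSet iterator is fail-fast, the copy dies with ConcurrentModificationException. The sketch below is not HDFS code: it uses a plain java.util.TreeSet as a stand-in for FoldedTreeSet, and the class and field names are invented for illustration, but it reproduces the same failure mode of copying a fail-fast set while a writer keeps mutating it.
{code:java}
import java.util.ConcurrentModificationException;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

// Standalone sketch, not HDFS code: TreeSet stands in for FoldedTreeSet,
// and the names below are invented for illustration only.
public class ReplicaCopyRaceDemo {

  // Shared replica set, analogous to a block pool's replica set in memory.
  private static final Set<Long> replicas = new TreeSet<>();

  public static void main(String[] args) {
    // Writer thread: simulates replicas being added as blocks are finalized.
    Thread writer = new Thread(() -> {
      long blockId = 0;
      while (true) {
        synchronized (replicas) {
          replicas.add(blockId++);
        }
      }
    });
    writer.setDaemon(true);
    writer.start();

    // "Refresh" loop: copies the live set WITHOUT taking the writer's lock,
    // like the unsynchronized new HashSet<>(...) copy in the trace above.
    // This typically fails quickly with ConcurrentModificationException.
    for (int i = 0; i < 1_000_000; i++) {
      try {
        Set<Long> snapshot = new HashSet<>(replicas); // iterates the live set
        if (i % 100_000 == 0) {
          System.out.println("copied " + snapshot.size() + " replicas");
        }
      } catch (ConcurrentModificationException e) {
        // Same failure mode as the DataNode log above.
        System.out.println("refresh failed: " + e);
        return;
      }
    }
  }
}
{code}
A natural fix direction suggested by this sketch (hedged, not necessarily the exact patch) is to make the copy in deepCopyReplica synchronize with the threads that mutate the replica set, for example by taking the dataset lock around the new HashSet<>(...) copy, so the refresh thread always iterates a quiescent set.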

  was:
Running du across lots of disks is very expensive. We applied the patch from 
HDFS-14313 to get the used space from the ReplicaInfo objects in memory. 
However, the new du refresh threads throw the following exception:
{code:java}
2019-11-08 18:07:13,858 ERROR [refreshUsed-/home/vipshop/hard_disk/7/dfs/dn/current/BP-1203969992-10.208.50.21-1450855658517] org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed: ReplicaCachingGetSpaceUsed refresh error
java.util.ConcurrentModificationException: Tree has been modified outside of iterator
    at org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.checkForModification(FoldedTreeSet.java:311)
    at org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.hasNext(FoldedTreeSet.java:256)
    at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
    at java.util.HashSet.<init>(HashSet.java:120)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.deepCopyReplica(FsDatasetImpl.java:1052)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed.refresh(ReplicaCachingGetSpaceUsed.java:73)
    at org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:178)
    at java.lang.Thread.run(Thread.java:748)
{code}


> ReplicaCachingGetSpaceUsed throws ConcurrentModificationException
> -----------------------------------------------------------------
>
>                 Key: HDFS-14986
>                 URL: https://issues.apache.org/jira/browse/HDFS-14986
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode, performance
>            Reporter: Ryan Wu
>            Assignee: Ryan Wu
>            Priority: Major
>
> Running du across lots of disks is very expensive. We applied the patch from 
> HDFS-14313 to get the used space from the ReplicaInfo objects in memory. 
> However, the new du refresh threads throw the following exception:
> {code:java}
> 2019-11-08 18:07:13,858 ERROR [refreshUsed-/home/vipshop/hard_disk/7/dfs/dn/current/BP-1203969992-XXXX-1450855658517] org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed: ReplicaCachingGetSpaceUsed refresh error
> java.util.ConcurrentModificationException: Tree has been modified outside of iterator
>     at org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.checkForModification(FoldedTreeSet.java:311)
>     at org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.hasNext(FoldedTreeSet.java:256)
>     at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
>     at java.util.HashSet.<init>(HashSet.java:120)
>     at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.deepCopyReplica(FsDatasetImpl.java:1052)
>     at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed.refresh(ReplicaCachingGetSpaceUsed.java:73)
>     at org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:178)
>     at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
