[
https://issues.apache.org/jira/browse/HDFS-14986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984766#comment-16984766
]
fanghanyun edited comment on HDFS-14986 at 11/29/19 9:16 AM:
-------------------------------------------------------------
Hi, [Aiphago|https://issues.apache.org/jira/secure/ViewProfile.jspa?name=Aiphago]
hadoop version 2.6.0-cdh5.13.1
{code:java}
public Set<? extends Replica> deepCopyReplica(String bpid) throws IOException {
  // Set<? extends Replica> replicas = new HashSet<>(volumeMap.replicas(bpid) == null
  //     ? Collections.EMPTY_SET : volumeMap.replicas(bpid));
  Set<? extends Replica> replicas = null;
  try (AutoCloseableLock lock = datasetLock.acquire()) {
    replicas = new HashSet<>(volumeMap.replicas(bpid) == null
        ? Collections.EMPTY_SET : volumeMap.replicas(bpid));
  }
{code}
This fails to compile: Cannot resolve symbol 'datasetLock'
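The `datasetLock` field (an `AutoCloseableLock`) appears to have been introduced by HDFS-10682, so it does not exist in 2.6.0-based branches, where `FsDatasetImpl` guards `volumeMap` with the instance monitor instead. A minimal sketch of what a backport could look like under that assumption, using `synchronized` in place of `datasetLock.acquire()` (a plain `Map<String, Set<String>>` stands in for `ReplicaMap` here; these stand-in names are hypothetical):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of a branch-2.6 style backport. Assumption: without the
// AutoCloseableLock datasetLock (HDFS-10682), FsDatasetImpl serializes
// access to volumeMap via synchronized methods on the dataset object.
public class FsDatasetSketch {
    private final Map<String, Set<String>> volumeMap = new HashMap<>();

    public synchronized void addReplica(String bpid, String replica) {
        Set<String> s = volumeMap.get(bpid);
        if (s == null) {
            s = new HashSet<>();
            volumeMap.put(bpid, s);
        }
        s.add(replica);
    }

    // Copy under the same monitor, so no writer can mutate the
    // underlying set while the HashSet copy constructor iterates it.
    public synchronized Set<String> deepCopyReplica(String bpid) {
        Set<String> replicas = volumeMap.get(bpid);
        return new HashSet<>(replicas == null
            ? Collections.<String>emptySet() : replicas);
    }
}
```

The key point is only that the copy and all writers take the same lock; which lock object that is depends on the branch being patched.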
> ReplicaCachingGetSpaceUsed throws ConcurrentModificationException
> ------------------------------------------------------------------
>
> Key: HDFS-14986
> URL: https://issues.apache.org/jira/browse/HDFS-14986
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode, performance
> Affects Versions: 2.10.0
> Reporter: Ryan Wu
> Assignee: Aiphago
> Priority: Major
> Fix For: 3.3.0, 2.10.1, 2.11.0
>
> Attachments: HDFS-14986.001.patch, HDFS-14986.002.patch,
> HDFS-14986.003.patch, HDFS-14986.004.patch, HDFS-14986.005.patch,
> HDFS-14986.006.patch
>
>
> Running DU across lots of disks is very expensive. We applied the patch
> HDFS-14313 to get used space from ReplicaInfo in memory. However, the new
> du threads throw this exception:
> {code:java}
> 2019-11-08 18:07:13,858 ERROR [refreshUsed-/home/vipshop/hard_disk/7/dfs/dn/current/BP-1203969992-XXXX-1450855658517] org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed: ReplicaCachingGetSpaceUsed refresh error
> java.util.ConcurrentModificationException: Tree has been modified outside of iterator
>         at org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.checkForModification(FoldedTreeSet.java:311)
>         at org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.hasNext(FoldedTreeSet.java:256)
>         at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
>         at java.util.HashSet.<init>(HashSet.java:120)
>         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.deepCopyReplica(FsDatasetImpl.java:1052)
>         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed.refresh(ReplicaCachingGetSpaceUsed.java:73)
>         at org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:178)
>         at java.lang.Thread.run(Thread.java:748)
> {code}
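The trace above is the classic copy-without-a-lock race: `new HashSet<>(set)` iterates the source set via `addAll`, and a concurrent structural modification trips the fail-fast iterator. A minimal, deterministic reproduction of that failure mode, with `java.util.TreeSet` standing in for HDFS's `FoldedTreeSet` (an assumption for illustration only):

```java
import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.Set;
import java.util.TreeSet;

// Demonstrates the failure mode in the stack trace: mutating a set while
// an iterator over it is live makes the next it.next() call throw
// ConcurrentModificationException (fail-fast behavior).
public class CmeSketch {
    public static boolean copyFailsIfMutatedMidIteration() {
        Set<Integer> replicas = new TreeSet<>();
        replicas.add(1);
        replicas.add(2);
        Iterator<Integer> it = replicas.iterator();
        it.next();           // copy in progress, like HashSet's constructor
        replicas.add(3);     // a writer mutating volumeMap mid-copy
        try {
            it.next();       // fail-fast modification check fires here
            return false;
        } catch (ConcurrentModificationException expected) {
            return true;
        }
    }
}
```

This is why the fix in the attached patches is to take the dataset lock around the copy, rather than to catch the exception.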
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]