[
https://issues.apache.org/jira/browse/HDFS-10682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15444774#comment-15444774
]
Fenghua Hu edited comment on HDFS-10682 at 8/29/16 4:25 AM:
In FsDatasetImpl#FsDatasetImpl() and FsDatasetImpl#addVolume():
volumeMap = new ReplicaMap(this);
and
ReplicaMap tempVolumeMap = new ReplicaMap(this);
"this" is used as synchronization object:
ReplicaMap(Object mutex) {
if (mutex == null) {
throw new HadoopIllegalArgumentException(
"Object to synchronize on cannot be null");
}
this.mutex = mutex;
}
ReplicaMap uses synchronized(mutex) {...} for synchronization. Do we need
change it accordingly?
[~vagarychen] [~arpitagarwal]
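A minimal sketch of the pattern being asked about: if FsDatasetImpl moves to a separate lock object, the same object could be handed to ReplicaMap so both synchronize on it. The class names below (ReplicaMapSketch, DatasetSketch) are simplified stand-ins for illustration, not the actual HDFS sources.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for ReplicaMap: synchronizes on a mutex object
// supplied by its owner, mirroring the constructor quoted above.
class ReplicaMapSketch {
    private final Object mutex;  // shared synchronization object
    private final Map<Long, String> replicas = new HashMap<>();

    ReplicaMapSketch(Object mutex) {
        if (mutex == null) {
            throw new IllegalArgumentException(
                "Object to synchronize on cannot be null");
        }
        this.mutex = mutex;
    }

    void add(long blockId, String replica) {
        synchronized (mutex) {  // same lock as the owning dataset
            replicas.put(blockId, replica);
        }
    }

    String get(long blockId) {
        synchronized (mutex) {
            return replicas.get(blockId);
        }
    }
}

// Simplified stand-in for FsDatasetImpl: instead of passing `this`,
// it passes a dedicated lock object to the replica map.
class DatasetSketch {
    private final Object datasetLock = new Object();
    final ReplicaMapSketch volumeMap = new ReplicaMapSketch(datasetLock);
}
```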
> Replace FsDatasetImpl object lock with a separate lock object
> -
>
> Key: HDFS-10682
> URL: https://issues.apache.org/jira/browse/HDFS-10682
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: datanode
> Reporter: Chen Liang
> Assignee: Chen Liang
> Fix For: 2.8.0
>
> Attachments: HDFS-10682-branch-2.001.patch,
> HDFS-10682-branch-2.002.patch, HDFS-10682-branch-2.003.patch,
> HDFS-10682-branch-2.004.patch, HDFS-10682-branch-2.005.patch,
> HDFS-10682-branch-2.006.patch, HDFS-10682.001.patch, HDFS-10682.002.patch,
> HDFS-10682.003.patch, HDFS-10682.004.patch, HDFS-10682.005.patch,
> HDFS-10682.006.patch, HDFS-10682.007.patch, HDFS-10682.008.patch,
> HDFS-10682.009.patch, HDFS-10682.010.patch
>
>
> This Jira proposes to replace the FsDatasetImpl object lock with a separate
> lock object. Doing so will make it easier to measure lock statistics like
> lock held time and warn about potential lock contention due to slow disk
> operations.
> Right now we can use org.apache.hadoop.util.AutoCloseableLock. In the future
> we can also consider replacing the lock with a read-write lock.
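A hedged sketch of the try-with-resources locking style the description points at. AutoCloseableLockSketch below is a simplified stand-in written only to illustrate the idea behind org.apache.hadoop.util.AutoCloseableLock; it is not the Hadoop implementation.

```java
import java.util.concurrent.locks.ReentrantLock;

// Stand-in for an auto-closeable lock: acquire() returns the lock
// itself so it can sit in a try-with-resources header, and close()
// releases it when the block exits, even on exceptions.
class AutoCloseableLockSketch implements AutoCloseable {
    private final ReentrantLock lock = new ReentrantLock();

    AutoCloseableLockSketch acquire() {
        lock.lock();
        return this;
    }

    @Override
    public void close() {
        lock.unlock();
    }

    boolean isHeldByCurrentThread() {
        return lock.isHeldByCurrentThread();
    }
}
```

Usage then looks like:

```java
AutoCloseableLockSketch datasetLock = new AutoCloseableLockSketch();
try (AutoCloseableLockSketch l = datasetLock.acquire()) {
    // critical section; lock statistics could be measured around here
}
// lock is released automatically on exit from the try block
```

Wrapping the lock this way is what makes it straightforward to measure held time or swap in a read-write lock later, since all acquisition goes through one object rather than synchronized blocks on the dataset instance.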
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)