[
https://issues.apache.org/jira/browse/HDFS-12302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129183#comment-16129183
]
Chen Liang commented on HDFS-12302:
-----------------------------------
[~liaoyuxiangqin] hmm...this is not as straightforward as it may seem. It
basically comes down to the points in time when
{{FsVolumeImpl#addBlockPool}} gets called, since that is where entries get added to
{{bpSlices}}. Things still need to be verified and I need to look more closely,
but here are two comments for now:
1. It seems that in the patch a null {{ReplicaMap}} is passed to {{activateVolume}}.
The thing is, {{activateVolume}} has this line
{code}
volumeMap.addAll(replicaMap);
{code}
which is
{code}
void addAll(ReplicaMap other) {
  map.putAll(other.map);
}
{code}
So it seems to me that passing a null {{ReplicaMap}} will trigger a
{{NullPointerException}} when {{other.map}} is dereferenced.
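To make the failure mode concrete, here is a minimal standalone sketch ({{SimpleReplicaMap}} is a hypothetical simplification, not the real {{ReplicaMap}}): because {{addAll}} dereferences the argument's internal map, a null argument fails before any copying happens.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for ReplicaMap: addAll reads other.map directly,
// so passing null throws NullPointerException on the dereference.
class SimpleReplicaMap {
    private final Map<Long, String> map = new HashMap<>();

    void addAll(SimpleReplicaMap other) {
        map.putAll(other.map); // NPE here when other == null
    }

    public static void main(String[] args) {
        SimpleReplicaMap volumeMap = new SimpleReplicaMap();
        try {
            volumeMap.addAll(null);
        } catch (NullPointerException e) {
            System.out.println("NullPointerException, as expected");
        }
    }
}
```

This mirrors why the patch would need either a non-null map or a null guard inside {{activateVolume}}.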
2. The only place the following method
{code}
void getVolumeMap(ReplicaMap volumeMap,
    final RamDiskReplicaTracker ramDiskReplicaMap)
{code}
gets called is the place that is being removed in the patch. So after the patch,
this method becomes dead code; if we do want to make the change as in the
patch, then we should consider removing this method as well. But again, things
still need to be verified...
> FSVolume's getVolumeMap actually do nothing when Instantiate a FsDatasetImpl
> object
> -----------------------------------------------------------------------------------
>
> Key: HDFS-12302
> URL: https://issues.apache.org/jira/browse/HDFS-12302
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: datanode
> Affects Versions: 3.0.0-alpha4
> Environment: cluster: 3 nodes
> os:(Red Hat 2.6.33.20, Red Hat 3.10.0-514.6.1.el7.x86_64,
> Ubuntu4.4.0-31-generic)
> hadoop version: hadoop-3.0.0-alpha4
> Reporter: liaoyuxiangqin
> Assignee: liaoyuxiangqin
> Attachments: HDFS-12302.001.patch, HDFS-12302.002.patch
>
> Original Estimate: 48h
> Remaining Estimate: 48h
>
> When I read the code that instantiates the FsDatasetImpl object during the
> DataNode start process, I found that the getVolumeMap function actually can't
> collect ReplicaMap info for any fsVolume, because each fsVolume's bpSlices map
> has not been initialized at that point. The relevant code is as follows:
> {code:title=FsVolumeImpl.java}
> void getVolumeMap(ReplicaMap volumeMap,
>     final RamDiskReplicaTracker ramDiskReplicaMap)
>     throws IOException {
>   LOG.info("Added volume - getVolumeMap bpSlices:" +
>       bpSlices.values().size());
>   for (BlockPoolSlice s : bpSlices.values()) {
>     s.getVolumeMap(volumeMap, ramDiskReplicaMap);
>   }
> }
> {code}
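> The empty-loop behavior can be illustrated with a minimal standalone sketch (SketchVolume below is a hypothetical simplification, not the real FsVolumeImpl): since the bpSlices map is still empty at construction time, iterating over its values does nothing, which matches the "bpSlices:0" log lines further down.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical simplification of FsVolumeImpl: bpSlices starts out empty and
// is only populated later (here by addBlockPool), so calling visitVolumeMap
// before any block pool is added visits zero slices.
class SketchVolume {
    private final Map<String, String> bpSlices = new ConcurrentHashMap<>();

    // Counts how many slices the getVolumeMap-style loop would touch.
    int visitVolumeMap() {
        int visited = 0;
        for (String slice : bpSlices.values()) { // empty at construction time
            visited++;
        }
        return visited;
    }

    void addBlockPool(String bpid) {
        bpSlices.put(bpid, "slice-" + bpid);
    }
}
```

> Calling visitVolumeMap() before addBlockPool() returns 0, mirroring the no-op iteration the reporter observed at DataNode startup.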
> Then I added some info logging and started the DataNode; the log output agrees
> with the code description above. The detailed log is as follows:
> INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType:
> DISK, getVolumeMap begin
> INFO {color:red}Added volume - getVolumeMap bpSlices:0{color}
> INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType:
> DISK, getVolumeMap end
> INFO: Added new volume: DS-48ac6ef9-fd6f-49b7-a5fb-77b82cadc973
> INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType:
> DISK
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK, getVolumeMap
> begin
> INFO {color:red}Added volume - getVolumeMap bpSlices:0{color}
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK, getVolumeMap
> end
> INFO: Added new volume: DS-159b615c-144c-4d99-8b63-5f37247fb8ed
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK
> In conclusion, I think the getVolumeMap call for each fsVolume is unnecessary
> when instantiating the FsDatasetImpl object.