[ https://issues.apache.org/jira/browse/HDFS-12302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16136129#comment-16136129 ]

liaoyuxiangqin edited comment on HDFS-12302 at 1/31/18 8:51 AM:
----------------------------------------------------------------

[~xiaochen] Thanks for your review. Could you help me check whether this problem 
really exists? Thanks!


was (Author: liaoyuxiangqin):
[~drankye] Thanks for your review. Could you help me check whether this problem 
really exists? Thanks!

> FSVolume's getVolumeMap actually does nothing when instantiating an FsDatasetImpl object
> -----------------------------------------------------------------------------------
>
>                 Key: HDFS-12302
>                 URL: https://issues.apache.org/jira/browse/HDFS-12302
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>    Affects Versions: 3.0.0-alpha4
>         Environment: cluster: 3 nodes
> os:(Red Hat 2.6.33.20, Red Hat 3.10.0-514.6.1.el7.x86_64, 
> Ubuntu4.4.0-31-generic)
> hadoop version: hadoop-3.0.0-alpha4
>            Reporter: liaoyuxiangqin
>            Assignee: liaoyuxiangqin
>            Priority: Major
>         Attachments: HDFS-12302.001.patch, HDFS-12302.002.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> When reading the code that instantiates the FsDatasetImpl object during DataNode 
> startup, I found that the getVolumeMap function actually cannot collect ReplicaMap 
> info for each fsVolume, because the fsVolume's bpSlices has not been initialized 
> yet at that point. The relevant code is as follows:
> {code:title=FsVolumeImpl.java}
>   void getVolumeMap(ReplicaMap volumeMap,
>                     final RamDiskReplicaTracker ramDiskReplicaMap)
>       throws IOException {
>     LOG.info("Added volume -  getVolumeMap bpSlices:" + bpSlices.values().size());
>     for (BlockPoolSlice s : bpSlices.values()) {
>       s.getVolumeMap(volumeMap, ramDiskReplicaMap);
>     }
>   }
> {code}
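> The ordering can be summarized with a minimal, self-contained sketch (the class and 
> method names below are simplified stand-ins and assumptions, not the real HDFS 
> classes): block pool slices are only registered after the volume has been added, so 
> the getVolumeMap call made while the dataset object is being constructed iterates an 
> empty bpSlices map and does nothing.
> {code:title=Startup ordering (illustrative sketch only)}
> import java.util.Map;
> import java.util.concurrent.ConcurrentHashMap;
>
> // Toy stand-in for a volume: bpSlices only gains entries when a block pool is added.
> class ToyVolume {
>   private final Map<String, String> bpSlices = new ConcurrentHashMap<>();
>
>   void getVolumeMap() {
>     // With no block pool slices registered yet, this loop body never runs.
>     for (String slice : bpSlices.values()) {
>       System.out.println("scanning " + slice);
>     }
>   }
>
>   void addBlockPool(String bpid) {
>     bpSlices.put(bpid, "slice for " + bpid);
>   }
> }
>
> class StartupOrderSketch {
>   public static void main(String[] args) {
>     ToyVolume volume = new ToyVolume();
>     volume.getVolumeMap();        // called while the dataset is instantiated: no-op
>     volume.addBlockPool("BP-1");  // block pools are only registered later in startup
>     volume.getVolumeMap();        // only now does the loop see any slices
>   }
> }
> {code}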
> Then I added some info logs and started the DataNode; the log output matches the 
> code description above. The detailed log is as follows:
> INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: DISK, getVolumeMap begin
> INFO {color:red}Added volume - getVolumeMap bpSlices:0{color}
> INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: DISK, getVolumeMap end
> INFO: Added new volume: DS-48ac6ef9-fd6f-49b7-a5fb-77b82cadc973
> INFO: Added volume - [DISK]file:/home/data2/hadoop/hdfs/data, StorageType: DISK
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK, getVolumeMap begin
> INFO {color:red}Added volume - getVolumeMap bpSlices:0{color}
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK, getVolumeMap end
> INFO: Added new volume: DS-159b615c-144c-4d99-8b63-5f37247fb8ed
> INFO: Added volume - [DISK]file:/hdfs/data, StorageType: DISK
> In conclusion, I think the getVolumeMap step for each fsVolume is not necessary 
> when instantiating the FsDatasetImpl object.
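> To make the redundancy concrete, here is a toy comparison (again using assumed 
> stand-in names, not the real HDFS API and not necessarily what the attached patches 
> do): performing or skipping the early getVolumeMap call leaves the replica map in 
> exactly the same empty state, because the slices only appear after block pools are 
> added.
> {code:title=Early call vs. no call (toy comparison, assumed names)}
> import java.util.HashMap;
> import java.util.Map;
>
> // Toy stand-ins; none of these are the real HDFS classes.
> class ToyReplicaMap {
>   final Map<String, String> replicas = new HashMap<>();
> }
>
> class ToyFsVolume {
>   final Map<String, String> bpSlices = new HashMap<>();
>
>   // Copies whatever block pool slices exist into the target replica map.
>   void getVolumeMap(ToyReplicaMap target) {
>     target.replicas.putAll(bpSlices);
>   }
> }
>
> class RedundantCallSketch {
>   public static void main(String[] args) {
>     ToyFsVolume volumeA = new ToyFsVolume();
>     ToyReplicaMap mapWithEarlyCall = new ToyReplicaMap();
>     volumeA.getVolumeMap(mapWithEarlyCall);   // early call: bpSlices is still empty
>
>     ToyFsVolume volumeB = new ToyFsVolume();
>     ToyReplicaMap mapWithoutEarlyCall = new ToyReplicaMap();
>     // no early call at all
>
>     // Both maps are identical (both empty), so the early call contributed nothing.
>     System.out.println(mapWithEarlyCall.replicas.equals(mapWithoutEarlyCall.replicas)); // true
>   }
> }
> {code}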



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
