[ 
https://issues.apache.org/jira/browse/HDFS-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491523#comment-15491523
 ] 

Xiaobing Zhou edited comment on HDFS-10824 at 9/15/16 5:37 PM:
---------------------------------------------------------------

Thanks for the review [~anu]. v002 is posted.
1. The member is named storageCap to avoid unnecessary edits. The 
storageCapacities parameter of startDataNodes is intended for starting 
additional DNs, so the capacity-memorizing logic is changed accordingly.
2. It's better not to remove the storageCapacities parameter, since 
startDataNodes is designed to start additional DNs in a running cluster with 
different capacities.
3. triggerHeartbeat is there to wait for the local DN storage to be initialized 
after the block pool has successfully connected to its NN. See the call chain 
DataNode#runDatanodeDaemon -> blockPoolManager.startAll() -> 
BPOfferService.start -> BPServiceActor.start -> BPServiceActor.run -> 
BPServiceActor.connectToNNAndHandshake: storage initialization is triggered 
asynchronously. triggerHeartbeat is therefore necessary here (see the sketch 
after this list), while triggerBlockReport is not.
4. Different capacities are now passed in the test.
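
For reference, a minimal sketch of that ordering, assuming the Builder's 
storagesPerDatanode()/storageCapacities() knobs, MiniDFSCluster#triggerHeartbeats 
and DFSClient#getDatanodeStorageReport; the capacity values and the class name 
below are only illustrative:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSClient;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;
import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport;
import org.apache.hadoop.hdfs.server.protocol.StorageReport;

public class StorageCapacitySketch {
  public static void main(String[] args) throws Exception {
    // Two DNs, two volumes each, all with different capacities (point 4).
    long[][] capacities = {
        {10L * 1024 * 1024, 20L * 1024 * 1024},
        {30L * 1024 * 1024, 40L * 1024 * 1024}};
    Configuration conf = new HdfsConfiguration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .numDataNodes(2)
        .storagesPerDatanode(2)
        .storageCapacities(capacities)
        .build();
    try {
      cluster.waitActive();
      // Storage initialization is triggered asynchronously after
      // BPServiceActor.connectToNNAndHandshake, so force heartbeats before
      // reading the reports (point 3); block reports are not needed for this.
      cluster.triggerHeartbeats();

      DFSClient client = cluster.getFileSystem().getClient();
      for (DatanodeStorageReport dn :
          client.getDatanodeStorageReport(DatanodeReportType.LIVE)) {
        for (StorageReport sr : dn.getStorageReports()) {
          System.out.println("capacity=" + sr.getCapacity());
        }
      }
    } finally {
      cluster.shutdown();
    }
  }
}
{code}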



> MiniDFSCluster#storageCapacities has no effects on real capacity
> ----------------------------------------------------------------
>
>                 Key: HDFS-10824
>                 URL: https://issues.apache.org/jira/browse/HDFS-10824
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Xiaobing Zhou
>            Assignee: Xiaobing Zhou
>         Attachments: HDFS-10824.000.patch, HDFS-10824.001.patch, 
> HDFS-10824.002.patch
>
>
> It has been noticed that MiniDFSCluster#storageCapacities has no effect on 
> the real capacity. It can be reproduced by explicitly setting 
> storageCapacities and then calling 
> ClientProtocol#getDatanodeStorageReport(DatanodeReportType.LIVE) to compare 
> the results. The following is the storage report for one node with two 
> volumes after setting the capacity to 300 * 1024. Apparently, the capacity 
> is not changed.
> |adminState|DatanodeInfo$AdminStates  (id=6861)|
> |blockPoolUsed|215192|
> |cacheCapacity|0|
> |cacheUsed|0|
> |capacity|998164971520|
> |datanodeUuid|"839912e9-5bcb-45d1-81cf-9a9c9c02a00b" (id=6862)|
> |dependentHostNames|LinkedList<E>  (id=6863)|
> |dfsUsed|215192|
> |hostName|"127.0.0.1" (id=6864)|
> |infoPort|64222|
> |infoSecurePort|0|
> |ipAddr|"127.0.0.1" (id=6865)|
> |ipcPort|64223|
> |lastUpdate|1472682790948|
> |lastUpdateMonotonic|209605640|
> |level|0|
> |location|"/default-rack" (id=6866)|
> |maintenanceExpireTimeInMS|0|
> |parent|null|
> |peerHostName|null|
> |remaining|20486512640|
> |softwareVersion|null|
> |upgradeDomain|null|
> |xceiverCount|1|
> |xferAddr|"127.0.0.1:64220" (id=6855)|
> |xferPort|64220|
> [0]StorageReport  (id=6856)
> |blockPoolUsed|4096|
> |capacity|499082485760|
> |dfsUsed|4096|
> |failed|false|
> |remaining|10243256320|
> |storage|DatanodeStorage  (id=6869)|
> [1]StorageReport  (id=6859)
> |blockPoolUsed|211096|
> |capacity|499082485760|
> |dfsUsed|211096|
> |failed|false|
> |remaining|10243256320|
> |storage|DatanodeStorage  (id=6872)|
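
For completeness, the reproduction outlined in the quoted description could 
look roughly like this (a hedged sketch; the class name is illustrative, and 
before the patch the reported per-storage capacity stays at the real disk 
size, e.g. 499082485760 above, rather than the configured 300 * 1024):
{code:java}
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;
import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport;
import org.apache.hadoop.hdfs.server.protocol.StorageReport;

public class Hdfs10824Repro {
  public static void main(String[] args) throws Exception {
    final long cap = 300 * 1024;  // configured per-volume capacity
    // One DN with two volumes, as in the report above.
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(new HdfsConfiguration())
        .numDataNodes(1)
        .storagesPerDatanode(2)
        .storageCapacities(new long[][] {{cap, cap}})
        .build();
    try {
      cluster.waitActive();
      cluster.triggerHeartbeats();
      // Compare the configured capacity with what the NN reports per storage.
      for (DatanodeStorageReport dn : cluster.getFileSystem().getClient()
          .getDatanodeStorageReport(DatanodeReportType.LIVE)) {
        for (StorageReport sr : dn.getStorageReports()) {
          System.out.println("configured=" + cap
              + " reported=" + sr.getCapacity());
        }
      }
    } finally {
      cluster.shutdown();
    }
  }
}
{code}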


