[ https://issues.apache.org/jira/browse/HDFS-8610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14587964#comment-14587964 ]
Ajith S commented on HDFS-8610:
-------------------------------
Hi [~brahmareddy]
Yes, you are right. Please refer to HDFS-8574: when a very high number of
small files is present, the block report grows so large that we encounter
{{InvalidProtocolBufferException}}. To avoid this, the data dir was split into
subfolders so that each block report would be smaller. Looks like we have a
dead end for this scenario.
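
For a rough feel of the numbers, here is a back-of-envelope sketch (not
Hadoop code). It assumes one report per storage directory and protobuf's
default 64 MB message-size limit; the block count and per-block byte cost
are made-up illustrative values:
{code:java}
/**
 * Back-of-envelope sketch (assumptions only, not Hadoop code): if block
 * reports go out per storage directory, spreading the same blocks over
 * more dirs shrinks each report below protobuf's default 64 MB limit.
 */
public class BlockReportSize {
    // Rough per-block encoding cost in a report; illustrative guess only.
    static final long BYTES_PER_BLOCK = 24;
    static final long PROTOBUF_LIMIT = 64L * 1024 * 1024; // 64 MB default

    public static void main(String[] args) {
        long totalBlocks = 40_000_000L; // e.g. very many small files
        for (int dirs : new int[] {1, 11, 33}) {
            long perReport = totalBlocks / dirs * BYTES_PER_BLOCK;
            System.out.printf("%d dirs -> ~%d MB per report (limit %s)%n",
                dirs, perReport / (1024 * 1024),
                perReport > PROTOBUF_LIMIT ? "exceeded" : "ok");
        }
    }
}
{code}
So splitting the same disk into more configured dirs trades smaller
per-storage reports for the capacity miscalculation reported below.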
> If several dirs that belong to one disk are set in "dfs.datanode.data.dir",
> NN calculates the capacity wrong
> -----------------------------------------------------------------------------------------------------
>
> Key: HDFS-8610
> URL: https://issues.apache.org/jira/browse/HDFS-8610
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: HDFS
> Affects Versions: 2.7.0
> Reporter: tongshiquan
> Assignee: Ajith S
> Priority: Minor
>
> On my machine, the disk info is as below:
> /dev/sdc1 8.1T 2.0T 5.7T 27% /export2
> /dev/sdd1 8.1T 2.0T 5.7T 27% /export3
> /dev/sde1 8.1T 2.8T 5.0T 36% /export4
> then "dfs.datanode.data.dir" is set as below, with 11 dirs on each disk:
> /export2/BigData/hadoop/data/dn,/export2/BigData/hadoop/data/dn1,/export2/BigData/hadoop/data/dn2,/export2/BigData/hadoop/data/dn3,/export2/BigData/hadoop/data/dn4,/export2/BigData/hadoop/data/dn5,/export2/BigData/hadoop/data/dn6,/export2/BigData/hadoop/data/dn7,/export2/BigData/hadoop/data/dn8,/export2/BigData/hadoop/data/dn9,/export2/BigData/hadoop/data/dn10,
> /export3/BigData/hadoop/data/dn,/export3/BigData/hadoop/data/dn1,/export3/BigData/hadoop/data/dn2,/export3/BigData/hadoop/data/dn3,/export3/BigData/hadoop/data/dn4,/export3/BigData/hadoop/data/dn5,/export3/BigData/hadoop/data/dn6,/export3/BigData/hadoop/data/dn7,/export3/BigData/hadoop/data/dn8,/export3/BigData/hadoop/data/dn9,/export3/BigData/hadoop/data/dn10,
> /export4/BigData/hadoop/data/dn,/export4/BigData/hadoop/data/dn1,/export4/BigData/hadoop/data/dn2,/export4/BigData/hadoop/data/dn3,/export4/BigData/hadoop/data/dn4,/export4/BigData/hadoop/data/dn5,/export4/BigData/hadoop/data/dn6,/export4/BigData/hadoop/data/dn7,/export4/BigData/hadoop/data/dn8,/export4/BigData/hadoop/data/dn9,/export4/BigData/hadoop/data/dn10
> then NN will think this DN has 8.1T * 33 = 267.3 TB, but actually it only
> has 24.3 TB
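
For illustration, a minimal standalone Java sketch (not the actual DataNode
code) of why the sum above goes wrong: adding up per-dir capacity counts a
shared disk once per configured dir, whereas grouping the dirs by their
{{FileStore}} counts each disk only once. The paths are a hypothetical subset
of the ones from the description:
{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Minimal sketch of the miscalculation (not actual DataNode code):
 * summing per-dir capacity counts a shared disk once per configured dir,
 * while grouping dirs by FileStore counts each disk only once.
 */
public class CapacityCount {
    public static void main(String[] args) throws IOException {
        // Hypothetical subset of the dirs from the issue description.
        String[] dataDirs = {
            "/export2/BigData/hadoop/data/dn",
            "/export2/BigData/hadoop/data/dn1", // same disk as dn
            "/export3/BigData/hadoop/data/dn"
        };

        long naive = 0;
        Map<String, Long> perDisk = new LinkedHashMap<>();
        for (String dir : dataDirs) {
            naive += new File(dir).getTotalSpace(); // full disk size, per dir
            FileStore store = Files.getFileStore(Paths.get(dir));
            perDisk.putIfAbsent(store.name(), store.getTotalSpace());
        }
        long real = perDisk.values().stream().mapToLong(Long::longValue).sum();

        // With dn and dn1 on one 8.1T disk, 'naive' double counts it:
        // ~16.2T + 8.1T reported against 16.2T of actual capacity.
        System.out.println("per-dir sum  = " + naive);
        System.out.println("per-disk sum = " + real);
    }
}
{code}
Scaled to the 33 dirs above, the per-dir sum yields the 267.3 TB figure,
while the per-disk sum yields the true 24.3 TB.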
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)