[ https://issues.apache.org/jira/browse/HDFS-17063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17775979#comment-17775979 ]

ASF GitHub Bot commented on HDFS-17063:
---------------------------------------

tomscut commented on code in PR #5793:
URL: https://github.com/apache/hadoop/pull/5793#discussion_r1361415806


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml:
##########
@@ -414,12 +421,19 @@
   <value>0</value>
   <description>Reserved space in percentage. Read 
dfs.datanode.du.reserved.calculator to see
     when this takes effect. The actual number of bytes reserved will be 
calculated by using the
-    total capacity of the data directory in question. Specific storage type 
based reservation
+    total capacity of the data directory in question. Specific directory based 
reservation is
+    supported.The property can be followed with directory name which is set at 
'dfs.datanode.data.dir'.

Review Comment:
   Add a space after the period. `supported.The property` -> `supported. The 
property`





> Datanode configures different Capacity Reserved for each disk
> -------------------------------------------------------------
>
>                 Key: HDFS-17063
>                 URL: https://issues.apache.org/jira/browse/HDFS-17063
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: datanode, hdfs
>    Affects Versions: 3.3.6
>            Reporter: Jiale Qi
>            Assignee: Jiale Qi
>            Priority: Minor
>              Labels: pull-request-available
>
> Currently, _dfs.datanode.du.reserved_ takes effect for all data directories of a datanode.
> This issue allows a cluster administrator to configure
> {_}dfs.datanode.du.reserved./data/hdfs1/data{_}, which takes effect only for a
> specific directory.
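Based on the description above, a per-directory reservation could look like the following hdfs-site.xml sketch. This is an illustration of the proposed property naming, not the merged implementation; the directory suffix must match an entry in dfs.datanode.data.dir, and the byte values are arbitrary examples:

```xml
<!-- Sketch of per-directory reserved space, assuming the feature proposed in this PR.
     The suffix (/data/hdfs1/data) must be a directory listed in dfs.datanode.data.dir. -->
<property>
  <name>dfs.datanode.du.reserved./data/hdfs1/data</name>
  <!-- Example: reserve 10 GB (in bytes) on this disk only -->
  <value>10737418240</value>
</property>
<property>
  <name>dfs.datanode.du.reserved</name>
  <!-- Example: 5 GB default reservation for all other data directories -->
  <value>5368709120</value>
</property>
```

Under this scheme, a directory-specific property would override the global dfs.datanode.du.reserved for that disk, letting heterogeneous disks carry different reservations.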



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
