Myroslav Papirkovskyy created AMBARI-10837:
----------------------------------------------
Summary: HDFS Review: Multiple recommendation API updates for HDFS
configs
Key: AMBARI-10837
URL: https://issues.apache.org/jira/browse/AMBARI-10837
Project: Ambari
Issue Type: Bug
Components: ambari-server
Affects Versions: 2.1.0
Reporter: Myroslav Papirkovskyy
Assignee: Myroslav Papirkovskyy
Priority: Critical
Fix For: 2.1.0
An HDFS configs review was done, and the configs spreadsheet has been updated with
various changes; the following must be fixed.
* The configs below are to be marked as {{depends_on}} {{namenode_heapsize}}, and
their values should be derived from it (basically, ignore the documented value for
these configs). Whenever {{namenode_heapsize}} changes in the UI, the values below
should be updated as well (see the first sketch after this list):
** namenode_opt_newsize (hadoop-env.sh) = {{namenode_heapsize/8}}
** namenode_opt_maxnewsize (hadoop-env.sh) = {{namenode_heapsize/8}}
* {{dfs.namenode.safemode.threshold-pct}} (see the clamping sketch after this list)
** minimum = 0.990f
** maximum = 1.000f
** default = 0.999f
** increment-step = 0.001f
* {{dfs.datanode.failed.volumes.tolerated}} should be {{depends_on}}
{{dfs.datanode.data.dir}}. So if a user adds an additional folder to
{{dfs.datanode.data.dir}}, then the *value and maximum* of
{{dfs.datanode.failed.volumes.tolerated}} should change accordingly (see the
sketch after this list).
* {{namenode_heapsize}} calculations should take into account host memory
limits. {{namenode_heapsize}} should be {{host-memory - os-reserved-memory}}.
Also, if there are any other master components on the same host, then it should
be halved ({{namenode_heapsize/2}}); see the last sketch after this list.
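A minimal standalone sketch of the young-generation derivation from the first item; the function name, the MB units, and the integer division are assumptions for illustration, not the actual stack advisor API:

{code:python}
def derive_namenode_newsize(namenode_heapsize_mb):
    """Derive the hadoop-env.sh young-generation sizes from namenode_heapsize.

    Per this issue, both namenode_opt_newsize and namenode_opt_maxnewsize
    are one eighth of namenode_heapsize (all values in MB here).
    """
    newsize_mb = namenode_heapsize_mb // 8
    return {
        "namenode_opt_newsize": newsize_mb,
        "namenode_opt_maxnewsize": newsize_mb,
    }


# Example: a 1024 MB NameNode heap yields 128 MB for both sizes.
print(derive_namenode_newsize(1024))
{code}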
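A small sketch of how the {{dfs.namenode.safemode.threshold-pct}} slider bounds could be enforced; the helper name and the clamping/snapping behavior are illustrative assumptions, not logic prescribed by this issue:

{code:python}
def clamp_safemode_threshold(value, minimum=0.990, maximum=1.000,
                             step=0.001, default=0.999):
    """Snap a proposed dfs.namenode.safemode.threshold-pct value onto the
    slider range described above; missing values fall back to the default."""
    if value is None:
        return default
    clamped = min(max(value, minimum), maximum)
    # Round to the nearest increment step, keeping three decimal places.
    steps = round((clamped - minimum) / step)
    return round(minimum + steps * step, 3)


print(clamp_safemode_threshold(None))    # 0.999 (default)
print(clamp_safemode_threshold(1.2))     # 1.0 (capped at the maximum)
print(clamp_safemode_threshold(0.9931))  # 0.993 (snapped to the step)
{code}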
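A standalone sketch of the {{dfs.datanode.failed.volumes.tolerated}} dependency; the value formula (half the directory count) and the maximum (one less than the directory count) are assumptions for illustration, since the issue only states that both must track {{dfs.datanode.data.dir}}:

{code:python}
def recommend_failed_volumes_tolerated(dfs_datanode_data_dir):
    """Recompute dfs.datanode.failed.volumes.tolerated when
    dfs.datanode.data.dir (a comma-separated directory list) changes."""
    data_dirs = [d for d in dfs_datanode_data_dir.split(",") if d.strip()]
    # The maximum has to stay below the number of data directories.
    maximum = max(len(data_dirs) - 1, 0)
    # Recommended value: an assumption (half the directories, within bounds).
    value = min(len(data_dirs) // 2, maximum)
    return {"value": value, "maximum": maximum}


# Example: three data directories allow at most two failed volumes.
print(recommend_failed_volumes_tolerated("/grid/0/dn,/grid/1/dn,/grid/2/dn"))
{code}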
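A minimal sketch of the {{namenode_heapsize}} calculation from the last item; the function signature, the MB units, and the os-reserved-memory parameter are illustrative assumptions rather than the actual stack advisor helpers:

{code:python}
def recommend_namenode_heapsize(host_memory_mb, os_reserved_memory_mb,
                                other_masters_on_host=0):
    """Recommend namenode_heapsize (MB) from the NameNode host's memory:
    host memory minus the OS-reserved memory, halved when other master
    components share the host."""
    heapsize_mb = host_memory_mb - os_reserved_memory_mb
    if other_masters_on_host > 0:
        heapsize_mb //= 2
    return max(heapsize_mb, 0)


# Example: 16384 MB host, 2048 MB reserved for the OS, one co-located master.
print(recommend_namenode_heapsize(16384, 2048, other_masters_on_host=1))  # 7168
{code}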
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)