This is expected for now. The problem is that client configs are generated and rendered on the Ambari server itself, which might not even be part of the cluster. Some properties, such as the ones you listed below, are rendered on a per-host basis and can differ depending on the versions of the components that are installed.

We believe that the download-client-configs logic needs to be rewritten to allow you to specify the host for which you want to download them.
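In the meantime, one workaround is to substitute the token yourself on the host where the configs will actually be used. The following is only a sketch, assuming an HDP node with hdp-select on the PATH and a placeholder extraction directory; it is not part of Ambari itself:

#!/usr/bin/env python
# Sketch: replace ${hdp.version} in downloaded client configs with the version
# installed on this node. Assumes hdp-select is available (standard on HDP hosts);
# the config directory used below is a placeholder.
import glob
import subprocess

def detect_hdp_version():
    # "hdp-select status hadoop-client" prints a line like
    # "hadoop-client - 2.6.4.0-91"; take the version part.
    out = subprocess.check_output(["hdp-select", "status", "hadoop-client"])
    return out.decode().strip().split(" - ")[-1]

def resolve_configs(conf_dir, version):
    # Rewrite every XML file in place, replacing the unresolved token.
    for path in glob.glob(conf_dir + "/*.xml"):
        with open(path) as f:
            text = f.read()
        with open(path, "w") as f:
            f.write(text.replace("${hdp.version}", version))

if __name__ == "__main__":
    resolve_configs("/tmp/MAPREDUCE2_CLIENT-configs", detect_hdp_version())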
On Mar 16, 2018, at 5:35 AM, Gonzalo Herreros <[email protected]> wrote:

On the cluster nodes it's normal to have hdp.version as a variable in all the configs; it gets resolved at runtime. I think the Ambari agents set it to the right value in the start scripts. However, it is a good point that if you want to download the config, it's normally because you want to use it on some client external to the node, and thus shouldn't need that.

Gonzalo

On 15 March 2018 at 21:21, Juanjo Marron <[email protected]> wrote:

Hi all,

I am using the Download Service Client Configs feature of the Ambari API (see the sketch at the end of this message), and I realized that in some of the downloaded configuration files the ${hdp.version} parameter has not been resolved. This token needs to be replaced with the actual HDP version value in order for some of the properties to be usable. Is this a bug in Ambari, or is it the expected behavior? If it is expected, how and where is the best way to obtain the value for ${hdp.version}?

A good example is the mapred-site.xml configuration file from the MapReduce2 service, where several of the downloaded property values still contain the ${hdp.version} parameter. These are two of them:

<property>
  <name>mapreduce.application.classpath</name>
  <value>$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure:/usr/hdp/current/ext/hadoop/*</value>
</property>

<property>
  <name>mapreduce.application.framework.path</name>
  <value>/hdp/apps/${hdp.version}/mapreduce/mapreduce.tar.gz#mr-framework</value>
</property>

I have also seen tokens that get replaced in the UI but not in the downloaded configuration file. For example, the property yarn.nodemanager.aux-services.spark2_shuffle.classpath in Advanced yarn-site.xml (YARN service) shows this value in the UI:

{{stack_root}}/${hdp.version}/spark2/aux/*

while the downloaded client configuration contains:

<property>
  <name>yarn.nodemanager.aux-services.spark2_shuffle.classpath</name>
  <value>/usr/hdp/${hdp.version}/spark2/aux/*</value>
</property>

{{stack_root}} gets properly replaced, but ${hdp.version} remains as an unresolved string. How is that? Can't the same logic be applied to both parameters?

I would appreciate some answers and more details on this topic.

Thanks
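For reference, a minimal sketch of the download call discussed above, using only the Python standard library. The server address, credentials, cluster name, and component are placeholders, and the ?format=client_config_tar parameter is the commonly documented form of the request:

# Sketch: download the MAPREDUCE2 client config tarball via the Ambari REST API.
# Server, credentials, cluster, and component names below are placeholders.
import base64
from urllib.request import Request, urlopen

URL = ("http://ambari-server.example.com:8080/api/v1/clusters/mycluster"
       "/services/MAPREDUCE2/components/MAPREDUCE2_CLIENT"
       "?format=client_config_tar")

req = Request(URL)
req.add_header("Authorization",
               "Basic " + base64.b64encode(b"admin:admin").decode("ascii"))

# Save the tarball; the downloaded configs may still contain ${hdp.version}.
with urlopen(req) as resp, open("MAPREDUCE2_CLIENT-configs.tar.gz", "wb") as out:
    out.write(resp.read())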
