[ 
https://issues.apache.org/jira/browse/AMBARI-24302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16548355#comment-16548355
 ] 

Dmitry Lysnichenko commented on AMBARI-24302:
---------------------------------------------

Looks like the reason is this code in the default hadoop-env config:

export HADOOP_OPTS="-Dhdp.version=$HDP_VERSION $HADOOP_OPTS"

But we never export the HDP_VERSION variable from the Python scripts, so the expansion is empty and the JVM gets a blank -Dhdp.version=.
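A minimal sketch of the failure mode, using plain POSIX shell (no Hadoop involved): when the launching process never exports HDP_VERSION, the hadoop-env expansion produces an empty value, exactly as seen in the ps output below.

```shell
#!/bin/sh
# Simulate the environment hadoop-env runs in when the Python launcher
# never exported HDP_VERSION: the expansion is empty.
unset HDP_VERSION
HADOOP_OPTS="-Dhdp.version=$HDP_VERSION"
echo "$HADOOP_OPTS"            # -> "-Dhdp.version=" (blank value)

# With the variable exported before hadoop-env is sourced, the flag is
# populated correctly.
export HDP_VERSION=2.6.3.0-235
HADOOP_OPTS="-Dhdp.version=$HDP_VERSION"
echo "$HADOOP_OPTS"            # -> "-Dhdp.version=2.6.3.0-235"
```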

> -Dhdp.version shows blank value in process output for Datanodes
> ---------------------------------------------------------------
>
>                 Key: AMBARI-24302
>                 URL: https://issues.apache.org/jira/browse/AMBARI-24302
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 2.6.2
>            Reporter: Dmitry Lysnichenko
>            Assignee: Dmitry Lysnichenko
>            Priority: Blocker
>             Fix For: 2.7.1
>
>
> When we check the output of {{ps -ef | grep SecureDataNodeStarter}}, it 
> shows multiple instances of {{-Dhdp.version}} being blank/empty and some 
> having the right value as shown below:
> {quote}hdfs     40829 40798  1 14:11 ?        00:00:18 jsvc.exec 
> -Dproc_datanode -outfile /hdplogs/hadoop/hdfs/jsvc.out -errfile 
> /hdplogs/hadoop/hdfs/jsvc.err -pidfile 
> /var/run/hadoop/hdfs/hadoop_secure_dn.pid -nodetach -user hdfs -cp 
> /usr/hdp/current/hadoop-client/conf:/usr/hdp/2.6.3.0-235/hadoop/lib/*:/usr/hdp/2.6.3.0-235/hadoop/.//*:/usr/hdp/2.6.3.0-235/hadoop-hdfs/./:/usr/hdp/2.6.3.0-235/hadoop-hdfs/lib/*:/usr/hdp/2.6.3.0-235/hadoop-hdfs/.//*:/usr/hdp/2.6.3.0-235/hadoop-yarn/lib/*:/usr/hdp/2.6.3.0-235/hadoop-yarn/.//*:/usr/hdp/2.6.3.0-235/hadoop-mapreduce/lib/*:/usr/hdp/2.6.3.0-235/hadoop-mapreduce/.//*
>  -Xmx1024m {color:#14892c}*-Dhdp.version=2.6.3.0-235*{color} 
> -Djava.net.preferIPv4Stack=true {color:#d04437}-Dhdp.version= 
> {color}-Djava.net.preferIPv4Stack=true {color:#d04437}-Dhdp.version= 
> {color}-Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hdplogs/hadoop/ 
> -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.6.3.0-235/hadoop 
> -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console 
> -Djava.library.path=:/usr/hdp/2.6.3.0-235/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.3.0-235/hadoop/lib/native
>  -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true 
> {color:#14892c}*-Dhdp.version=2.6.3.0-235*{color} 
> -Dhadoop.log.dir=/hdplogs/hadoop/ 
> -Dhadoop.log.file=hadoop-hdfs-datanode-guedlpa12nf01.devfg.rbc.com.log 
> -Dhadoop.home.dir=/usr/hdp/2.6.3.0-235/hadoop -Dhadoop.id.str= 
> -Dhadoop.root.logger=INFO,RFA 
> -Djava.library.path=:/usr/hdp/2.6.3.0-235/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.3.0-235/hadoop/lib/native:/usr/hdp/2.6.3.0-235/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.6.3.0-235/hadoop/lib/native
>  -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true 
> -Dhadoop.log.dir=/hdplogs/hadoop/hdfs -Dhadoop.id.str=hdfs -jvm server 
> -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC 
> -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly 
> -XX:ErrorFile=/hdplogs/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m 
> -XX:MaxNewSize=200m -Xloggc:/hdplogs/hadoop/hdfs/gc.log-201807051411 
> -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps 
> -Xms1024m -Xmx1024m -Dhadoop.security.logger=INFO,DRFAS 
> -Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=4 
> -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 
> -XX:+UseCMSInitiatingOccupancyOnly 
> -XX:ErrorFile=/hdplogs/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m 
> -XX:MaxNewSize=200m -Xloggc:/hdplogs/hadoop/hdfs/gc.log-201807051411 
> -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps 
> -Xms1024m -Xmx1024m -Dhadoop.security.logger=INFO,DRFAS 
> -Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=4 
> -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 
> -XX:+UseCMSInitiatingOccupancyOnly 
> -XX:ErrorFile=/hdplogs/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m 
> -XX:MaxNewSize=200m -Xloggc:/hdplogs/hadoop/hdfs/gc.log-201807051411 
> -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps 
> -Xms1024m -Xmx1024m -Dhadoop.security.logger=INFO,DRFAS 
> -Dhdfs.audit.logger=INFO,DRFAAUDIT -Dhadoop.security.logger=INFO,RFAS 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter{quote}
> Several parameters are repeated multiple times, as seen above, e.g. 
> {{-Dhdp.version}} and {{-Djava.net.preferIPv4Stack}}.
> It would be great to fix this so the output is consistent.
> As a temporary workaround, I was able to bypass this issue by updating the 
> hadoop-env template from Ambari > HDFS > Configs > Advanced > Advanced 
> hadoop-env.
> Old Value: {{export HADOOP_OPTS="-Dhdp.version=$HDP_VERSION $HADOOP_OPTS"}}
> Updated Value: {{export HADOOP_OPTS="-Dhdp.version=`hdp-select --version` 
> $HADOOP_OPTS"}}
> P.S. I also checked this on an HDP-2.6.2 cluster, and the problem was not 
> seen there.
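The workaround quoted above swaps the unreliable environment-variable expansion for a command substitution that asks the stack itself for its version. A sketch of why that sidesteps the problem, with a stub standing in for the real {{hdp-select --version}} (which only exists on an HDP node):

```shell
#!/bin/sh
# Stub for illustration only; on a real node this would be the actual
# `hdp-select --version` binary.
hdp_select() { echo "2.6.3.0-235"; }

# The command substitution runs at the time hadoop-env is evaluated, so it
# does not depend on the launcher having exported HDP_VERSION beforehand.
export HADOOP_OPTS="-Dhdp.version=$(hdp_select) $HADOOP_OPTS"
echo "$HADOOP_OPTS"
```

The trade-off is that the template now shells out on every evaluation instead of reading an (intended-to-be) pre-set variable, which is why it is positioned as a temporary workaround rather than a fix.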



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)