[ https://issues.apache.org/jira/browse/AMBARI-19588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15829709#comment-15829709 ]
Hadoop QA commented on AMBARI-19588:
------------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12848260/AMBARI-19588.patch
against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests in ambari-server.
Test results: https://builds.apache.org/job/Ambari-trunk-test-patch/10140//testReport/
Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/10140//console
This message is automatically generated.
> Step7 installer config validation request failed
> ------------------------------------------------
>
> Key: AMBARI-19588
> URL: https://issues.apache.org/jira/browse/AMBARI-19588
> Project: Ambari
> Issue Type: Bug
> Affects Versions: trunk
> Reporter: Dmytro Grinenko
> Assignee: Dmytro Grinenko
> Priority: Critical
> Attachments: AMBARI-19588.patch
>
>
> * Try to install a cluster using a trunk build:
> 1. 3 nodes
> 2. HDFS, ZooKeeper (ZK), Ambari Metrics (AMS), SmartSense (SS)
> * Proceed to step 7
> * Click "Next"
> The validation request fails with the following stderr:
> {noformat}
> Traceback (most recent call last):
>   File "/var/lib/ambari-server/resources/scripts/stack_advisor.py", line 158, in <module>
>     main(sys.argv)
>   File "/var/lib/ambari-server/resources/scripts/stack_advisor.py", line 115, in main
>     result = stackAdvisor.validateConfigurations(services, hosts)
>   File "/var/lib/ambari-server/resources/scripts/../stacks/stack_advisor.py", line 862, in validateConfigurations
>     validationItems = self.getConfigurationsValidationItems(services, hosts)
>   File "/var/lib/ambari-server/resources/scripts/../stacks/stack_advisor.py", line 943, in getConfigurationsValidationItems
>     items.extend(self.getConfigurationsValidationItemsForService(configurations, recommendedDefaults, service, services, hosts))
>   File "/var/lib/ambari-server/resources/scripts/../stacks/stack_advisor.py", line 967, in getConfigurationsValidationItemsForService
>     resultItems = self.validateConfigurationsForSite(configurations, recommendedDefaults, services, hosts, siteName, method)
>   File "/var/lib/ambari-server/resources/scripts/../stacks/stack_advisor.py", line 958, in validateConfigurationsForSite
>     return method(siteProperties, siteRecommendations, configurations, services, hosts)
>   File "/var/lib/ambari-server/resources/scripts/./../stacks/HDP/2.0.6/services/stack_advisor.py", line 1420, in validateAmsHbaseEnvConfigurations
>     requiredMemory = getMemorySizeRequired(hostComponents, configurations)
> NameError: global name 'getMemorySizeRequired' is not defined
> {noformat}
> Stdout:
> {noformat}
> StackAdvisor implementation for stack HDP, version 2.0.6 was loaded
> StackAdvisor implementation for stack HDP, version 2.1 was loaded
> StackAdvisor implementation for stack HDP, version 2.2 was loaded
> StackAdvisor implementation for stack HDP, version 2.3 was loaded
> StackAdvisor implementation for stack HDP, version 2.4 was loaded
> StackAdvisor implementation for stack HDP, version 2.5 was loaded
> StackAdvisor implementation for stack HDP, version 2.6 was loaded
> Returning HDP26StackAdvisor implementation
> max_inmemory_regions: -1.15
> Processing file: /var/lib/ambari-server/resources/stacks/HDP/2.6/services/../../../../common-services/AMBARI_METRICS/0.1.0/package/files/service-metrics/HDFS.txt
> Processing file: /var/lib/ambari-server/resources/stacks/HDP/2.6/services/../../../../common-services/AMBARI_METRICS/0.1.0/package/files/service-metrics/AMBARI_METRICS.txt
> Processing file: /var/lib/ambari-server/resources/stacks/HDP/2.6/services/../../../../common-services/AMBARI_METRICS/0.1.0/package/files/service-metrics/HOST.txt
> metrics length: 133
> 2017-01-11 08:54:20,489 - Calculating Hadoop Proxy User recommendations for HDFS service.
> 2017-01-11 08:54:20,489 - Calculating Hadoop Proxy User recommendations for YARN service.
> 2017-01-11 08:54:20,489 - Calculating Hadoop Proxy User recommendations for HIVE service.
> 2017-01-11 08:54:20,489 - Calculating Hadoop Proxy User recommendations for OOZIE service.
> 2017-01-11 08:54:20,489 - Calculating Hadoop Proxy User recommendations for FALCON service.
> 2017-01-11 08:54:20,490 - Calculating Hadoop Proxy User recommendations for SPARK service.
> 2017-01-11 08:54:20,490 - Updated hadoop.proxyuser.hdfs.hosts as : *
> 2017-01-11 08:54:20,492 - ServiceAdvisor implementation for service SMARTSENSE was loaded
> SiteName: ams-env, method: validateAmsEnvConfigurations
> Site properties: {'ambari_metrics_user': 'ams', 'metrics_monitor_log_dir':
> '/var/log/ambari-metrics-monitor', 'metrics_collector_log_dir':
> '/var/log/ambari-metrics-collector', 'metrics_monitor_pid_dir':
> '/var/run/ambari-metrics-monitor', 'metrics_collector_heapsize': '512',
> 'content': '\n# Set environment variables here.\n\n# AMS instance
> name\nexport AMS_INSTANCE_NAME={{hostname}}\n\n# The java implementation to
> use. Java 1.6 required.\nexport JAVA_HOME={{java64_home}}\n\n# Collector Log
> directory for log4j\nexport
> AMS_COLLECTOR_LOG_DIR={{ams_collector_log_dir}}\n\n# Monitor Log directory
> for outfile\nexport AMS_MONITOR_LOG_DIR={{ams_monitor_log_dir}}\n\n#
> Collector pid directory\nexport
> AMS_COLLECTOR_PID_DIR={{ams_collector_pid_dir}}\n\n# Monitor pid
> directory\nexport AMS_MONITOR_PID_DIR={{ams_monitor_pid_dir}}\n\n# AMS HBase
> pid directory\nexport AMS_HBASE_PID_DIR={{hbase_pid_dir}}\n\n# AMS Collector
> heapsize\nexport AMS_COLLECTOR_HEAPSIZE={{metrics_collector_heapsize}}\n\n#
> HBase Tables Initialization check enabled\nexport
> AMS_HBASE_INIT_CHECK_ENABLED={{ams_hbase_init_check_enabled}}\n\n# AMS
> Collector options\nexport
> AMS_COLLECTOR_OPTS="-Djava.library.path=/usr/lib/ams-hbase/lib/hadoop-native"\n{%
> if security_enabled %}\nexport AMS_COLLECTOR_OPTS="$AMS_COLLECTOR_OPTS
> -Djava.security.auth.login.config={{ams_collector_jaas_config_file}}"\n{%
> endif %}\n\n# AMS Collector GC options\nexport
> AMS_COLLECTOR_GC_OPTS="-XX:+UseConcMarkSweepGC -verbose:gc
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps
> -Xloggc:{{ams_collector_log_dir}}/collector-gc.log-`date
> +\'%Y%m%d%H%M\'`"\nexport AMS_COLLECTOR_OPTS="$AMS_COLLECTOR_OPTS
> $AMS_COLLECTOR_GC_OPTS"\n\n# Metrics collector host will be blacklisted for
> specified number of seconds if metric monitor failed to connect to
> it.\nexport
> AMS_FAILOVER_STRATEGY_BLACKLISTED_INTERVAL={{failover_strategy_blacklisted_interval}}',
> 'metrics_collector_pid_dir': '/var/run/ambari-metrics-collector',
> 'timeline.metrics.skip.disk.metrics.patterns': 'true',
> 'failover_strategy_blacklisted_interval': '300'}
> Recommendations: {'metrics_collector_heapsize': '512'}
> SiteName: ams-hbase-env, method: validateAmsHbaseEnvConfigurations
> Site properties: {'hbase_pid_dir': '/var/run/ambari-metrics-collector/',
> 'hbase_classpath_additional': '', 'regionserver_xmn_size': '128',
> 'max_open_files_limit': '32768', 'hbase_master_maxperm_size': '128',
> 'hbase_regionserver_xmn_ratio': '0.2', 'hbase_master_heapsize': '640',
> 'content': '\n# Set environment variables here.\n\n# The java implementation
> to use. Java 1.6+ required.\nexport JAVA_HOME={{java64_home}}\n\n# HBase
> Configuration directory\nexport
> HBASE_CONF_DIR=${HBASE_CONF_DIR:-{{hbase_conf_dir}}}\n\n# Extra Java
> CLASSPATH elements.
> Optional.\nadditional_cp={{hbase_classpath_additional}}\nif [ -n
> "$additional_cp" ];\nthen\n export
> HBASE_CLASSPATH=${HBASE_CLASSPATH}:$additional_cp\nelse\n export
> HBASE_CLASSPATH=${HBASE_CLASSPATH}\nfi\n\n# The maximum amount of heap to use
> for hbase shell.\nexport HBASE_SHELL_OPTS="-Xmx256m"\n\n# Extra Java runtime
> options.\n# Below are what we set by default. May only work with SUN JVM.\n#
> For more on why as well as other possible settings,\n# see
> http://wiki.apache.org/hadoop/PerformanceTuning\nexport
> HBASE_OPTS="-XX:+UseConcMarkSweepGC
> -XX:ErrorFile={{hbase_log_dir}}/hs_err_pid%p.log
> -Djava.io.tmpdir={{hbase_tmp_dir}}"\nexport SERVER_GC_OPTS="-verbose:gc
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps
> -Xloggc:{{hbase_log_dir}}/gc.log-`date +\'%Y%m%d%H%M\'`"\n# Uncomment below
> to enable java garbage collection logging.\n# export HBASE_OPTS="$HBASE_OPTS
> -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps
> -Xloggc:$HBASE_HOME/logs/gc-hbase.log"\n\n# Uncomment and adjust to enable
> JMX exporting\n# See jmxremote.password and jmxremote.access in
> $JRE_HOME/lib/management to configure remote password access.\n# More details
> at:
> http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html\n#\n#
> export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false
> -Dcom.sun.management.jmxremote.authenticate=false"\n\n{% if java_version < 8
> %}\nexport HBASE_MASTER_OPTS=" -XX:PermSize=64m
> -XX:MaxPermSize={{hbase_master_maxperm_size}} -Xms{{hbase_heapsize}}
> -Xmx{{hbase_heapsize}} -Xmn{{hbase_master_xmn_size}}
> -XX:CMSInitiatingOccupancyFraction=70
> -XX:+UseCMSInitiatingOccupancyOnly"\nexport
> HBASE_REGIONSERVER_OPTS="-XX:MaxPermSize=128m -Xmn{{regionserver_xmn_size}}
> -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly
> -Xms{{regionserver_heapsize}} -Xmx{{regionserver_heapsize}}"\n{% else
> %}\nexport HBASE_MASTER_OPTS=" -Xms{{hbase_heapsize}} -Xmx{{hbase_heapsize}}
> -Xmn{{hbase_master_xmn_size}} -XX:CMSInitiatingOccupancyFraction=70
> -XX:+UseCMSInitiatingOccupancyOnly"\nexport HBASE_REGIONSERVER_OPTS="
> -Xmn{{regionserver_xmn_size}} -XX:CMSInitiatingOccupancyFraction=70
> -XX:+UseCMSInitiatingOccupancyOnly -Xms{{regionserver_heapsize}}
> -Xmx{{regionserver_heapsize}}"\n{% endif %}\n\n\n# export
> HBASE_THRIFT_OPTS="$HBASE_JMX_BASE
> -Dcom.sun.management.jmxremote.port=10103"\n# export
> HBASE_ZOOKEEPER_OPTS="$HBASE_JMX_BASE
> -Dcom.sun.management.jmxremote.port=10104"\n\n# File naming hosts on which
> HRegionServers will run. $HBASE_HOME/conf/regionservers by default.\nexport
> HBASE_REGIONSERVERS=${HBASE_CONF_DIR}/regionservers\n\n# Extra ssh options.
> Empty by default.\n# export HBASE_SSH_OPTS="-o ConnectTimeout=1 -o
> SendEnv=HBASE_CONF_DIR"\n\n# Where log files are stored. $HBASE_HOME/logs by
> default.\nexport HBASE_LOG_DIR={{hbase_log_dir}}\n\n# A string representing
> this instance of hbase. $USER by default.\n# export
> HBASE_IDENT_STRING=$USER\n\n# The scheduling priority for daemon processes.
> See \'man nice\'.\n# export HBASE_NICENESS=10\n\n# The directory where pid
> files are stored. /tmp by default.\nexport
> HBASE_PID_DIR={{hbase_pid_dir}}\n\n# Seconds to sleep between slave commands.
> Unset by default. This\n# can be useful in large clusters, where, e.g., slave
> rsyncs can\n# otherwise arrive faster than the master can service them.\n#
> export HBASE_SLAVE_SLEEP=0.1\n\n# Tell HBase whether it should manage it\'s
> own instance of Zookeeper or not.\nexport HBASE_MANAGES_ZK=false\n\n{% if
> security_enabled %}\nexport HBASE_OPTS="$HBASE_OPTS
> -Djava.security.auth.login.config={{client_jaas_config_file}}"\nexport
> HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS
> -Djava.security.auth.login.config={{master_jaas_config_file}}"\nexport
> HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS
> -Djava.security.auth.login.config={{regionserver_jaas_config_file}}"\nexport
> HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS
> -Djava.security.auth.login.config={{ams_zookeeper_jaas_config_file}}"\n{%
> endif %}\n\n# use embedded native
> libs\n_HADOOP_NATIVE_LIB="/usr/lib/ams-hbase/lib/hadoop-native/"\nexport
> HBASE_OPTS="$HBASE_OPTS -Djava.library.path=${_HADOOP_NATIVE_LIB}"\n\n# Unset
> HADOOP_HOME to avoid importing HADOOP installed cluster related configs like:
> /usr/hdp/2.2.0.0-2041/hadoop/conf/\nexport
> HADOOP_HOME={{ams_hbase_home_dir}}\n\n# Explicitly Setting HBASE_HOME for AMS
> HBase so that there is no conflict\nexport
> HBASE_HOME={{ams_hbase_home_dir}}', 'hbase_regionserver_shutdown_timeout':
> '30', 'hbase_regionserver_heapsize': '768', 'hbase_log_dir':
> '/var/log/ambari-metrics-collector', 'hbase_master_xmn_size': '192'}
> Recommendations: {'hbase_master_heapsize': '640',
> 'hbase_regionserver_heapsize': '768', 'hbase_log_dir':
> '/var/log/ambari-metrics-collector', 'hbase_master_xmn_size': '192'}
> Error occured in stack advisor.
> Error details: global name 'getMemorySizeRequired' is not defined
> {noformat}
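The traceback points to a plain Python name-resolution bug: `validateAmsHbaseEnvConfigurations` calls `getMemorySizeRequired` as a bare name, so the interpreter looks it up in the module's global scope, where it does not exist. Methods defined on the same class are not visible that way; they must be reached through `self`. A minimal sketch of the failure mode and the likely shape of the fix (the class below is a simplified stand-in for the stack advisor, not the real Ambari code, and the constant it returns is made up for illustration):

```python
class StackAdvisorSketch:
    """Simplified stand-in for an Ambari stack-advisor class."""

    def getMemorySizeRequired(self, hostComponents, configurations):
        # Stand-in for the real heap-size calculation; the value is arbitrary.
        return 1024

    def validate_broken(self, hostComponents, configurations):
        # Mirrors the failing line 1420: the bare name is resolved against the
        # module's global scope (not the class), so this raises NameError.
        return getMemorySizeRequired(hostComponents, configurations)

    def validate_fixed(self, hostComponents, configurations):
        # Qualifying the call with `self` resolves the method on the instance.
        return self.getMemorySizeRequired(hostComponents, configurations)


advisor = StackAdvisorSketch()

try:
    advisor.validate_broken([], {})
except NameError as exc:
    print("broken:", exc)

print("fixed:", advisor.validate_fixed([], {}))
```

The same error would also be fixed by importing or defining `getMemorySizeRequired` at module level; which variant the attached patch uses is not shown in this comment.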
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)