[
https://issues.apache.org/jira/browse/AMBARI-15238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15173103#comment-15173103
]
Siddharth Wagle edited comment on AMBARI-15238 at 3/1/16 2:08 AM:
------------------------------------------------------------------
Pushed to 2.2 and trunk after fixing a typo.
was (Author: swagle):
Pushed to 2.2 and trunk.
> Deploying AMS datasource and default dashboards sometimes fails during cluster install
> --------------------------------------------------------------------------------------
>
> Key: AMBARI-15238
> URL: https://issues.apache.org/jira/browse/AMBARI-15238
> Project: Ambari
> Issue Type: Bug
> Components: ambari-metrics
> Affects Versions: 2.2.2
> Reporter: Siddharth Wagle
> Assignee: Siddharth Wagle
> Fix For: 2.2.2
>
> Attachments: AMBARI-15238.patch
>
>
> Exception trace:
> {code}
> stderr:
> Traceback (most recent call last):
>   File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_grafana.py", line 70, in <module>
>     AmsGrafana().execute()
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
>     method(env)
>   File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_grafana.py", line 51, in start
>     create_ams_datasource()
>   File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_grafana_util.py", line 143, in create_ams_datasource
>     response = perform_grafana_get_call(GRAFANA_DATASOURCE_URL, server)
>   File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_grafana_util.py", line 50, in perform_grafana_get_call
>     conn.request("GET", url)
>   File "/usr/lib64/python2.6/httplib.py", line 936, in request
>     self._send_request(method, url, body, headers)
>   File "/usr/lib64/python2.6/httplib.py", line 973, in _send_request
>     self.endheaders()
>   File "/usr/lib64/python2.6/httplib.py", line 930, in endheaders
>     self._send_output()
>   File "/usr/lib64/python2.6/httplib.py", line 802, in _send_output
>     self.send(msg)
>   File "/usr/lib64/python2.6/httplib.py", line 761, in send
>     self.connect()
>   File "/usr/lib64/python2.6/httplib.py", line 742, in connect
>     self.timeout)
>   File "/usr/lib64/python2.6/socket.py", line 567, in create_connection
>     raise error, msg
> socket.error: [Errno 111] Connection refused
> stdout:
> 2016-02-29 18:49:46,185 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
> 2016-02-29 18:49:46,185 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
> 2016-02-29 18:49:46,186 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
> 2016-02-29 18:49:46,239 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
> 2016-02-29 18:49:46,240 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
> 2016-02-29 18:49:46,296 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
> 2016-02-29 18:49:46,296 - Ensuring that hadoop has the correct symlink structure
> 2016-02-29 18:49:46,296 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
> 2016-02-29 18:49:46,488 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
> 2016-02-29 18:49:46,488 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
> 2016-02-29 18:49:46,488 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
> 2016-02-29 18:49:46,514 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
> 2016-02-29 18:49:46,515 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
> 2016-02-29 18:49:46,552 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
> 2016-02-29 18:49:46,553 - Ensuring that hadoop has the correct symlink structure
> 2016-02-29 18:49:46,553 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
> 2016-02-29 18:49:46,554 - Group['hadoop'] {}
> 2016-02-29 18:49:46,556 - Group['users'] {}
> 2016-02-29 18:49:46,556 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2016-02-29 18:49:46,557 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2016-02-29 18:49:46,558 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
> 2016-02-29 18:49:46,558 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
> 2016-02-29 18:49:46,559 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2016-02-29 18:49:46,560 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2016-02-29 18:49:46,560 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2016-02-29 18:49:46,561 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
> 2016-02-29 18:49:46,561 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
> 2016-02-29 18:49:46,563 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
> 2016-02-29 18:49:46,585 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
> 2016-02-29 18:49:46,586 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
> 2016-02-29 18:49:46,587 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
> 2016-02-29 18:49:46,588 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
> 2016-02-29 18:49:46,593 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
> 2016-02-29 18:49:46,593 - Group['hdfs'] {}
> 2016-02-29 18:49:46,593 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
> 2016-02-29 18:49:46,594 - Directory['/etc/hadoop'] {'mode': 0755}
> 2016-02-29 18:49:46,612 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
> 2016-02-29 18:49:46,613 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
> 2016-02-29 18:49:46,630 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
> 2016-02-29 18:49:46,645 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
> 2016-02-29 18:49:46,646 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
> 2016-02-29 18:49:46,647 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
> 2016-02-29 18:49:46,652 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
> 2016-02-29 18:49:46,654 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
> 2016-02-29 18:49:46,655 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
> 2016-02-29 18:49:46,669 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
> 2016-02-29 18:49:46,670 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
> 2016-02-29 18:49:46,671 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
> 2016-02-29 18:49:46,676 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
> 2016-02-29 18:49:46,680 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
> 2016-02-29 18:49:46,859 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
> 2016-02-29 18:49:46,859 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
> 2016-02-29 18:49:46,860 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
> 2016-02-29 18:49:46,894 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
> 2016-02-29 18:49:46,894 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
> 2016-02-29 18:49:46,921 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
> 2016-02-29 18:49:46,921 - Ensuring that hadoop has the correct symlink structure
> 2016-02-29 18:49:46,921 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
> 2016-02-29 18:49:46,962 - Directory['/etc/ambari-metrics-grafana/conf'] {'owner': 'ams', 'group': 'hadoop', 'recursive': True, 'mode': 0755}
> 2016-02-29 18:49:46,965 - Changing owner for /etc/ambari-metrics-grafana/conf from 0 to ams
> 2016-02-29 18:49:46,965 - Changing group for /etc/ambari-metrics-grafana/conf from 0 to hadoop
> 2016-02-29 18:49:46,966 - Directory['/var/log/ambari-metrics-grafana'] {'owner': 'ams', 'group': 'hadoop', 'recursive': True, 'mode': 0755}
> 2016-02-29 18:49:46,966 - Changing owner for /var/log/ambari-metrics-grafana from 0 to ams
> 2016-02-29 18:49:46,966 - Changing group for /var/log/ambari-metrics-grafana from 0 to hadoop
> 2016-02-29 18:49:46,967 - Directory['/var/lib/ambari-metrics-grafana'] {'owner': 'ams', 'group': 'hadoop', 'recursive': True, 'mode': 0755}
> 2016-02-29 18:49:46,967 - Changing owner for /var/lib/ambari-metrics-grafana from 0 to ams
> 2016-02-29 18:49:46,967 - Changing group for /var/lib/ambari-metrics-grafana from 0 to hadoop
> 2016-02-29 18:49:46,967 - Directory['/var/run/ambari-metrics-grafana'] {'owner': 'ams', 'group': 'hadoop', 'recursive': True, 'mode': 0755}
> 2016-02-29 18:49:46,967 - Changing owner for /var/run/ambari-metrics-grafana from 0 to ams
> 2016-02-29 18:49:46,968 - Changing group for /var/run/ambari-metrics-grafana from 0 to hadoop
> 2016-02-29 18:49:46,972 - File['/etc/ambari-metrics-grafana/conf/ams-grafana-env.sh'] {'content': InlineTemplate(...), 'owner': 'ams', 'group': 'hadoop'}
> 2016-02-29 18:49:46,973 - Writing File['/etc/ambari-metrics-grafana/conf/ams-grafana-env.sh'] because contents don't match
> 2016-02-29 18:49:46,973 - Changing owner for /etc/ambari-metrics-grafana/conf/ams-grafana-env.sh from 0 to ams
> 2016-02-29 18:49:46,974 - Changing group for /etc/ambari-metrics-grafana/conf/ams-grafana-env.sh from 0 to hadoop
> 2016-02-29 18:49:46,979 - File['/etc/ambari-metrics-grafana/conf/ams-grafana.ini'] {'content': InlineTemplate(...), 'owner': 'ams', 'group': 'hadoop'}
> 2016-02-29 18:49:46,980 - Writing File['/etc/ambari-metrics-grafana/conf/ams-grafana.ini'] because contents don't match
> 2016-02-29 18:49:46,980 - Changing owner for /etc/ambari-metrics-grafana/conf/ams-grafana.ini from 0 to ams
> 2016-02-29 18:49:46,980 - Changing group for /etc/ambari-metrics-grafana/conf/ams-grafana.ini from 0 to hadoop
> 2016-02-29 18:49:46,981 - Execute['/usr/sbin/ambari-metrics-grafana stop'] {'user': 'ams'}
> 2016-02-29 18:49:47,020 - Execute['/usr/sbin/ambari-metrics-grafana start'] {'user': 'ams'}
> 2016-02-29 18:49:48,215 - Checking if AMS Grafana datasource already exists
> 2016-02-29 18:49:48,215 - Connecting (GET) to ygraf-1.c.pramod-thangali.internal:3000/api/datasources
> {code}
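The log shows the root cause: the script issues its GET to `/api/datasources` about one second after `ambari-metrics-grafana start` returns, before the Grafana daemon is actually listening on port 3000, so the first connect fails with `[Errno 111] Connection refused`. The general remedy for this class of race is to poll the port with a bounded retry loop before (or around) the first HTTP call. The sketch below illustrates that approach only; the helper name and timing values are assumptions for illustration, not the actual code in the AMBARI-15238 patch.

```python
import socket
import time

def wait_until_reachable(host, port, timeout=60, interval=2):
    """Poll host:port with TCP connects until one succeeds or `timeout`
    seconds elapse. Returns True on success, False on timeout. A failed
    connect (e.g. ECONNREFUSED while the daemon is still starting) is
    swallowed and retried after `interval` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:  # covers [Errno 111] Connection refused
            time.sleep(interval)
    return False
```

A caller would gate the datasource request on this check, e.g. `if wait_until_reachable(server, 3000): ...` before issuing the GET, and fail with a clear error message only after the full timeout expires.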
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)