Siddharth Wagle created AMBARI-15342:
----------------------------------------
Summary: AMS Grafana start failed with permission denied error on
changing user
Key: AMBARI-15342
URL: https://issues.apache.org/jira/browse/AMBARI-15342
Project: Ambari
Issue Type: Bug
Components: ambari-metrics
Affects Versions: 2.2.2
Reporter: Siddharth Wagle
Assignee: Siddharth Wagle
Fix For: 2.2.2
The Grafana service fails to start when started from the Ambari UI [from the
Ambari Metrics service page, select "Start" from the drop-down].
Failure message from the logs: */var/log/ambari-metrics-grafana/grafana.out:
Permission denied\nFAILED"*
Complete logs:
{noformat}
{
"href" :
"http://172.22.110.160:8080/api/v1/clusters/cl1/requests/4/tasks/181",
"Tasks" : {
"attempt_cnt" : 1,
"cluster_name" : "cl1",
"command" : "START",
"command_detail" : "METRICS_GRAFANA START",
"end_time" : 1457416332931,
"error_log" : "/var/lib/ambari-agent/data/errors-181.txt",
"exit_code" : 1,
"host_name" : "os-r6-dggzcu-ambari-rare-19-5.novalocal",
"id" : 181,
"output_log" : "/var/lib/ambari-agent/data/output-181.txt",
"request_id" : 4,
"role" : "METRICS_GRAFANA",
"stage_id" : 4,
"start_time" : 1457416296331,
"status" : "FAILED",
"stderr" : "Traceback (most recent call last):\n File
\"/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_grafana.py\",
line 70, in <module>\n AmsGrafana().execute()\n File
\"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py\",
line 219, in execute\n method(env)\n File
\"/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_grafana.py\",
line 48, in start\n user=params.ams_user\n File
\"/usr/lib/python2.6/site-packages/resource_management/core/base.py\", line
154, in __init__\n self.env.run()\n File
\"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\",
line 158, in run\n self.run_action(resource, action)\n File
\"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\",
line 121, in run_action\n provider_action()\n File
\"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py\",
line 238, in action_run\n tries=self.resource.tries,
try_sleep=self.resource.try_sleep)\n File
\"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line
70, in inner\n result = function(command, **kwargs)\n File
\"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line
92, in checked_call\n tries=tries, try_sleep=try_sleep)\n File
\"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line
140, in _call_wrapper\n result = _call(command, **kwargs_copy)\n File
\"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line
291, in _call\n raise
Fail(err_msg)\nresource_management.core.exceptions.Fail: Execution of
'/usr/sbin/ambari-metrics-grafana start' returned 1. ######## Hortonworks
#############\nThis is MOTD message, added for testing in qe infra\nStarting
Ambari Metrics Grafana: .... /usr/sbin/ambari-metrics-grafana: line 114:
/var/log/ambari-metrics-grafana/grafana.out: Permission denied\nFAILED",
"stdout" : "2016-03-08 05:51:46,460 - The hadoop conf dir
/usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for
version 2.4.0.0-169\n2016-03-08 05:51:46,460 - Checking if need to create
versioned conf dir /etc/hadoop/2.4.0.0-169/0\n2016-03-08 05:51:46,467 -
call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169
--conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr':
-1}\n2016-03-08 05:51:46,602 - call returned (1, '/etc/hadoop/2.4.0.0-169/0
exist already', '')\n2016-03-08 05:51:46,602 - checked_call['conf-select
set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0']
{'logoutput': False, 'sudo': True, 'quiet': False}\n2016-03-08 05:51:46,788 -
checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf ->
/etc/hadoop/2.4.0.0-169/0')\n2016-03-08 05:51:46,788 - Ensuring that hadoop has
the correct symlink structure\n2016-03-08 05:51:46,789 - Using hadoop conf dir:
/usr/hdp/current/hadoop-client/conf\n2016-03-08 05:51:47,321 - The hadoop conf
dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for
version 2.4.0.0-169\n2016-03-08 05:51:47,322 - Checking if need to create
versioned conf dir /etc/hadoop/2.4.0.0-169/0\n2016-03-08 05:51:47,322 -
call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169
--conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr':
-1}\n2016-03-08 05:51:47,386 - call returned (1, '/etc/hadoop/2.4.0.0-169/0
exist already', '')\n2016-03-08 05:51:47,387 - checked_call['conf-select
set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0']
{'logoutput': False, 'sudo': True, 'quiet': False}\n2016-03-08 05:51:47,439 -
checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf ->
/etc/hadoop/2.4.0.0-169/0')\n2016-03-08 05:51:47,440 - Ensuring that hadoop has
the correct symlink structure\n2016-03-08 05:51:47,440 - Using hadoop conf dir:
/usr/hdp/current/hadoop-client/conf\n2016-03-08 05:51:47,442 -
Group['cstm-knox-group'] {}\n2016-03-08 05:51:47,448 - Group['hadoop']
{}\n2016-03-08 05:51:47,449 - Group['users'] {}\n2016-03-08 05:51:47,449 -
Group['cstm-spark'] {}\n2016-03-08 05:51:47,449 - User['atlas'] {'gid':
'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}\n2016-03-08
05:51:47,451 - User['cstm-hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups':
True, 'groups': ['hadoop']}\n2016-03-08 05:51:47,453 - User['cstm-sqoop']
{'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups':
['hadoop']}\n2016-03-08 05:51:47,454 - User['cstm-ams'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['hadoop']}\n2016-03-08 05:51:47,455 -
User['cstm-tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups':
['users']}\n2016-03-08 05:51:47,456 - User['cstm-storm'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['hadoop']}\n2016-03-08 05:51:47,457 -
User['cstm-knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups':
['hadoop']}\n2016-03-08 05:51:47,458 - User['cstm-flume'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['hadoop']}\n2016-03-08 05:51:47,459 -
User['cstm-kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups':
['hadoop']}\n2016-03-08 05:51:47,460 - User['cstm-hcat'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['hadoop']}\n2016-03-08 05:51:47,462 -
User['cstm-mahout'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups':
['hadoop']}\n2016-03-08 05:51:47,463 - User['cstm-hbase'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['hadoop']}\n2016-03-08 05:51:47,464 -
User['cstm-hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups':
['hadoop']}\n2016-03-08 05:51:47,465 - User['cstm-falcon'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['users']}\n2016-03-08 05:51:47,466 -
User['cstm-accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True,
'groups': ['hadoop']}\n2016-03-08 05:51:47,467 - User['ambari-qa'] {'gid':
'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}\n2016-03-08
05:51:47,468 - User['cstm-zookeeper'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['hadoop']}\n2016-03-08 05:51:47,470 -
User['cstm-oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups':
['users']}\n2016-03-08 05:51:47,471 - User['yarn'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['hadoop']}\n2016-03-08 05:51:47,472 -
User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups':
['hadoop']}\n2016-03-08 05:51:47,473 - User['cstm-spark'] {'gid': 'hadoop',
'fetch_nonlocal_groups': True, 'groups': ['hadoop']}\n2016-03-08 05:51:47,474 -
User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups':
['hadoop']}\n2016-03-08 05:51:47,475 -
File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content':
StaticFile('changeToSecureUid.sh'), 'mode': 0555}\n2016-03-08 05:51:47,690 -
Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa
/tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa']
{'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}\n2016-03-08
05:51:47,715 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh
ambari-qa
/tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa']
due to not_if\n2016-03-08 05:51:47,716 - Directory['/tmp/hbase-hbase']
{'owner': 'cstm-hbase', 'recursive': True, 'mode': 0775, 'cd_access':
'a'}\n2016-03-08 05:51:48,114 - File['/var/lib/ambari-agent/tmp/changeUid.sh']
{'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}\n2016-03-08
05:51:48,416 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh cstm-hbase
/home/cstm-hbase,/tmp/cstm-hbase,/usr/bin/cstm-hbase,/var/log/cstm-hbase,/tmp/hbase-hbase']
{'not_if': '(test $(id -u cstm-hbase) -gt 1000) || (false)'}\n2016-03-08
05:51:48,427 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh
cstm-hbase
/home/cstm-hbase,/tmp/cstm-hbase,/usr/bin/cstm-hbase,/var/log/cstm-hbase,/tmp/hbase-hbase']
due to not_if\n2016-03-08 05:51:48,428 - Group['cstm-hdfs'] {}\n2016-03-08
05:51:48,428 - User['cstm-hdfs'] {'fetch_nonlocal_groups': True, 'groups':
['hadoop', 'cstm-hdfs']}\n2016-03-08 05:51:48,429 - Directory['/etc/hadoop']
{'mode': 0755}\n2016-03-08 05:51:48,577 -
File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content':
InlineTemplate(...), 'owner': 'cstm-hdfs', 'group': 'hadoop'}\n2016-03-08
05:51:48,683 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir']
{'owner': 'cstm-hdfs', 'group': 'hadoop', 'mode': 0777}\n2016-03-08
05:51:48,895 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce )
|| (which getenforce && getenforce | grep -q Disabled)', 'sudo': True,
'only_if': 'test -f /selinux/enforce'}\n2016-03-08 05:51:49,041 -
Directory['/grid/0/log/hdfs'] {'owner': 'root', 'mode': 0775, 'group':
'hadoop', 'recursive': True, 'cd_access': 'a'}\n2016-03-08 05:51:49,639 -
Directory['/grid/0/pid/hdfs'] {'owner': 'root', 'group': 'root', 'recursive':
True, 'cd_access': 'a'}\n2016-03-08 05:51:49,974 -
Directory['/tmp/hadoop-cstm-hdfs'] {'owner': 'cstm-hdfs', 'recursive': True,
'cd_access': 'a'}\n2016-03-08 05:51:50,119 -
File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties']
{'content': Template('commons-logging.properties.j2'), 'owner':
'cstm-hdfs'}\n2016-03-08 05:51:50,225 -
File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content':
Template('health_check.j2'), 'owner': 'cstm-hdfs'}\n2016-03-08 05:51:50,334 -
File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ...,
'owner': 'cstm-hdfs', 'group': 'hadoop', 'mode': 0644}\n2016-03-08 05:51:50,504
- File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties']
{'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'cstm-hdfs',
'group': 'hadoop'}\n2016-03-08 05:51:50,635 -
File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content':
StaticFile('task-log4j.properties'), 'mode': 0755}\n2016-03-08 05:51:50,766 -
File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner':
'cstm-hdfs', 'group': 'hadoop'}\n2016-03-08 05:51:50,874 -
File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'cstm-hdfs',
'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d
/etc/hadoop/conf', 'group': 'hadoop'}\n2016-03-08 05:51:51,039 -
File['/etc/hadoop/conf/topology_script.py'] {'content':
StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf',
'mode': 0755}\n2016-03-08 05:51:52,004 - The hadoop conf dir
/usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for
version 2.4.0.0-169\n2016-03-08 05:51:52,004 - Checking if need to create
versioned conf dir /etc/hadoop/2.4.0.0-169/0\n2016-03-08 05:51:52,004 -
call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169
--conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr':
-1}\n2016-03-08 05:51:52,056 - call returned (1, '/etc/hadoop/2.4.0.0-169/0
exist already', '')\n2016-03-08 05:51:52,057 - checked_call['conf-select
set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0']
{'logoutput': False, 'sudo': True, 'quiet': False}\n2016-03-08 05:51:52,112 -
checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf ->
/etc/hadoop/2.4.0.0-169/0')\n2016-03-08 05:51:52,112 - Ensuring that hadoop has
the correct symlink structure\n2016-03-08 05:51:52,112 - Using hadoop conf dir:
/usr/hdp/current/hadoop-client/conf\n2016-03-08 05:51:52,138 -
Directory['/etc/ambari-metrics-grafana/conf'] {'owner': 'cstm-ams', 'group':
'hadoop', 'recursive': True, 'mode': 0755}\n2016-03-08 05:51:52,206 - Changing
owner for /etc/ambari-metrics-grafana/conf from 2554 to cstm-ams\n2016-03-08
05:51:52,206 - Changing group for /etc/ambari-metrics-grafana/conf from 2551 to
hadoop\n2016-03-08 05:51:52,265 - Directory['/var/log/ambari-metrics-grafana']
{'owner': 'cstm-ams', 'group': 'hadoop', 'recursive': True, 'mode':
0755}\n2016-03-08 05:51:52,322 - Changing owner for
/var/log/ambari-metrics-grafana from 0 to cstm-ams\n2016-03-08 05:51:52,322 -
Changing group for /var/log/ambari-metrics-grafana from 0 to hadoop\n2016-03-08
05:51:52,368 - Directory['/var/lib/ambari-metrics-grafana'] {'owner':
'cstm-ams', 'group': 'hadoop', 'recursive': True, 'mode': 0755}\n2016-03-08
05:51:52,426 - Changing owner for /var/lib/ambari-metrics-grafana from 0 to
cstm-ams\n2016-03-08 05:51:52,426 - Changing group for
/var/lib/ambari-metrics-grafana from 0 to hadoop\n2016-03-08 05:51:52,477 -
Directory['/var/run/ambari-metrics-grafana'] {'owner': 'cstm-ams', 'group':
'hadoop', 'recursive': True, 'mode': 0755}\n2016-03-08 05:51:52,609 - Changing
owner for /var/run/ambari-metrics-grafana from 0 to cstm-ams\n2016-03-08
05:51:52,610 - Changing group for /var/run/ambari-metrics-grafana from 0 to
hadoop\n2016-03-08 05:51:52,710 -
File['/etc/ambari-metrics-grafana/conf/ams-grafana-env.sh'] {'content':
InlineTemplate(...), 'owner': 'cstm-ams', 'group': 'hadoop'}\n2016-03-08
05:51:52,916 - Writing
File['/etc/ambari-metrics-grafana/conf/ams-grafana-env.sh'] because contents
don't match\n2016-03-08 05:51:53,023 - Changing owner for
/etc/ambari-metrics-grafana/conf/ams-grafana-env.sh from 0 to
cstm-ams\n2016-03-08 05:51:53,023 - Changing group for
/etc/ambari-metrics-grafana/conf/ams-grafana-env.sh from 0 to
hadoop\n2016-03-08 05:51:53,051 -
File['/etc/ambari-metrics-grafana/conf/ams-grafana.ini'] {'content':
InlineTemplate(...), 'owner': 'cstm-ams', 'group': 'hadoop'}\n2016-03-08
05:51:53,225 - Writing File['/etc/ambari-metrics-grafana/conf/ams-grafana.ini']
because contents don't match\n2016-03-08 05:51:53,285 - Changing owner for
/etc/ambari-metrics-grafana/conf/ams-grafana.ini from 0 to cstm-ams\n2016-03-08
05:51:53,285 - Changing group for
/etc/ambari-metrics-grafana/conf/ams-grafana.ini from 0 to hadoop\n2016-03-08
05:51:53,306 - Execute['/usr/sbin/ambari-metrics-grafana stop'] {'user':
'cstm-ams'}\n2016-03-08 05:51:58,474 -
Execute['/usr/sbin/ambari-metrics-grafana start'] {'user': 'cstm-ams'}",
"structured_out" : { }
}
}
{noformat}
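From the log above, the start sequence chowns /var/log/ambari-metrics-grafana to cstm-ams, but an existing grafana.out inside it (created on a previous run under a different user) keeps its old ownership, so the init script's redirect at line 114 fails with "Permission denied". A minimal sketch of a diagnostic for this situation is below; `can_user_write` is a hypothetical helper (not part of Ambari or resource_management) that only inspects the owner/group/other mode bits and ignores ACLs:

```python
import os
import pwd
import stat

def can_user_write(path, user):
    """Return True if `user` may write `path` based on its mode bits.

    Simplified check: compares the file's uid/gid against the user's
    uid and primary gid only; ACLs and supplementary groups are ignored.
    """
    st = os.stat(path)
    pw = pwd.getpwnam(user)
    if st.st_uid == pw.pw_uid:
        return bool(st.st_mode & stat.S_IWUSR)
    if st.st_gid == pw.pw_gid:
        return bool(st.st_mode & stat.S_IWGRP)
    return bool(st.st_mode & stat.S_IWOTH)
```

On an affected host, `can_user_write('/var/log/ambari-metrics-grafana/grafana.out', 'cstm-ams')` would return False, which explains the failure even though the parent directory was chowned to cstm-ams.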
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)