I tried to install GANGLIA_SERVER using a blueprint, like this:

"host_groups" : [
    {
      "name" : "master_host_group",
      "components" : [
        {
          "name" : "NAMENODE"
        },
        {
          "name" : "SECONDARY_NAMENODE"
        },
        {
          "name" : "APP_TIMELINE_SERVER"
        },
        {
          "name" : "RESOURCEMANAGER"
        },
        {
          "name" : "HISTORYSERVER"
        },
        {
          "name" : "ZOOKEEPER_SERVER"
        },
        {
          "name" : "SPARK2_JOBHISTORYSERVER"
        },
        {
          "name" : "OOZIE_SERVER"
        },
        {
          "name" : "ZEPPELIN_MASTER"
        },
        {
          "name" : "GANGLIA_SERVER"
        }
      ],
      "cardinality" : "1"
    },
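
To rule out a malformed blueprint, I sanity-checked that the host_group fragment is well-formed JSON (a throwaway sketch, with the component list shortened):

```python
import json

# Shortened copy of the host_group above; json.loads completing without
# an exception confirms the fragment itself is well-formed.
host_group = json.loads("""
{
  "name": "master_host_group",
  "components": [
    {"name": "NAMENODE"},
    {"name": "ZOOKEEPER_SERVER"},
    {"name": "GANGLIA_SERVER"}
  ],
  "cardinality": "1"
}
""")
print([c["name"] for c in host_group["components"]])
# -> ['NAMENODE', 'ZOOKEEPER_SERVER', 'GANGLIA_SERVER']
```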

However, it failed due to:

 resource_management.core.exceptions.Fail: User 'apache' doesn't exist.

The complete log file is attached. I don't understand why Ambari cannot
create whatever users it needs to get the work done. Without GANGLIA_SERVER,
the same blueprint successfully creates a Hadoop cluster.

I am using Hortonworks Ambari 2.6.4.

I'd appreciate any clue. Thanks.
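
For what it's worth, the failing resource in the log is Directory['/var/lib/ganglia/dwoo'] with owner 'apache'. On CentOS 7 that account is normally created by the httpd package (Ganglia's web frontend runs under httpd), so my working theory is to pre-create it on the node and retry. A quick existence check; the yum/useradd commands in the comments are my own assumption, not anything from the Ambari docs:

```python
import pwd

# Check whether the local 'apache' account exists; this is the same lookup
# Ambari's resource_management does before chown'ing the directory.
# If it is missing, either `yum install -y httpd` (lets the package create
# the user) or `useradd -r -s /sbin/nologin apache` should fix it; both
# commands are my assumption, not an Ambari-documented step.
try:
    pwd.getpwnam("apache")
    print("apache user exists")
except KeyError:
    print("apache user missing")
```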
Host: namenode.subnet1.hadoop.oraclevcn.com

Components (host needs 1 component restarted):
- App Timeline Server / YARN: Started
- Ganglia Server / Ganglia: Install Failed
- History Server / MapReduce2: Install Failed
- NameNode / HDFS: Started
- Oozie Server / Oozie: Install Failed
- ResourceManager / YARN: Install Failed
- SNameNode / HDFS: Stopped
- Spark2 History S... / Spark2: Install Failed
- Zeppelin Notebook / Zeppelin No...: Install Failed
- ZooKeeper Server / ZooKeeper: Install Failed
- Ganglia Monitor / Ganglia: Install Failed
- Clients: HDFS Client, MapReduce2 Client, YARN Client
Summary:
- Hostname: namenode.subnet1.hadoop.oraclevcn.com
- IP Address: 10.0.1.183
- Rack: /default-rack
- OS: centos7 (x86_64)
- Cores (CPU): 4 (4)
- Disk: Data Unavailable
- Memory: 13.54GB
- Load Avg:
- Heartbeat: less than a minute ago
- Current Version:
- Unlimited JCE installed: false

Host Metrics: No Data Available (CPU Usage, Disk Usage, Load, Memory Usage, Network Usage, Processes)
Ganglia Server Install (task details for namenode.subnet1.hadoop.oraclevcn.com)

stderr:   /var/lib/ambari-agent/data/errors-75.txt

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/GANGLIA/3.5.0/package/scripts/ganglia_server.py", line 125, in <module>
    GangliaServer().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 375, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/GANGLIA/3.5.0/package/scripts/ganglia_server.py", line 36, in install
    self.configure(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 120, in locking_configure
    original_configure(obj, *args, **kw)
  File "/var/lib/ambari-agent/cache/common-services/GANGLIA/3.5.0/package/scripts/ganglia_server.py", line 73, in configure
    change_permission()
  File "/var/lib/ambari-agent/cache/common-services/GANGLIA/3.5.0/package/scripts/ganglia_server.py", line 92, in change_permission
    recursive_ownership = True,
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 199, in action_create
    recursion_follow_links=self.resource.recursion_follow_links, safemode_folders=self.resource.safemode_folders)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 53, in _ensure_metadata
    raise Fail("User '{0}' doesn't exist".format(user))
resource_management.core.exceptions.Fail: User 'apache' doesn't exist

stdout:   /var/lib/ambari-agent/data/output-75.txt

2018-02-19 01:19:32,885 - Stack Feature Version Info: Cluster Stack=2.6, 
Command Stack=None, Command Version=None -> 2.6
2018-02-19 01:19:32,890 - Using hadoop conf dir: 
/usr/hdp/current/hadoop-client/conf
2018-02-19 01:19:32,891 - Group['livy'] {}
2018-02-19 01:19:32,892 - Group['spark'] {}
2018-02-19 01:19:32,893 - Group['hdfs'] {}
2018-02-19 01:19:32,893 - Group['zeppelin'] {}
2018-02-19 01:19:32,893 - Group['hadoop'] {}
2018-02-19 01:19:32,893 - Group['nobody'] {}
2018-02-19 01:19:32,893 - Group['users'] {}
2018-02-19 01:19:32,894 - User['livy'] {'gid': 'hadoop', 
'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-02-19 01:19:32,895 - User['zookeeper'] {'gid': 'hadoop', 
'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-02-19 01:19:32,896 - User['spark'] {'gid': 'hadoop', 
'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-02-19 01:19:32,897 - User['oozie'] {'gid': 'hadoop', 
'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-02-19 01:19:32,897 - User['ambari-qa'] {'gid': 'hadoop', 
'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-02-19 01:19:32,898 - User['hdfs'] {'gid': 'hadoop', 
'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2018-02-19 01:19:32,899 - User['zeppelin'] {'gid': 'hadoop', 
'fetch_nonlocal_groups': True, 'groups': [u'zeppelin', u'hadoop'], 'uid': None}
2018-02-19 01:19:32,900 - User['nobody'] {'gid': 'hadoop', 
'fetch_nonlocal_groups': True, 'groups': [u'nobody'], 'uid': None}
2018-02-19 01:19:32,900 - User['yarn'] {'gid': 'hadoop', 
'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-02-19 01:19:32,901 - User['mapred'] {'gid': 'hadoop', 
'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-02-19 01:19:32,902 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] 
{'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-02-19 01:19:32,903 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh 
ambari-qa 
/tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa
 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-02-19 01:19:32,911 - Skipping 
Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa 
/tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa
 0'] due to not_if
2018-02-19 01:19:32,911 - Group['hdfs'] {}
2018-02-19 01:19:32,912 - User['hdfs'] {'fetch_nonlocal_groups': True, 
'groups': ['hdfs', u'hdfs']}
2018-02-19 01:19:32,912 - FS Type: 
2018-02-19 01:19:32,912 - Directory['/etc/hadoop'] {'mode': 0755}
2018-02-19 01:19:32,927 - 
File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': 
InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2018-02-19 01:19:32,927 - Writing 
File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] because contents 
don't match
2018-02-19 01:19:32,928 - 
Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 
'group': 'hadoop', 'mode': 01777}
2018-02-19 01:19:32,941 - Repository['HDP-2.6-repo-1'] {'append_to_file': 
False, 'base_url': 
'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.4.0', 
'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': 
'[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list 
%}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif 
%}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 
'mirror_list': None}
2018-02-19 01:19:32,948 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] 
{'content': 
'[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.4.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-02-19 01:19:32,949 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] 
because contents don't match
2018-02-19 01:19:32,949 - Repository with url 
http://public-repo-1.hortonworks.com/HDP-GPL/centos7/2.x/updates/2.6.4.0 is not 
created due to its tags: set([u'GPL'])
2018-02-19 01:19:32,949 - Repository['HDP-UTILS-1.1.0.22-repo-1'] 
{'append_to_file': True, 'base_url': 
'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos7', 
'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': 
'[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list 
%}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif 
%}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 
'mirror_list': None}
2018-02-19 01:19:32,953 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] 
{'content': 
'[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.4.0\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.22-repo-1]\nname=HDP-UTILS-1.1.0.22-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-02-19 01:19:32,953 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] 
because contents don't match
2018-02-19 01:19:32,953 - Package['unzip'] {'retry_on_repo_unavailability': 
False, 'retry_count': 5}
2018-02-19 01:19:33,040 - Skipping installation of existing package unzip
2018-02-19 01:19:33,041 - Package['curl'] {'retry_on_repo_unavailability': 
False, 'retry_count': 5}
2018-02-19 01:19:33,052 - Skipping installation of existing package curl
2018-02-19 01:19:33,052 - Package['hdp-select'] 
{'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-02-19 01:19:33,063 - Skipping installation of existing package hdp-select
2018-02-19 01:19:33,067 - The repository with version 2.6.4.0-91 for this 
command has been marked as resolved. It will be used to report the version of 
the component which was installed
2018-02-19 01:19:33,072 - Skipping stack-select on GANGLIA because it does not 
exist in the stack-select package structure.
2018-02-19 01:19:33,257 - Directory['/usr/libexec/hdp/ganglia'] {'owner': 
'root', 'group': 'root', 'create_parents': True}
2018-02-19 01:19:33,258 - File['/etc/init.d/hdp-gmetad'] {'content': 
StaticFile('gmetad.init'), 'mode': 0755}
2018-02-19 01:19:33,259 - File['/etc/init.d/hdp-gmond'] {'content': 
StaticFile('gmond.init'), 'mode': 0755}
2018-02-19 01:19:33,260 - File['/usr/libexec/hdp/ganglia/checkGmond.sh'] 
{'content': StaticFile('checkGmond.sh'), 'mode': 0755}
2018-02-19 01:19:33,261 - File['/usr/libexec/hdp/ganglia/checkRrdcached.sh'] 
{'content': StaticFile('checkRrdcached.sh'), 'mode': 0755}
2018-02-19 01:19:33,262 - File['/usr/libexec/hdp/ganglia/gmetadLib.sh'] 
{'content': StaticFile('gmetadLib.sh'), 'mode': 0755}
2018-02-19 01:19:33,263 - File['/usr/libexec/hdp/ganglia/gmondLib.sh'] 
{'content': StaticFile('gmondLib.sh'), 'mode': 0755}
2018-02-19 01:19:33,264 - File['/usr/libexec/hdp/ganglia/rrdcachedLib.sh'] 
{'content': StaticFile('rrdcachedLib.sh'), 'mode': 0755}
2018-02-19 01:19:33,264 - File['/usr/libexec/hdp/ganglia/setupGanglia.sh'] 
{'content': StaticFile('setupGanglia.sh'), 'mode': 0755}
2018-02-19 01:19:33,265 - File['/usr/libexec/hdp/ganglia/startGmetad.sh'] 
{'content': StaticFile('startGmetad.sh'), 'mode': 0755}
2018-02-19 01:19:33,266 - File['/usr/libexec/hdp/ganglia/startGmond.sh'] 
{'content': StaticFile('startGmond.sh'), 'mode': 0755}
2018-02-19 01:19:33,267 - File['/usr/libexec/hdp/ganglia/startRrdcached.sh'] 
{'content': StaticFile('startRrdcached.sh'), 'mode': 0755}
2018-02-19 01:19:33,267 - File['/usr/libexec/hdp/ganglia/stopGmetad.sh'] 
{'content': StaticFile('stopGmetad.sh'), 'mode': 0755}
2018-02-19 01:19:33,268 - File['/usr/libexec/hdp/ganglia/stopGmond.sh'] 
{'content': StaticFile('stopGmond.sh'), 'mode': 0755}
2018-02-19 01:19:33,269 - File['/usr/libexec/hdp/ganglia/stopRrdcached.sh'] 
{'content': StaticFile('stopRrdcached.sh'), 'mode': 0755}
2018-02-19 01:19:33,270 - File['/usr/libexec/hdp/ganglia/teardownGanglia.sh'] 
{'content': StaticFile('teardownGanglia.sh'), 'mode': 0755}
2018-02-19 01:19:33,271 - 
TemplateConfig['/usr/libexec/hdp/ganglia/gangliaClusters.conf'] {'owner': 
'root', 'template_tag': None, 'group': 'root', 'mode': 0755}
2018-02-19 01:19:33,275 - File['/usr/libexec/hdp/ganglia/gangliaClusters.conf'] 
{'content': Template('gangliaClusters.conf.j2'), 'owner': 'root', 'group': 
'root', 'mode': 0755}
2018-02-19 01:19:33,276 - 
TemplateConfig['/usr/libexec/hdp/ganglia/gangliaEnv.sh'] {'owner': 'root', 
'template_tag': None, 'group': 'root', 'mode': 0755}
2018-02-19 01:19:33,278 - File['/usr/libexec/hdp/ganglia/gangliaEnv.sh'] 
{'content': Template('gangliaEnv.sh.j2'), 'owner': 'root', 'group': 'root', 
'mode': 0755}
2018-02-19 01:19:33,279 - 
TemplateConfig['/usr/libexec/hdp/ganglia/gangliaLib.sh'] {'owner': 'root', 
'template_tag': None, 'group': 'root', 'mode': 0755}
2018-02-19 01:19:33,281 - File['/usr/libexec/hdp/ganglia/gangliaLib.sh'] 
{'content': Template('gangliaLib.sh.j2'), 'owner': 'root', 'group': 'root', 
'mode': 0755}
2018-02-19 01:19:33,282 - Execute['/usr/libexec/hdp/ganglia/setupGanglia.sh -t 
-o root -g hadoop'] {'path': ['/usr/libexec/hdp/ganglia', '/usr/sbin', 
'/sbin:/usr/local/bin', '/bin', '/usr/bin']}
2018-02-19 01:19:33,328 - Directory['/var/run/ganglia'] {'create_parents': 
True, 'mode': 0755}
2018-02-19 01:19:33,329 - Directory['/var/lib/ganglia/dwoo'] {'owner': 
'apache', 'create_parents': True, 'recursive_ownership': True, 'mode': 0755}
2018-02-19 01:19:33,334 - The repository with version 2.6.4.0-91 for this 
command has been marked as resolved. It will be used to report the version of 
the component which was installed
2018-02-19 01:19:33,339 - Skipping stack-select on GANGLIA because it does not 
exist in the stack-select package structure.

Command failed after 1 tries
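
Reading the traceback, the failure comes from _ensure_metadata, which looks the owner up before applying ownership and raises Fail when the account is absent. A simplified reproduction of that check (my own sketch with a deliberately nonexistent user, not the actual resource_management code):

```python
import pwd

def ensure_owner_exists(user):
    # Mirrors the check in resource_management's _ensure_metadata:
    # look the user up, and fail with the same message if it is absent.
    try:
        pwd.getpwnam(user)
    except KeyError:
        raise RuntimeError("User '{0}' doesn't exist".format(user))

try:
    ensure_owner_exists("no_such_user_for_demo")
except RuntimeError as err:
    print(err)  # -> User 'no_such_user_for_demo' doesn't exist
```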
