How does one go about inspecting or adding to clusterHostInfo?
Is there an API call to view those values?
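For context, on an agent host the clusterHostInfo section shows up inside the command JSON the Ambari server hands to stack scripts (on agents these files live under /var/lib/ambari-agent/data/). A minimal sketch of reading it, using a made-up JSON excerpt (the host names and roles below are hypothetical, not from a real cluster):

```python
import json

# Hypothetical excerpt of a command JSON as passed to a stack script;
# the structure mirrors the clusterHostInfo section, but the values
# here are invented for illustration.
command_json = """
{
  "clusterHostInfo": {
    "all_hosts": ["node1.example.com", "node2.example.com"],
    "slave_hosts": ["node2.example.com"]
  }
}
"""

command = json.loads(command_json)
host_info = command["clusterHostInfo"]

# Each entry maps a role key to the list of hosts carrying that role.
for role, hosts in sorted(host_info.items()):
    print("%s -> %s" % (role, ", ".join(hosts)))
```

Dumping one of those command-*.json files on an agent is often the quickest way to see exactly which keys a given stack actually receives.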
Erin


----- Original Message -----
From: "Erin Boyd" <[email protected]>
To: "Nate Cole" <[email protected]>, "Mahadev Konar" 
<[email protected]>, "Yusaku Sako" <[email protected]>, "Jaimin 
Jetly" <[email protected]>, [email protected]
Sent: Monday, June 23, 2014 3:12:11 PM
Subject: help....config dictionary errors galore

Hi,
Scott Creeley and I have been trying to get all the services running in the 
2.1.GlusterFS stack.
Several of the services won't start due to an error like this:

  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py",
 line 75, in __getattr__
    raise Fail("Configuration parameter '"+self.name+"' was not found in 
configurations dictionary!")
Fail: Configuration parameter 'user_group' was not found in configurations 
dictionary!
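As far as I can tell, that traceback comes from the configurations dictionary raising on any attribute access for a key the stack never populated. A simplified stand-in (not the real resource_management class) that reproduces the behavior:

```python
# Simplified stand-in for resource_management's config dictionary:
# attribute access on a missing key raises Fail, which is exactly the
# error seen when a stack (e.g. one without HDFS) omits a parameter.
class Fail(Exception):
    pass

class ConfigDictionary(dict):
    def __getattr__(self, name):
        # __getattr__ only runs when normal attribute lookup fails,
        # so present keys resolve and absent keys raise.
        if name not in self:
            raise Fail("Configuration parameter '%s' was not found in "
                       "configurations dictionary!" % name)
        return self[name]

config = ConfigDictionary({"hdfs_user": "hdfs"})
print(config.hdfs_user)     # present key: resolves normally
try:
    config.user_group       # absent key: raises Fail, as in the traceback
except Fail as e:
    print(e)
```

So the failure is not a parse error; it simply means the key was never defined anywhere in the stack's configuration.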


Other missing configuration parameters include slave_hosts, namenode_hosts,
etc.


So while some of these are parameters I would expect for any Hadoop stack (like 
user_group), parameters like namenode_hosts are specific to HDFS.

Therefore, is there a way to create a flag in Ambari, say 'noHDFS', that 
can be used as a conditional around these values? Should this
just be a standard global, or should the value be loaded at a different level, 
such as a system-level property?
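One pattern I have seen in stack params.py files is to look parameters up with a fallback rather than gating on a global flag, so HDFS-specific keys simply default when absent. The helper below is a self-contained sketch of that idea (resource_management ships a similar default() lookup; this version, and the sample config, are stand-ins for illustration):

```python
# Sketch of a fallback lookup: walk a slash-separated path through the
# command dictionary and return default_value when any segment is
# missing (e.g. in a GlusterFS-only stack with no HDFS keys).
def default(config, path, default_value):
    node = config
    for part in path.strip("/").split("/"):
        if not isinstance(node, dict) or part not in node:
            return default_value
        node = node[part]
    return node

# Hypothetical command dictionary for a stack without a NameNode.
config = {
    "clusterHostInfo": {"slave_hosts": ["node2.example.com"]},
    "configurations": {"cluster-env": {"user_group": "hadoop"}},
}

user_group = default(config, "/configurations/cluster-env/user_group", "hadoop")
# namenode_host is absent, so this falls back to an empty list instead
# of raising the "was not found in configurations dictionary!" Fail.
namenode_hosts = default(config, "/clusterHostInfo/namenode_host", [])
has_hdfs = bool(namenode_hosts)
print(user_group, has_hdfs)
```

With that shape, `has_hdfs` itself can serve as the conditional, derived from the presence of HDFS hosts rather than a separate 'noHDFS' global.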

HBase seems to have a lot of assumptions around HDFS being installed (e.g., 
loading values from hdfs-site.xml).

Is there a best practice for such processes?

Let us know.
Thanks,
Erin
