Sorry, I guess I was looking for a way to query the properties through a REST interface before things are really deployed. I am attempting to automate the configuration entirely through the Ambari REST API.
In my example from the first note, that API into the stack provides almost all of the configuration parameters. I have a few questions about it:

- What's the purpose of this API?
- Is there a reason the list of parameters is incomplete?
- How does this list get generated? Is it controlled by something defined in the stack definition?
- If it's not in the stack, would it make sense to go in that direction?

Does that make sense?

Thanks,
-Chris

On Mon, Apr 7, 2014 at 2:55 PM, Siddharth Wagle <[email protected]> wrote:

> Hi Chris,
>
> If you look at global.xml in the stack definition you should be able to
> find most, if not all, of the above properties.
> These are properties that are required to configure the cluster but do not
> belong to a stack component; the web UI sets the appropriate values at
> runtime. The configuration type is "global".
>
> I am currently working on a Jira to refactor properties with empty values
> that might be of help:
> https://issues.apache.org/jira/browse/AMBARI-4921
> I should have a patch for trunk in a couple of days.
>
> -Sid
>
> On Mon, Apr 7, 2014 at 2:26 PM, Erin Boyd <[email protected]> wrote:
>
>> Try grepping in /etc/conf.
>> Erin
>>
>> -----Original Message-----
>> From: Chris Mildebrandt [[email protected]]
>> Received: Monday, 07 Apr 2014, 1:23 PM
>> To: ambari-user [[email protected]]
>> Subject: Finding valid properties for config via Ambari 1.5
>>
>> Hello,
>>
>> I'd like to take a property name and match it to a configuration type
>> (core-site, global-site, etc.).
>> I have found that I can get a list of properties with their types here:
>>
>> http://host:8080/api/v1/stacks/HDP/versions/2.0.6/stackServices/?fields=configurations/StackConfigurations/type
>>
>> However, I also noticed there are some missing values:
>>
>> hive_database
>> templeton.hive.properties
>> hive_hostname
>> hadoop.proxyuser.hive.groups
>> hive_jdbc_connection_url
>> hadoop_conf_dir
>> hbase_tmp_dir
>> yarn.scheduler.capacity.root.default.acl_administer_queue
>> hive_jdbc_driver
>> hadoop.proxyuser.hive.hosts
>> oozie.service.HadoopAccessorService.jobTracker.whitelist
>> hive_metastore_user_passwd
>> fs_checkpoint_size
>> apache_artifacts_download_url
>> oozie.service.HadoopAccessorService.nameNode.whitelist
>> dfs_exclude
>> hive_database_type
>> run_dir
>> hadoop.proxyuser.oozie.groups
>> smokeuser
>> hadoop.proxyuser.hcat.hosts
>> nagios_contact
>> mapreduce.cluster.local.dir
>> hregion_memstoreflushsize
>> oozie_jdbc_connection_url
>> hcat_conf_dir
>> oozie_database
>> yarn.nodemanager.aux-services.mapreduce.shuffle.class
>> hadoop.proxyuser.oozie.hosts
>> yarn.scheduler.capacity.resource-calculator
>> oozie_jdbc_driver
>> oozie_hostname
>> hadoop.proxyuser.hcat.groups
>> dfs.block.local-path-access.user
>> java64_home
>> gpl_artifacts_download_url
>> oozie_database_type
>> oozie_metastore_user_passwd
>> user_group
>> nagios_web_password
>> hive_database_name
>>
>> Is there a way to programmatically know where these parameters should
>> live, or that they even exist?
>>
>> Thanks,
>> -Chris
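The lookup Chris is after — mapping a property name to its configuration type from the stack API response — can be sketched roughly as follows. This is a minimal illustration, not from the thread: the JSON shape below is an assumption based on the `configurations/StackConfigurations/type` fields named in the query URL above, and the exact structure should be verified against a live Ambari server.

```python
import json

# Illustrative response, trimmed to the fields the query above asks for.
# The actual Ambari 1.5 payload may include more fields and more nesting.
sample_response = """
{
  "items": [
    {
      "configurations": [
        {"StackConfigurations": {"property_name": "fs.defaultFS", "type": "core-site.xml"}},
        {"StackConfigurations": {"property_name": "dfs.replication", "type": "hdfs-site.xml"}}
      ]
    }
  ]
}
"""

def property_types(response_text):
    """Build a property-name -> configuration-type lookup from the
    stackServices response text."""
    data = json.loads(response_text)
    mapping = {}
    for service in data.get("items", []):
        for conf in service.get("configurations", []):
            sc = conf["StackConfigurations"]
            # Drop the ".xml" suffix so the value matches the bare
            # configuration-type name (core-site, hdfs-site, ...).
            mapping[sc["property_name"]] = sc["type"].rsplit(".xml", 1)[0]
    return mapping

print(property_types(sample_response))
# {'fs.defaultFS': 'core-site', 'dfs.replication': 'hdfs-site'}
```

In a real automation script the `sample_response` string would be replaced by the body of a GET against the stack URL shown earlier; properties absent from the resulting map (like the ones in the list above) would then be exactly the ones the API does not describe.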
