Added: incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/MAPREDUCE/configuration/capacity-scheduler.xml URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/MAPREDUCE/configuration/capacity-scheduler.xml?rev=1453165&view=auto ============================================================================== --- incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/MAPREDUCE/configuration/capacity-scheduler.xml (added) +++ incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/MAPREDUCE/configuration/capacity-scheduler.xml Wed Mar 6 06:01:25 2013 @@ -0,0 +1,195 @@ +<?xml version="1.0"?> + +<!-- + Licensed to the Apache Software Foundation (ASF) under one or more + contributor license agreements. See the NOTICE file distributed with + this work for additional information regarding copyright ownership. + The ASF licenses this file to You under the Apache License, Version 2.0 + (the "License"); you may not use this file except in compliance with + the License. You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +--> + +<!-- This is the configuration file for the resource manager in Hadoop. --> +<!-- You can configure various scheduling parameters related to queues. --> +<!-- The properties for a queue follow a naming convention,such as, --> +<!-- mapred.capacity-scheduler.queue.<queue-name>.property-name. 
--> + +<configuration> + + <property> + <name>mapred.capacity-scheduler.maximum-system-jobs</name> + <value>3000</value> + <description>Maximum number of jobs in the system which can be initialized, + concurrently, by the CapacityScheduler. + </description> + </property> + + <property> + <name>mapred.capacity-scheduler.queue.default.capacity</name> + <value>100</value> + <description>Percentage of the number of slots in the cluster that are + to be available for jobs in this queue. + </description> + </property> + + <property> + <name>mapred.capacity-scheduler.queue.default.maximum-capacity</name> + <value>-1</value> + <description> + maximum-capacity defines a limit beyond which a queue cannot use the capacity of the cluster. + This provides a means to limit how much excess capacity a queue can use. By default, there is no limit. + The maximum-capacity of a queue can only be greater than or equal to its minimum capacity. + The default value of -1 implies a queue can use the complete capacity of the cluster. + + This property can be used to prevent long-running jobs from occupying more than a + certain percentage of the cluster, which, in the absence of pre-emption, could affect the + capacity guarantees of other queues. + + One important thing to note is that maximum-capacity is a percentage, so based on the cluster's capacity + the maximum capacity would change. So if a large number of nodes or racks is added to the cluster, the maximum + capacity in absolute terms would increase accordingly. + </description> + </property> + + <property> + <name>mapred.capacity-scheduler.queue.default.supports-priority</name> + <value>false</value> + <description>If true, priorities of jobs will be taken into + account in scheduling decisions.
+ </description> + </property> + + <property> + <name>mapred.capacity-scheduler.queue.default.minimum-user-limit-percent</name> + <value>100</value> + <description> Each queue enforces a limit on the percentage of resources + allocated to a user at any given time, if there is competition for them. + This user limit can vary between a minimum and maximum value. The former + depends on the number of users who have submitted jobs, and the latter is + set to this property value. For example, suppose the value of this + property is 25. If two users have submitted jobs to a queue, no single + user can use more than 50% of the queue resources. If a third user submits + a job, no single user can use more than 33% of the queue resources. With 4 + or more users, no user can use more than 25% of the queue's resources. A + value of 100 implies no user limits are imposed. + </description> + </property> + + <property> + <name>mapred.capacity-scheduler.queue.default.user-limit-factor</name> + <value>1</value> + <description>The multiple of the queue capacity which can be configured to + allow a single user to acquire more slots. + </description> + </property> + + <property> + <name>mapred.capacity-scheduler.queue.default.maximum-initialized-active-tasks</name> + <value>200000</value> + <description>The maximum number of tasks, across all jobs in the queue, + which can be initialized concurrently. Once the queue's jobs exceed this + limit they will be queued on disk. + </description> + </property> + + <property> + <name>mapred.capacity-scheduler.queue.default.maximum-initialized-active-tasks-per-user</name> + <value>100000</value> + <description>The maximum number of tasks per-user, across all of the + user's jobs in the queue, which can be initialized concurrently. Once the + user's jobs exceed this limit they will be queued on disk.
+ </description> + </property> + + <property> + <name>mapred.capacity-scheduler.queue.default.init-accept-jobs-factor</name> + <value>10</value> + <description>The multiple of (maximum-system-jobs * queue-capacity) used to + determine the number of jobs which are accepted by the scheduler. + </description> + </property> + + <!-- The default configuration settings for the capacity task scheduler --> + <!-- The default values would be applied to all the queues which don't have --> + <!-- the appropriate property for the particular queue --> + <property> + <name>mapred.capacity-scheduler.default-supports-priority</name> + <value>false</value> + <description>If true, priorities of jobs will be taken into + account in scheduling decisions by default in a job queue. + </description> + </property> + + <property> + <name>mapred.capacity-scheduler.default-minimum-user-limit-percent</name> + <value>100</value> + <description>The default percentage of a queue's resources to which a single + user is limited at any given point in time. + </description> + </property> + + + <property> + <name>mapred.capacity-scheduler.default-user-limit-factor</name> + <value>1</value> + <description>The default multiple of queue-capacity which is used to + determine the number of slots a single user can consume concurrently. + </description> + </property> + + <property> + <name>mapred.capacity-scheduler.default-maximum-active-tasks-per-queue</name> + <value>200000</value> + <description>The default maximum number of tasks, across all jobs in the + queue, which can be initialized concurrently. Once the queue's jobs exceed + this limit they will be queued on disk. + </description> + </property> + + <property> + <name>mapred.capacity-scheduler.default-maximum-active-tasks-per-user</name> + <value>100000</value> + <description>The default maximum number of tasks per-user, across all of + the user's jobs in the queue, which can be initialized concurrently.
Once + the user's jobs exceed this limit they will be queued on disk. + </description> + </property> + + <property> + <name>mapred.capacity-scheduler.default-init-accept-jobs-factor</name> + <value>10</value> + <description>The default multiple of (maximum-system-jobs * queue-capacity) + used to determine the number of jobs which are accepted by the scheduler. + </description> + </property> + + <!-- Capacity scheduler Job Initialization configuration parameters --> + <property> + <name>mapred.capacity-scheduler.init-poll-interval</name> + <value>5000</value> + <description>The interval, in milliseconds, at which the job queues are + polled for jobs to initialize. + </description> + </property> + <property> + <name>mapred.capacity-scheduler.init-worker-threads</name> + <value>5</value> + <description>Number of worker threads used by the initialization poller to + initialize jobs in a set of queues. If this number equals the number of + job queues, each thread initializes jobs in a single queue. If it is + smaller, each thread is assigned a set of queues. If it is larger, the + number of threads is capped at the number of job queues. + </description> + </property> + +</configuration>
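The minimum-user-limit-percent rule described in the capacity-scheduler.xml file above can be sketched in a few lines of Python. This is an illustrative sketch only; the function name is hypothetical and not part of the CapacityScheduler's API:

```python
def user_limit_percent(active_users, minimum_user_limit_percent):
    """Per-user share of a queue's resources, as a percentage.

    Competing users split the queue evenly, but a user's share never
    drops below the configured minimum-user-limit-percent floor.
    """
    # An even split among the users currently competing for the queue...
    even_share = 100.0 / active_users
    # ...bounded below by the configured minimum-user-limit-percent.
    return max(even_share, float(minimum_user_limit_percent))


# The worked example from the property description (value 25):
print(user_limit_percent(2, 25))  # 50.0 - two users may each use up to 50%
print(user_limit_percent(3, 25))  # ~33.3
print(user_limit_percent(5, 25))  # 25.0 - the configured floor is reached
```

With the default value of 100 shipped in this file, the even split never falls below the floor, so no user limit is effectively imposed.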
Added: incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/MAPREDUCE/configuration/core-site.xml URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/MAPREDUCE/configuration/core-site.xml?rev=1453165&view=auto ============================================================================== --- incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/MAPREDUCE/configuration/core-site.xml (added) +++ incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/MAPREDUCE/configuration/core-site.xml Wed Mar 6 06:01:25 2013 @@ -0,0 +1,20 @@ +<?xml version="1.0"?> +<?xml-stylesheet type="text/xsl" href="configuration.xsl"?> +<!-- + Licensed to the Apache Software Foundation (ASF) under one or more + contributor license agreements. See the NOTICE file distributed with + this work for additional information regarding copyright ownership. + The ASF licenses this file to You under the Apache License, Version 2.0 + (the "License"); you may not use this file except in compliance with + the License. You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
+--> +<configuration> +</configuration> Added: incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/MAPREDUCE/configuration/mapred-queue-acls.xml URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/MAPREDUCE/configuration/mapred-queue-acls.xml?rev=1453165&view=auto ============================================================================== --- incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/MAPREDUCE/configuration/mapred-queue-acls.xml (added) +++ incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/MAPREDUCE/configuration/mapred-queue-acls.xml Wed Mar 6 06:01:25 2013 @@ -0,0 +1,39 @@ +<?xml version="1.0"?> +<?xml-stylesheet type="text/xsl" href="configuration.xsl"?> + +<!-- + Licensed to the Apache Software Foundation (ASF) under one or more + contributor license agreements. See the NOTICE file distributed with + this work for additional information regarding copyright ownership. + The ASF licenses this file to You under the Apache License, Version 2.0 + (the "License"); you may not use this file except in compliance with + the License. You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
+--> + +<!-- mapred-queue-acls.xml --> +<configuration> + + +<!-- queue default --> + + <property> + <name>mapred.queue.default.acl-submit-job</name> + <value>*</value> + </property> + + <property> + <name>mapred.queue.default.acl-administer-jobs</name> + <value>*</value> + </property> + + <!-- END ACLs --> + +</configuration> Added: incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/MAPREDUCE/configuration/mapred-site.xml URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/MAPREDUCE/configuration/mapred-site.xml?rev=1453165&view=auto ============================================================================== --- incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/MAPREDUCE/configuration/mapred-site.xml (added) +++ incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/MAPREDUCE/configuration/mapred-site.xml Wed Mar 6 06:01:25 2013 @@ -0,0 +1,531 @@ +<?xml version="1.0"?> +<?xml-stylesheet type="text/xsl" href="configuration.xsl"?> + +<!-- + Licensed to the Apache Software Foundation (ASF) under one or more + contributor license agreements. See the NOTICE file distributed with + this work for additional information regarding copyright ownership. + The ASF licenses this file to You under the Apache License, Version 2.0 + (the "License"); you may not use this file except in compliance with + the License. You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +--> + +<!-- Put site-specific property overrides in this file. 
--> + +<configuration xmlns:xi="http://www.w3.org/2001/XInclude"> + +<!-- i/o properties --> + + <property> + <name>io.sort.mb</name> + <value></value> + <description>No description</description> + </property> + + <property> + <name>io.sort.record.percent</name> + <value>.2</value> + <description>No description</description> + </property> + + <property> + <name>io.sort.spill.percent</name> + <value></value> + <description>No description</description> + </property> + + <property> + <name>io.sort.factor</name> + <value>100</value> + <description>No description</description> + </property> + +<!-- map/reduce properties --> + +<property> + <name>mapred.tasktracker.tasks.sleeptime-before-sigkill</name> + <value>250</value> + <description>Normally, this is the amount of time before killing + processes, and the recommended default is 5 seconds, i.e. a value of + 5000 milliseconds. In this case, we are using it solely to blast tasks before + killing them, and killing them very quickly (1/4 second) to guarantee + that we do not leave VMs around for later jobs. + </description> +</property> + + <property> + <name>mapred.job.tracker.handler.count</name> + <value>50</value> + <description> + The number of server threads for the JobTracker. This should be roughly + 4% of the number of tasktracker nodes.
+ </description> + </property> + + <property> + <name>mapred.system.dir</name> + <value>/mapred/system</value> + <description>No description</description> + <final>true</final> + </property> + + <property> + <name>mapred.job.tracker</name> + <!-- cluster variant --> + <value></value> + <description>No description</description> + <final>true</final> + </property> + + <property> + <name>mapred.job.tracker.http.address</name> + <!-- cluster variant --> + <value></value> + <description>No description</description> + <final>true</final> + </property> + + <property> + <!-- cluster specific --> + <name>mapred.local.dir</name> + <value></value> + <description>No description</description> + <final>true</final> + </property> + + <property> + <name>mapreduce.cluster.administrators</name> + <value> hadoop</value> + </property> + + <property> + <name>mapred.reduce.parallel.copies</name> + <value>30</value> + <description>No description</description> + </property> + + <property> + <name>mapred.tasktracker.map.tasks.maximum</name> + <value></value> + <description>No description</description> + </property> + + <property> + <name>mapred.tasktracker.reduce.tasks.maximum</name> + <value></value> + <description>No description</description> + </property> + + <property> + <name>tasktracker.http.threads</name> + <value>50</value> + </property> + + <property> + <name>mapred.map.tasks.speculative.execution</name> + <value>false</value> + <description>If true, then multiple instances of some map tasks + may be executed in parallel.</description> + </property> + + <property> + <name>mapred.reduce.tasks.speculative.execution</name> + <value>false</value> + <description>If true, then multiple instances of some reduce tasks + may be executed in parallel.</description> + </property> + + <property> + <name>mapred.reduce.slowstart.completed.maps</name> + <value>0.05</value> + </property> + + <property> + <name>mapred.inmem.merge.threshold</name> + <value>1000</value> + <description>The threshold, 
in terms of the number of files, + for the in-memory merge process. When we accumulate the threshold number of files + we initiate the in-memory merge and spill to disk. A value of 0 or less + indicates that no threshold is applied and that only the ramfs's memory + consumption triggers the merge. + </description> + </property> + + <property> + <name>mapred.job.shuffle.merge.percent</name> + <value>0.66</value> + <description>The usage threshold at which an in-memory merge will be + initiated, expressed as a percentage of the total memory allocated to + storing in-memory map outputs, as defined by + mapred.job.shuffle.input.buffer.percent. + </description> + </property> + + <property> + <name>mapred.job.shuffle.input.buffer.percent</name> + <value>0.7</value> + <description>The percentage of memory to be allocated from the maximum heap + size to storing map outputs during the shuffle. + </description> + </property> + + <property> + <name>mapred.map.output.compression.codec</name> + <value></value> + <description>If the map outputs are compressed, how should they be + compressed? + </description> + </property> + +<property> + <name>mapred.output.compression.type</name> + <value>BLOCK</value> + <description>If the job outputs are to be compressed as SequenceFiles, how should + they be compressed? Should be one of NONE, RECORD or BLOCK.
+ </description> +</property> + + + <property> + <name>mapred.jobtracker.completeuserjobs.maximum</name> + <value>0</value> + </property> + + <property> + <name>mapred.jobtracker.taskScheduler</name> + <value></value> + </property> + + <property> + <name>mapred.jobtracker.restart.recover</name> + <value>false</value> + <description>"true" to enable (job) recovery upon restart, + "false" to start afresh + </description> + </property> + + <property> + <name>mapred.job.reduce.input.buffer.percent</name> + <value>0.0</value> + <description>The percentage of memory, relative to the maximum heap size, to + retain map outputs during the reduce. When the shuffle is concluded, any + remaining map outputs in memory must consume less than this threshold before + the reduce can begin. + </description> + </property> + + <property> + <name>mapreduce.reduce.input.limit</name> + <value>10737418240</value> + <description>The limit on the input size of the reduce. (This value + is 10 GB.) If the estimated input size of the reduce is greater than + this value, the job is failed. A value of -1 means that there is no limit + set. </description> +</property> + + + <!-- copied from kryptonite configuration --> + <property> + <name>mapred.compress.map.output</name> + <value></value> + </property> + + + <property> + <name>mapred.task.timeout</name> + <value>600000</value> + <description>The number of milliseconds before a task will be + terminated if it neither reads an input, writes an output, nor + updates its status string. + </description> + </property> + + <property> + <name>jetty.connector</name> + <value>org.mortbay.jetty.nio.SelectChannelConnector</value> + <description>No description</description> + </property> + + <property> + <name>mapred.task.tracker.task-controller</name> + <value></value> + <description> + TaskController which is used to launch and manage task execution.
+ </description> + </property> + + <property> + <name>mapred.child.root.logger</name> + <value>INFO,TLA</value> + </property> + + <property> + <name>mapred.child.java.opts</name> + <value></value> + + <description>No description</description> + </property> + + <property> + <name>mapred.cluster.map.memory.mb</name> + <value></value> + </property> + + <property> + <name>mapred.cluster.reduce.memory.mb</name> + <value></value> + </property> + + <property> + <name>mapred.job.map.memory.mb</name> + <value></value> + </property> + + <property> + <name>mapred.job.reduce.memory.mb</name> + <value></value> + </property> + + <property> + <name>mapred.cluster.max.map.memory.mb</name> + <value></value> + </property> + + <property> + <name>mapred.cluster.max.reduce.memory.mb</name> + <value></value> + </property> + +<property> + <name>mapred.hosts</name> + <value></value> +</property> + +<property> + <name>mapred.hosts.exclude</name> + <value></value> +</property> + +<property> + <name>mapred.max.tracker.blacklists</name> + <value>16</value> + <description> + If a node is reported blacklisted by 16 successful jobs within the timeout window, it will be graylisted. + </description> +</property> + +<property> + <name>mapred.healthChecker.script.path</name> + <value></value> +</property> + +<property> + <name>mapred.healthChecker.interval</name> + <value>135000</value> +</property> + +<property> + <name>mapred.healthChecker.script.timeout</name> + <value>60000</value> +</property> + +<property> + <name>mapred.job.tracker.persist.jobstatus.active</name> + <value>false</value> + <description>Indicates whether persistence of job status information is + active or not. + </description> +</property> + +<property> + <name>mapred.job.tracker.persist.jobstatus.hours</name> + <value>1</value> + <description>The number of hours job status information is persisted in DFS. + The job status information will be available after it drops off the memory + queue and between jobtracker restarts.
With a zero value the job status + information is not persisted at all in DFS. + </description> +</property> + +<property> + <name>mapred.job.tracker.persist.jobstatus.dir</name> + <value></value> + <description>The directory where the job status information is persisted + in a file system to be available after it drops off the memory queue and + between jobtracker restarts. + </description> +</property> + +<property> + <name>mapred.jobtracker.retirejob.check</name> + <value>10000</value> +</property> + +<property> + <name>mapred.jobtracker.retirejob.interval</name> + <value>0</value> +</property> + +<property> + <name>mapred.job.tracker.history.completed.location</name> + <value>/mapred/history/done</value> + <description>No description</description> +</property> + +<property> + <name>mapred.task.maxvmem</name> + <value></value> + <final>true</final> + <description>No description</description> +</property> + +<property> + <name>mapred.jobtracker.maxtasks.per.job</name> + <value></value> + <final>true</final> + <description>The maximum number of tasks for a single job. + A value of -1 indicates that there is no maximum. </description> +</property> + +<property> + <name>mapreduce.fileoutputcommitter.marksuccessfuljobs</name> + <value>false</value> +</property> + +<property> + <name>mapred.userlog.retain.hours</name> + <value></value> +</property> + +<property> + <name>mapred.job.reuse.jvm.num.tasks</name> + <value>1</value> + <description> + How many tasks to run per JVM. If set to -1, there is no limit. + </description> + <final>true</final> +</property> + +<property> + <name>mapreduce.jobtracker.kerberos.principal</name> + <value></value> + <description> + JT user name key. + </description> +</property> + +<property> + <name>mapreduce.tasktracker.kerberos.principal</name> + <value></value> + <description> + TT user name key. "_HOST" is replaced by the host name of the task tracker.
+ </description> +</property> + + + <property> + <name>hadoop.job.history.user.location</name> + <value>none</value> + <final>true</final> + </property> + + + <property> + <name>mapreduce.jobtracker.keytab.file</name> + <value></value> + <description> + The keytab for the jobtracker principal. + </description> + +</property> + + <property> + <name>mapreduce.tasktracker.keytab.file</name> + <value></value> + <description>The filename of the keytab for the task tracker</description> + </property> + + <property> + <name>mapreduce.jobtracker.staging.root.dir</name> + <value>/user</value> + <description>The Path prefix for where the staging directories should be placed. The next level is always the user's + name. It is a path in the default file system.</description> + </property> + + <property> + <name>mapreduce.tasktracker.group</name> + <value>hadoop</value> + <description>The group that the task controller uses for accessing the task controller. The mapred user must be a member and users should *not* be members.</description> + + </property> + + <property> + <name>mapreduce.jobtracker.split.metainfo.maxsize</name> + <value>50000000</value> + <final>true</final> + <description>If the size of the split metainfo file is larger than this, the JobTracker will fail the job during + initialize. + </description> + </property> + <property> + <name>mapreduce.history.server.embedded</name> + <value>false</value> + <description>Should job history server be embedded within Job tracker +process</description> + <final>true</final> + </property> + + <property> + <name>mapreduce.history.server.http.address</name> + <!-- cluster variant --> + <value></value> + <description>Http address of the history server</description> + <final>true</final> + </property> + + <property> + <name>mapreduce.jobhistory.kerberos.principal</name> + <!-- cluster variant --> + <value></value> + <description>Job history user name key. 
(must map to same user as JT +user)</description> + </property> + + <property> + <name>mapreduce.jobhistory.keytab.file</name> + <!-- cluster variant --> + <value></value> + <description>The keytab for the job history server principal.</description> + </property> + +<property> + <name>mapred.jobtracker.blacklist.fault-timeout-window</name> + <value>180</value> + <description> + 3-hour sliding window (value is in minutes) + </description> +</property> + +<property> + <name>mapred.jobtracker.blacklist.fault-bucket-width</name> + <value>15</value> + <description> + 15-minute bucket size (value is in minutes) + </description> +</property> + +<property> + <name>mapred.queue.names</name> + <value>default</value> + <description> Comma separated list of queues configured for this jobtracker.</description> +</property> + +</configuration> Added: incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/MAPREDUCE/metainfo.xml URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/MAPREDUCE/metainfo.xml?rev=1453165&view=auto ============================================================================== --- incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/MAPREDUCE/metainfo.xml (added) +++ incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/MAPREDUCE/metainfo.xml Wed Mar 6 06:01:25 2013 @@ -0,0 +1,41 @@ +<?xml version="1.0"?> +<!-- + Licensed to the Apache Software Foundation (ASF) under one or more + contributor license agreements. See the NOTICE file distributed with + this work for additional information regarding copyright ownership. + The ASF licenses this file to You under the Apache License, Version 2.0 + (the "License"); you may not use this file except in compliance with + the License. 
You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +--> +<metainfo> + <user>mapred</user> + <comment>Apache Hadoop Distributed Processing Framework</comment> + <version>1.1.2</version> + + <components> + <component> + <name>JOBTRACKER</name> + <category>MASTER</category> + </component> + + <component> + <name>TASKTRACKER</name> + <category>SLAVE</category> + </component> + + <component> + <name>MAPREDUCE_CLIENT</name> + <category>CLIENT</category> + </component> + </components> + + +</metainfo> Added: incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/NAGIOS/metainfo.xml URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/NAGIOS/metainfo.xml?rev=1453165&view=auto ============================================================================== --- incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/NAGIOS/metainfo.xml (added) +++ incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/NAGIOS/metainfo.xml Wed Mar 6 06:01:25 2013 @@ -0,0 +1,30 @@ +<?xml version="1.0"?> +<!-- + Licensed to the Apache Software Foundation (ASF) under one or more + contributor license agreements. See the NOTICE file distributed with + this work for additional information regarding copyright ownership. + The ASF licenses this file to You under the Apache License, Version 2.0 + (the "License"); you may not use this file except in compliance with + the License. 
You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +--> +<metainfo> + <user>root</user> + <comment>Nagios Monitoring and Alerting system</comment> + <version>3.2.3</version> + + <components> + <component> + <name>NAGIOS_SERVER</name> + <category>MASTER</category> + </component> + </components> + +</metainfo> Added: incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/OOZIE/configuration/oozie-site.xml URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/OOZIE/configuration/oozie-site.xml?rev=1453165&view=auto ============================================================================== --- incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/OOZIE/configuration/oozie-site.xml (added) +++ incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/OOZIE/configuration/oozie-site.xml Wed Mar 6 06:01:25 2013 @@ -0,0 +1,245 @@ +<?xml version="1.0"?> +<!-- + Licensed to the Apache Software Foundation (ASF) under one + or more contributor license agreements. See the NOTICE file + distributed with this work for additional information + regarding copyright ownership. The ASF licenses this file + to you under the Apache License, Version 2.0 (the + "License"); you may not use this file except in compliance + with the License. 
You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +--> + +<configuration> + +<!-- + Refer to the oozie-default.xml file for the complete list of + Oozie configuration properties and their default values. +--> + <property> + <name>oozie.base.url</name> + <value>http://localhost:11000/oozie</value> + <description>Base Oozie URL.</description> + </property> + + <property> + <name>oozie.system.id</name> + <value>oozie-${user.name}</value> + <description> + The Oozie system ID. + </description> + </property> + + <property> + <name>oozie.systemmode</name> + <value>NORMAL</value> + <description> + System mode for Oozie at startup. + </description> + </property> + + <property> + <name>oozie.service.AuthorizationService.security.enabled</name> + <value>true</value> + <description> + Specifies whether security (user name/admin role) is enabled or not. + If disabled any user can manage Oozie system and manage any job. + </description> + </property> + + <property> + <name>oozie.service.PurgeService.older.than</name> + <value>30</value> + <description> + Jobs older than this value, in days, will be purged by the PurgeService. + </description> + </property> + + <property> + <name>oozie.service.PurgeService.purge.interval</name> + <value>3600</value> + <description> + Interval at which the purge service will run, in seconds. 
+ </description> + </property> + + <property> + <name>oozie.service.CallableQueueService.queue.size</name> + <value>1000</value> + <description>Max callable queue size</description> + </property> + + <property> + <name>oozie.service.CallableQueueService.threads</name> + <value>10</value> + <description>Number of threads used for executing callables</description> + </property> + + <property> + <name>oozie.service.CallableQueueService.callable.concurrency</name> + <value>3</value> + <description> + Maximum concurrency for a given callable type. + Each command is a callable type (submit, start, run, signal, job, jobs, suspend, resume, etc). + Each action type is a callable type (Map-Reduce, Pig, SSH, FS, sub-workflow, etc). + All commands that use action executors (action-start, action-end, action-kill and action-check) use + the action type as the callable type. + </description> + </property> + + <property> + <name>oozie.service.coord.normal.default.timeout</name> + <value>120</value> + <description>Default timeout for a coordinator action input check (in minutes) for a normal job. + -1 means an infinite timeout</description> + </property> + + <property> + <name>oozie.db.schema.name</name> + <value>oozie</value> + <description> + Oozie database name + </description> + </property> + + <property> + <name>oozie.service.HadoopAccessorService.jobTracker.whitelist</name> + <value> </value> + <description> + Whitelisted job tracker for Oozie service. + </description> + </property> + + <property> + <name>oozie.authentication.type</name> + <value>simple</value> + <description> + </description> + </property> + + <property> + <name>oozie.service.HadoopAccessorService.nameNode.whitelist</name> + <value> </value> + <description> + </description> + </property> + + <property> + <name>oozie.service.WorkflowAppService.system.libpath</name> + <value>/user/${user.name}/share/lib</value> + <description> + System library path to use for workflow applications. 
+ This path is added to workflow applications if their job properties set + the property 'oozie.use.system.libpath' to true. + </description> + </property> + + <property> + <name>use.system.libpath.for.mapreduce.and.pig.jobs</name> + <value>false</value> + <description> + If set to true, submissions of MapReduce and Pig jobs will automatically include + the system library path, thus not requiring users to + specify where the Pig JAR files are. Instead, the ones from the system + library path are used. + </description> + </property> + <property> + <name>oozie.authentication.kerberos.name.rules</name> + <value> + RULE:[2:$1@$0]([jt]t@.*TODO-KERBEROS-DOMAIN)s/.*/TODO-MAPREDUSER/ + RULE:[2:$1@$0]([nd]n@.*TODO-KERBEROS-DOMAIN)s/.*/TODO-HDFSUSER/ + RULE:[2:$1@$0](hm@.*TODO-KERBEROS-DOMAIN)s/.*/TODO-HBASE-USER/ + RULE:[2:$1@$0](rs@.*TODO-KERBEROS-DOMAIN)s/.*/TODO-HBASE-USER/ + DEFAULT + </value> + <description>The mapping from Kerberos principal names to local OS user names.</description> + </property> + <property> + <name>oozie.service.HadoopAccessorService.hadoop.configurations</name> + <value>*=/etc/hadoop/conf</value> + <description> + Comma-separated AUTHORITY=HADOOP_CONF_DIR, where AUTHORITY is the HOST:PORT of + the Hadoop service (JobTracker, HDFS). The wildcard '*' configuration is + used when there is no exact match for an authority. The HADOOP_CONF_DIR contains + the relevant Hadoop *-site.xml files. If the path is relative, it is looked up within + the Oozie configuration directory; the path can also be absolute (i.e. pointing + to Hadoop client conf/ directories in the local filesystem). 
+ </description> + </property> + <property> + <name>oozie.service.ActionService.executor.ext.classes</name> + <value> + org.apache.oozie.action.email.EmailActionExecutor, + org.apache.oozie.action.hadoop.HiveActionExecutor, + org.apache.oozie.action.hadoop.ShellActionExecutor, + org.apache.oozie.action.hadoop.SqoopActionExecutor, + org.apache.oozie.action.hadoop.DistcpActionExecutor + </value> + </property> + + <property> + <name>oozie.service.SchemaService.wf.ext.schemas</name> + <value>shell-action-0.1.xsd,email-action-0.1.xsd,hive-action-0.2.xsd,sqoop-action-0.2.xsd,ssh-action-0.1.xsd,distcp-action-0.1.xsd</value> + </property> + <property> + <name>oozie.service.JPAService.create.db.schema</name> + <value>false</value> + <description> + Creates Oozie DB. + + If set to true, it creates the DB schema if it does not exist. If the DB schema exists, this is a NOP. + If set to false, it does not create the DB schema. If the DB schema does not exist, startup fails. + </description> + </property> + + <property> + <name>oozie.service.JPAService.jdbc.driver</name> + <value>org.apache.derby.jdbc.EmbeddedDriver</value> + <description> + JDBC driver class. + </description> + </property> + + <property> + <name>oozie.service.JPAService.jdbc.url</name> + <value>jdbc:derby:${oozie.data.dir}/${oozie.db.schema.name}-db;create=true</value> + <description> + JDBC URL. + </description> + </property> + + <property> + <name>oozie.service.JPAService.jdbc.username</name> + <value>sa</value> + <description> + DB user name. + </description> + </property> + + <property> + <name>oozie.service.JPAService.jdbc.password</name> + <value> </value> + <description> + DB user password. + + IMPORTANT: if the password is empty, leave a single-space string; the service trims the value, + and if empty, Configuration assumes it is NULL. + </description> + </property> + + <property> + <name>oozie.service.JPAService.pool.max.active.conn</name> + <value>10</value> + <description> + Max number of connections. 
+ </description> + </property> +</configuration> \ No newline at end of file Added: incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/OOZIE/metainfo.xml URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/OOZIE/metainfo.xml?rev=1453165&view=auto ============================================================================== --- incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/OOZIE/metainfo.xml (added) +++ incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/OOZIE/metainfo.xml Wed Mar 6 06:01:25 2013 @@ -0,0 +1,35 @@ +<?xml version="1.0"?> +<!-- + Licensed to the Apache Software Foundation (ASF) under one or more + contributor license agreements. See the NOTICE file distributed with + this work for additional information regarding copyright ownership. + The ASF licenses this file to You under the Apache License, Version 2.0 + (the "License"); you may not use this file except in compliance with + the License. You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
+--> +<metainfo> + <user>root</user> + <comment>System for workflow coordination and execution of Apache Hadoop jobs</comment> + <version>3.2.0</version> + + <components> + <component> + <name>OOZIE_SERVER</name> + <category>MASTER</category> + </component> + + <component> + <name>OOZIE_CLIENT</name> + <category>CLIENT</category> + </component> + </components> + +</metainfo> Added: incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/PIG/configuration/pig.properties URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/PIG/configuration/pig.properties?rev=1453165&view=auto ============================================================================== --- incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/PIG/configuration/pig.properties (added) +++ incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/PIG/configuration/pig.properties Wed Mar 6 06:01:25 2013 @@ -0,0 +1,52 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Pig default configuration file. All values can be overwritten by pig.properties and command line arguments. 
+# see bin/pig -help + +# brief logging (no timestamps) +brief=false + +#debug level, INFO is default +debug=INFO + +#verbose print all log messages to screen (default to print only INFO and above to screen) +verbose=false + +#exectype local|mapreduce, mapreduce is default +exectype=mapreduce + +#Enable insertion of information about script into hadoop job conf +pig.script.info.enabled=true + +#Do not spill temp files smaller than this size (bytes) +pig.spill.size.threshold=5000000 +#EXPERIMENT: Activate garbage collection when spilling a file bigger than this size (bytes) +#This should help reduce the number of files being spilled. +pig.spill.gc.activation.size=40000000 + +#the following two parameters are to help estimate the reducer number +pig.exec.reducers.bytes.per.reducer=1000000000 +pig.exec.reducers.max=999 + +#Temporary location to store the intermediate data. +pig.temp.dir=/tmp/ + +#Threshold for merging FRJoin fragment files +pig.files.concatenation.threshold=100 +pig.optimistic.files.concatenation=false + +pig.disable.counter=false Added: incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/PIG/metainfo.xml URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/PIG/metainfo.xml?rev=1453165&view=auto ============================================================================== --- incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/PIG/metainfo.xml (added) +++ incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/PIG/metainfo.xml Wed Mar 6 06:01:25 2013 @@ -0,0 +1,30 @@ +<?xml version="1.0"?> +<!-- + Licensed to the Apache Software Foundation (ASF) under one or more + contributor license agreements. See the NOTICE file distributed with + this work for additional information regarding copyright ownership. 
+ The ASF licenses this file to You under the Apache License, Version 2.0 + (the "License"); you may not use this file except in compliance with + the License. You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +--> +<metainfo> + <user>root</user> + <comment>Scripting platform for analyzing large datasets</comment> + <version>0.10.1</version> + + <components> + <component> + <name>PIG</name> + <category>CLIENT</category> + </component> + </components> + +</metainfo> Added: incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/SQOOP/metainfo.xml URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/SQOOP/metainfo.xml?rev=1453165&view=auto ============================================================================== --- incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/SQOOP/metainfo.xml (added) +++ incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/SQOOP/metainfo.xml Wed Mar 6 06:01:25 2013 @@ -0,0 +1,30 @@ +<?xml version="1.0"?> +<!-- + Licensed to the Apache Software Foundation (ASF) under one or more + contributor license agreements. See the NOTICE file distributed with + this work for additional information regarding copyright ownership. + The ASF licenses this file to You under the Apache License, Version 2.0 + (the "License"); you may not use this file except in compliance with + the License. 
You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +--> +<metainfo> + <user>root</user> + <comment>Tool for transferring bulk data between Apache Hadoop and structured data stores such as relational databases</comment> + <version>1.4.2</version> + + <components> + <component> + <name>SQOOP</name> + <category>CLIENT</category> + </component> + </components> + +</metainfo> Added: incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/WEBHCAT/configuration/webhcat-site.xml URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/WEBHCAT/configuration/webhcat-site.xml?rev=1453165&view=auto ============================================================================== --- incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/WEBHCAT/configuration/webhcat-site.xml (added) +++ incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/WEBHCAT/configuration/webhcat-site.xml Wed Mar 6 06:01:25 2013 @@ -0,0 +1,126 @@ +<?xml version="1.0" encoding="UTF-8"?> +<!-- +Licensed to the Apache Software Foundation (ASF) under one +or more contributor license agreements. See the NOTICE file +distributed with this work for additional information +regarding copyright ownership. The ASF licenses this file +to you under the Apache License, Version 2.0 (the +"License"); you may not use this file except in compliance +with the License. 
You may obtain a copy of the License at + +http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +--> + +<!-- The default settings for Templeton. --> +<!-- Edit templeton-site.xml to change settings for your local --> +<!-- install. --> + +<configuration> + + <property> + <name>templeton.port</name> + <value>50111</value> + <description>The HTTP port for the main server.</description> + </property> + + <property> + <name>templeton.hadoop.conf.dir</name> + <value>/etc/hadoop/conf</value> + <description>The path to the Hadoop configuration.</description> + </property> + + <property> + <name>templeton.jar</name> + <value>/usr/lib/hcatalog/share/webhcat/svr/webhcat.jar</value> + <description>The path to the Templeton jar file.</description> + </property> + + <property> + <name>templeton.libjars</name> + <value>/usr/lib/zookeeper/zookeeper.jar</value> + <description>Jars to add to the classpath.</description> + </property> + + + <property> + <name>templeton.hadoop</name> + <value>/usr/bin/hadoop</value> + <description>The path to the Hadoop executable.</description> + </property> + + <property> + <name>templeton.pig.archive</name> + <value>hdfs:///apps/webhcat/pig.tar.gz</value> + <description>The path to the Pig archive.</description> + </property> + + <property> + <name>templeton.pig.path</name> + <value>pig.tar.gz/pig/bin/pig</value> + <description>The path to the Pig executable.</description> + </property> + + <property> + <name>templeton.hcat</name> + <value>/usr/bin/hcat</value> + <description>The path to the hcatalog executable.</description> + </property> + + <property> + <name>templeton.hive.archive</name> + <value>hdfs:///apps/webhcat/hive.tar.gz</value> + 
<description>The path to the Hive archive.</description> + </property> + + <property> + <name>templeton.hive.path</name> + <value>hive.tar.gz/hive/bin/hive</value> + <description>The path to the Hive executable.</description> + </property> + + <property> + <name>templeton.hive.properties</name> + <value></value> + <description>Properties to set when running hive.</description> + </property> + + + <property> + <name>templeton.zookeeper.hosts</name> + <value></value> + <description>ZooKeeper servers, as comma-separated host:port pairs</description> + </property> + + <property> + <name>templeton.storage.class</name> + <value>org.apache.hcatalog.templeton.tool.ZooKeeperStorage</value> + <description>The class to use as storage</description> + </property> + + <property> + <name>templeton.override.enabled</name> + <value>false</value> + <description> + Enable the override path in templeton.override.jars + </description> + </property> + + <property> + <name>templeton.streaming.jar</name> + <value>hdfs:///apps/webhcat/hadoop-streaming.jar</value> + <description>The HDFS path to the Hadoop streaming jar file.</description> + </property> + + <property> + <name>templeton.exec.timeout</name> + <value>60000</value> + <description>Timeout, in milliseconds, for Templeton API calls</description> + </property> + +</configuration> Added: incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/WEBHCAT/metainfo.xml URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/WEBHCAT/metainfo.xml?rev=1453165&view=auto ============================================================================== --- incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/WEBHCAT/metainfo.xml (added) +++ incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/WEBHCAT/metainfo.xml Wed Mar 6 06:01:25 2013 @@ -0,0 +1,31 @@ +<?xml version="1.0"?> +<!-- + Licensed to the Apache Software Foundation 
(ASF) under one or more + contributor license agreements. See the NOTICE file distributed with + this work for additional information regarding copyright ownership. + The ASF licenses this file to You under the Apache License, Version 2.0 + (the "License"); you may not use this file except in compliance with + the License. You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +--> +<metainfo> + <user>root</user> + <comment>Web-based REST API for HCatalog (Templeton)</comment> + <version>0.5.0</version> + + <components> + <component> + <name>WEBHCAT_SERVER</name> + <category>MASTER</category> + </component> + </components> + + +</metainfo> Added: incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/ZOOKEEPER/metainfo.xml URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/ZOOKEEPER/metainfo.xml?rev=1453165&view=auto ============================================================================== --- incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/ZOOKEEPER/metainfo.xml (added) +++ incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.3.0/services/ZOOKEEPER/metainfo.xml Wed Mar 6 06:01:25 2013 @@ -0,0 +1,35 @@ +<?xml version="1.0"?> +<!-- + Licensed to the Apache Software Foundation (ASF) under one or more + contributor license agreements. See the NOTICE file distributed with + this work for additional information regarding copyright ownership. 
+ The ASF licenses this file to You under the Apache License, Version 2.0 + (the "License"); you may not use this file except in compliance with + the License. You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +--> +<metainfo> + <user>root</user> + <comment>Centralized service which provides highly reliable distributed coordination</comment> + <version>3.4.5</version> + + <components> + <component> + <name>ZOOKEEPER_SERVER</name> + <category>MASTER</category> + </component> + + <component> + <name>ZOOKEEPER_CLIENT</name> + <category>CLIENT</category> + </component> + </components> + +</metainfo>
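All of the *-site.xml files added in this commit follow the same Hadoop-style schema: a `<configuration>` root holding `<property>` elements, each with a `<name>`, a `<value>`, and an optional `<description>`. As a hypothetical illustration (not part of the commit), such a file can be parsed with nothing more than the Python standard library:

```python
# Sketch: read a Hadoop-style configuration XML (e.g. the oozie-site.xml
# above) into a {property-name: value} dict. The function name and sample
# document are illustrative, not part of the Ambari source tree.
import xml.etree.ElementTree as ET

def parse_hadoop_config(xml_text):
    """Return {name: value} for every <property> under <configuration>."""
    root = ET.fromstring(xml_text)
    conf = {}
    for prop in root.findall("property"):
        name = prop.findtext("name")
        value = prop.findtext("value") or ""  # empty <value></value> -> ""
        conf[name.strip()] = value.strip()
    return conf

sample = """<configuration>
  <property>
    <name>oozie.base.url</name>
    <value>http://localhost:11000/oozie</value>
    <description>Base Oozie URL.</description>
  </property>
</configuration>"""

print(parse_hadoop_config(sample))  # {'oozie.base.url': 'http://localhost:11000/oozie'}
```

Ambari reads these stack definition files to seed default configurations for each service; the `<description>` text is surfaced to users, which is why the wording fixes above matter.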
