http://git-wip-us.apache.org/repos/asf/hadoop/blob/e2a9fa84/hadoop-common-project/hadoop-common/src/site/markdown/registry/registry-security.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/registry/registry-security.md b/hadoop-common-project/hadoop-common/src/site/markdown/registry/registry-security.md
new file mode 100644
index 0000000..6317681
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/registry/registry-security.md
@@ -0,0 +1,120 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+# Registry Security
+
+This document describes how security is implemented in the service registry.
+
+In a non-Kerberos-enabled Hadoop cluster, the Registry does not offer any
+security at all: the registry is world writeable.
+
+This document is therefore relevant only to secure clusters.
+
+## Security Model
+
+The security model of the registry is designed to meet the following goals
+of a secure registry:
+1. Deliver functional security on a secure ZK installation.
+1. Allow the RM to create per-user regions of the registration space.
+1. Allow applications belonging to a user to write registry entries
+into their part of the space. These may be short-lived or long-lived
+YARN applications, or they may be static applications.
+1. Prevent other users from writing into another user's part of the registry.
+1. Allow system services to register to a `/services` section of the registry.
+1. Provide read access to clients of a registry.
+1. Permit future support of DNS.
+1. Permit the future support of registering data private to a user.
+This allows a service to publish binding credentials (keys &c) for clients to use.
+1. Not require a ZK keytab in every user's home directory in a YARN cluster.
+This implies that kerberos credentials cannot be used by YARN applications.
+
+
+ZK security uses an ACL model, documented in
+[Zookeeper and SASL](https://cwiki.apache.org/confluence/display/ZOOKEEPER/Zookeeper+and+SASL),
+in which different authentication schemes may be used to restrict access
+to different znodes. This permits the registry to use a mixed
+Kerberos + private password model.
+
+* The YARN-based registry (the `RMRegistryOperationsService`) uses kerberos
+as the authentication mechanism for YARN itself.
+* The registry configures the base of the registry to be writeable only by
+itself and other hadoop system accounts holding the relevant kerberos
+credentials.
+* The user specific parts of the tree are also configured to allow the same
+system accounts to write and manipulate that part of the tree.
+* User accounts are created with a `(username,password)` keypair granted
+write access to their part of the tree.
+* The secret part of the keypair is stored in the user's home directory
+on HDFS, using the Hadoop Credentials API (see the retrieval sketch after
+this list).
+* Initially, the entire registry tree will be world readable.
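+
+As a non-normative sketch, a client process could retrieve its registry secret
+through the Hadoop credential provider API. The provider path and the alias
+`registry.user.secret` below are purely illustrative assumptions; they are not
+names defined by the registry itself:
+
+    // Minimal sketch: read a per-user secret via the Hadoop Credentials API.
+    Configuration conf = new Configuration();
+    // hypothetical JCEKS credential store in the user's HDFS home directory
+    conf.set("hadoop.security.credential.provider.path",
+        "jceks://hdfs/user/alice/registry.jceks");
+    char[] secret = conf.getPassword("registry.user.secret");
+    if (secret == null) {
+      throw new IOException("registry secret not found");
+    }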
+
+
+What are the limitations of such a scheme?
+
+1. It is critical that the user-specific registry keypair is kept a secret.
+This relies on filesystem security to keep the file readable only
+ by the (authenticated) user.
+1. As the [ZK Documentation says](http://zookeeper.apache.org/doc/r3.4.6/zookeeperProgrammers.html#sc_ZooKeeperAccessControl),
+*"Authentication is done by sending the username:password in clear text"*.
+1. While it is possible to change the password for an account,
+this involves a recursive walk down the registry tree, and will stop all
+running services from being able to authenticate for write access until they
+reload the key.
+1. A world-readable registry tree exposes information about the cluster.
+There is some mitigation here in that access may be restricted by IP address.
+1. There's also the need to propagate information from the registry down to
+the clients for setting up ACLs.
+
+
+
+## ACL Configuration propagation
+
+The registry manager cannot rely on clients consistently setting
+ZK permissions. At the very least, it cannot rely on client applications
+not unintentionally setting the wrong values for the accounts of the
+system services.
+
+*Solution*: Initially, a registry permission is used here.
+
+### Automatic domain extension
+
+In a kerberos domain, it is possible for a kerberized client to determine the
+realm of a cluster at run time from the local
+user's kerberos credentials as used to talk to YARN or HDFS.
+
+This can be used to auto-generate account names with the correct realm for the
+system accounts, and hence helps keep the default constants valid.
+
+This allows the registry to support a default configuration value for
+`hadoop.registry.system.accounts` of:
+
+      "sasl:yarn@, sasl:mapred@, sasl:hdfs@, sasl:hadoop@";
+
+#### In-registry publishing of core binding data
+
+Another strategy could be to have a `ServiceRecord` at the root
+of the registry that actually defines the registry, including listing
+those default binding values in the `data` field.
+
+### Auditing
+
+Something (perhaps the RM) could scan a user's portion of the registry and
+detect some ACL problems: IP/world access too lax, admin account settings
+wrong.
+It cannot view or fix the ACL permissions unless it has the `ADMIN` permission,
+though that situation can at least be detected. Given the RM must have `DELETE`
+permissions further up the stack, it would be in a position to delete the
+errant part of the tree, though this could be a destructive overreaction.
+
+## Further Reading
+
+* [Zookeeper and 
SASL](https://cwiki.apache.org/confluence/display/ZOOKEEPER/Zookeeper+and+SASL)
+* [Up and Running with Secure 
Zookeeper](https://github.com/ekoontz/zookeeper/wiki)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e2a9fa84/hadoop-common-project/hadoop-common/src/site/markdown/registry/using-the-hadoop-service-registry.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/registry/using-the-hadoop-service-registry.md b/hadoop-common-project/hadoop-common/src/site/markdown/registry/using-the-hadoop-service-registry.md
new file mode 100644
index 0000000..e467eb1
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/registry/using-the-hadoop-service-registry.md
@@ -0,0 +1,273 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+# Using the Hadoop Service Registry
+
+The Hadoop service registry can be used in a number of ways:-
+
+1. To register dynamic YARN-deployed applications with entries that match the
+   lifespan of the YARN application.
+   Service Records can be set to be deleted on
+   the completion of the YARN application, the application attempt,
+   or an individual container.
+1. To look up static or dynamic applications and the mechanisms to communicate
+   with them.
+   Those mechanisms can include: HTTP(S) URLs, Zookeeper paths,
+   hostnames and ports, and even paths in a Hadoop filesystem to
+   configuration data.
+1. On a secure cluster, to verify that a service binding has been published
+   by a specific user, or a system account.
+   This can be done simply by looking at the path under which an entry has
+   been placed.
+1. To register static applications.
+   These will remain in the registry until deleted.
+   They can be updated as required.
+
+A user of the registry may be both a publisher of entries —Service Records—
+and a consumer of other services located via their service records.
+Different parts of a distributed application may also use it for different
+purposes. As an example, the Application Master of a YARN application
+can publish bindings for use by its worker containers. The code running in the
+containers can then look up the bindings to communicate with that manager even
+if it is restarted on a different node in the cluster. Client applications can
+look up external service endpoints to interact with the AM via a public API.
+
+The registry cannot be used:-
+
+* To subscribe to service records or registry paths and listen for changes.
+* To share arbitrary data directly from a server with its clients.
+  Such data must be published by some other means, a means whose location
+  the registry entry can publish.
+* To share secrets between processes. The registry is world readable.
+
+
+## Registry Application Design Patterns
+
+
+### Short-lived YARN Application Masters registering their public service endpoints
+
+1. A YARN application is deployed. In a secure cluster, it is given the
+   kerberos token to write to the registry.
+   token to write to the registry.
+2. When launched, it creates a service record at a known path.
+3. This record MAY have the application attempt persistence policy and an ID
+   of the application attempt:
+
+               yarn:persistence = "application_attempt"
+               yarn:id = ${application_attemptId}
+
+        This means that the record will be deleted when the application attempt
+        completes, even if a new attempt is created. Every application attempt
+        will have to re-register the endpoint, which may be needed to locate
+        the service anyway.
+4. Alternatively, the record MAY have the persistence policy of "application":
+
+               yarn:persistence = "application"
+               yarn:id = ${applicationId}
+
+       This means that the record will persist even between application
+       attempts, albeit with out-of-date endpoint information.
+5. Client applications look up the service by way of the path.
+
+The choice of path is an application-specific one.
+For services with a YARN application name guaranteed to be unique,
+we recommend a convention of:
+
+       /users/${username}/applications/${service-class}/${instance-name}
+
+Alternatively, the application Id can be used in the path:
+
+       /users/${username}/applications/${service-class}/${applicationId}
+
+The latter makes mapping a YARN application listing entry to a service record
+trivial.
+
+Client applications may locate the service:
+
+* By enumerating all instances of a service class and selecting one by
+  specific criteria.
+* From a supplied service class and instance name.
+* If listed by application ID, from the service class and application ID.
+
+After locating a service record, the client can enumerate the `external`
+bindings and locate the entry with the desired API.
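+
+As a minimal sketch, a client might locate such an endpoint as follows, using
+only the `resolve()` operation and the `external` endpoint list shown
+elsewhere in this module; the path and API name passed in are illustrative:
+
+    import java.io.IOException;
+    import org.apache.hadoop.conf.Configuration;
+    import org.apache.hadoop.registry.client.api.RegistryOperations;
+    import org.apache.hadoop.registry.client.api.RegistryOperationsFactory;
+    import org.apache.hadoop.registry.client.types.Endpoint;
+    import org.apache.hadoop.registry.client.types.ServiceRecord;
+
+    public class LocateEndpoint {
+      // Resolve a service record and return the external endpoint
+      // exporting the requested API, or null if none matches.
+      public static Endpoint locate(String path, String api)
+          throws IOException {
+        RegistryOperations registry =
+            RegistryOperationsFactory.createInstance(new Configuration());
+        registry.start();
+        try {
+          ServiceRecord record = registry.resolve(path);
+          for (Endpoint endpoint : record.external) {
+            if (api.equals(endpoint.api)) {
+              return endpoint;
+            }
+          }
+          return null;  // no endpoint with the desired API
+        } finally {
+          registry.stop();
+        }
+      }
+    }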
+
+
+### YARN Containers registering their public service endpoints
+
+Here all containers in a YARN application are publishing service endpoints
+for public consumption.
+
+1. The deployed containers are passed the base path under which they should
+   register themselves.
+2. Long-lived containers must be passed an `id:password` pair which gives
+   them the right to update these entries without the kerberos credentials of
+   the user. This allows the containers to update their entries even after
+   the user tokens granting the AM write access to a registry path expire.
+3. The containers instantiate a registry operations instance with the
+   `id:password` pair.
+4. They then register a service record (sketched after this list) at a path
+   consisting of:
+
+               ${base-path} + "/" + RegistryPathUtils.encodeYarnID(containerId)
+
+       This record should have the container persistence policy and the ID
+       of the container:
+
+               yarn:persistence = "container"
+               yarn:id = containerId
+
+       When the container is terminated, the entry will be automatically
+       deleted.
+
+5. The exported service endpoints of this container-deployed service should
+   be listed in the `external` endpoint list of the service record.
+6. Clients can enumerate all containers exported by a YARN application by
+   listing the entries under `${base-path}`.
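+
+A sketch of steps 4 and 5, assuming the `ServiceRecord.set()` attribute setter
+and using the `restEndpoint()` helper from `RegistryTypeUtils`; the registry
+instance is taken from step 3, and the API name `"web"` and the URI are
+illustrative:
+
+    import static org.apache.hadoop.registry.client.binding.RegistryTypeUtils.restEndpoint;
+
+    import java.io.IOException;
+    import java.net.URI;
+    import org.apache.hadoop.registry.client.api.BindFlags;
+    import org.apache.hadoop.registry.client.api.RegistryOperations;
+    import org.apache.hadoop.registry.client.binding.RegistryPathUtils;
+    import org.apache.hadoop.registry.client.types.ServiceRecord;
+
+    public class ContainerRegistration {
+      // registry: an instance already started with the id:password pair
+      public static void register(RegistryOperations registry, String basePath,
+          String containerId, URI uri) throws IOException {
+        ServiceRecord record = new ServiceRecord();
+        // container persistence: the entry is deleted when the
+        // container terminates
+        record.set("yarn:persistence", "container");
+        record.set("yarn:id", containerId);
+        record.addExternalEndpoint(restEndpoint("web", uri));
+        String path = basePath + "/"
+            + RegistryPathUtils.encodeYarnID(containerId);
+        registry.bind(path, record, BindFlags.OVERWRITE);
+      }
+    }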
+
+
+### Registering static cluster services
+
+Services which are generally fixed in a cluster, but which need to publish
+binding and configuration information, may be published in the registry.
+Example: an Apache Oozie service.
+Services external to the cluster with which deployed applications communicate
+may also be published. Example: an Amazon Dynamo instance.
+
+
+These services can be registered under paths which belong to the users
+running the service, such as `/users/oozie` or `/users/hbase`.
+Client applications would use this path.
+While this can authenticate the validity of the service record,
+it does rely on the client applications knowing the username under which a
+service is deployed, or being configured with the full path.
+
+The alternative is for the services to be deployed under a static services
+path, under `/services`. For example, `/services/oozie` could contain
+the registration of the Oozie service.
+As the permissions for this path are restricted to pre-configured
+system accounts, the presence of a service registration on this path on a
+secure cluster confirms that it was registered by the cluster administration
+tools.
+
+1. The service is deployed by some management tool, or directly by
+   the cluster operator.
+2. The deployed application can register itself under its own user name
+   if given the binding information for the registry.
+3. If the application is to be registered under `/services` and it has been
+   deployed by one of the system user accounts, it may register itself
+   directly, as sketched after this list.
+4. If the application does not have the permissions to do so, the cluster
+   administration tools must register the service instead.
+5. Client applications may locate a service by resolving its well
+   known/configured path.
+6. If a service is stopped, the administration tools may delete the entry,
+   or retain the entry but delete all its service endpoints.
+   This is a proposed convention to indicate
+   "the service is known but not currently reachable".
+7. When a service is restarted, its binding information may be updated,
+   or its entire registry entry recreated.
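+
+A sketch of step 3, where a system account registers an Oozie instance under
+`/services`. The endpoint URL and the API name `"ui"` are illustrative, and
+the second argument to `mknode()` is assumed to create missing parent paths:
+
+    import static org.apache.hadoop.registry.client.binding.RegistryTypeUtils.webEndpoint;
+
+    import java.net.URI;
+    import org.apache.hadoop.conf.Configuration;
+    import org.apache.hadoop.registry.client.api.BindFlags;
+    import org.apache.hadoop.registry.client.api.RegistryOperations;
+    import org.apache.hadoop.registry.client.api.RegistryOperationsFactory;
+    import org.apache.hadoop.registry.client.types.ServiceRecord;
+
+    public class RegisterOozie {
+      public static void main(String[] args) throws Exception {
+        RegistryOperations registry =
+            RegistryOperationsFactory.createInstance(new Configuration());
+        registry.start();
+        try {
+          registry.mknode("/services/oozie", true); // create parents if absent
+          ServiceRecord record = new ServiceRecord();
+          // hypothetical Oozie UI endpoint
+          record.addExternalEndpoint(webEndpoint("ui",
+              new URI("http://oozie.example.org:11000/oozie")));
+          registry.bind("/services/oozie", record, BindFlags.OVERWRITE);
+        } finally {
+          registry.stop();
+        }
+      }
+    }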
+
+
+### YARN containers locating their Application Master
+
+Here YARN containers register with their AM to receive work, usually by some
+heartbeat mechanism where they report in regularly.
+If the AM is configured for containers to outlive the application attempt,
+when an AM fails the containers keep running.
+These containers will need to bind to any restarted AM.
+They may also wish to conclude that if an AM does not restart,
+they should eventually time out and terminate themselves.
+Such a policy helps the application react to network partitions.
+
+1. The YARN AM publishes its service endpoints such as the FQDN and
+   socket port needed for IPC communications, or an HTTP/HTTPS URL needed
+   for a REST channel.
+   These are published in the `internal` endpoint list, with the
+   `api` field set to a URL of the specific API the containers use.
+1. The YARN containers are launched with the path to the service record
+   (somehow) passed to them.
+   Environment variables or command line parameters are two viable mechanisms.
+   Shared secrets should also be passed that way: command line parameters are
+   visible in the unix `ps` command.
+   More secure is saving shared secrets to the cluster filesystem,
+   passing down the path to the containers. The URI to such as path MAY be one
+   of the registered internal endpoints of the application.
+1. The YARN containers look up the service registry to identify the
+   communications binding.
+1. If the registered service entry cannot be found, the container MAY either
+   exit, or spin with some (jittered) retry period, polling for the entry
+   until it reappears, which implies that the AM has restarted
+   (see the retry sketch after this list).
+1. If the service entry is found, the client should attempt to communicate
+   with the AM on its channel.
+   Shared authentication details may be used to validate the client with the
+   server and vice versa.
+1. The client reports in to the AM until the connections start failing to
+   connect or authenticate, or until a long-lived connection is broken
+   and cannot be restarted.
+1. At this point the client may revert to step (3).
+   Again, some backoff policy with some jitter helps stop a
+   newly-restarted AM being overloaded.
+   Containers may also wish to have some timeout after which they conclude
+   that the AM is not coming back and exit.
+1. We recommend that alongside the functional commands that an AM may
+   issue to a client, a "terminate" command can be issued to a container.
+   This allows the system to handle the specific situation of the
+   YARN Node Manager terminating while spawned containers keep running.
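+
+A sketch of the lookup-with-retry behaviour of steps 3 and 4, with jittered
+backoff; the retry interval and deadline policy are illustrative choices:
+
+    import java.io.IOException;
+    import java.util.Random;
+    import org.apache.hadoop.fs.PathNotFoundException;
+    import org.apache.hadoop.registry.client.api.RegistryOperations;
+    import org.apache.hadoop.registry.client.types.ServiceRecord;
+
+    public class AMFinder {
+      // Poll for the AM's record; null means the AM never came back.
+      public static ServiceRecord awaitAM(RegistryOperations registry,
+          String path, long timeoutMillis)
+          throws IOException, InterruptedException {
+        long deadline = System.currentTimeMillis() + timeoutMillis;
+        Random random = new Random();
+        while (System.currentTimeMillis() < deadline) {
+          try {
+            return registry.resolve(path);
+          } catch (PathNotFoundException e) {
+            // AM not (re)registered yet: sleep with jitter so a
+            // newly-restarted AM is not hit by all containers at once
+            Thread.sleep(5000 + random.nextInt(5000));
+          }
+        }
+        return null; // caller should conclude the AM is gone and exit
+      }
+    }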
+
+### YARN Applications and containers publishing their management and metrics bindings
+
+Management ports and bindings are simply other endpoints to publish.
+These should be published as *internal* endpoints, as they are not
+intended for public consumption.
+
+### Client application enumerating services by endpoint APIs
+
+A client application wishes to locate all services implementing a specific API,
+such as `"classpath://org.apache.hbase"`.
+
+1. The client starts from a path in the registry.
+1. The client calls `registryOperations.list(path)` to list all nodes directly
+   under that path, getting a relative list of child nodes.
+1. The client enumerates the child record statuses by calling `stat()`
+   on each child.
+1. For all status entries, if the size of the entry is greater than the
+   value of `ServiceRecordHeader.getLength()`, it MAY contain a service record.
+1. The contents can be retrieved using the `resolve()` operation.
+   If successful, it does contain a service record, so the client can
+   enumerate the `external` endpoints and locate the one with the desired API.
+1. The `children` field of each `RegistryPathStatus` status entry should
+   be examined. If it is greater than 0, the enumeration should be performed
+   recursively on the path of that entry.
+1. The operation ultimately completes with a list of all entries.
+1. One of the enumerated endpoints may be selected and used as the binding
+   information for a service.
+
+This algorithm describes a depth-first search of the registry tree.
+Variations are of course possible, including breadth-first search,
+or immediately halting the search as soon as a single entry is found.
+There is also the option of parallel searches of different subtrees;
+this may reduce search time, albeit at the price of a higher client
+load on the registry infrastructure.
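+
+A minimal sketch of the depth-first variant, assuming
+`RegistryPathUtils.join()` for path concatenation; in place of the
+`ServiceRecordHeader` length test described above, it simply attempts
+`resolve()` on any non-empty entry and skips those that fail:
+
+    import java.io.IOException;
+    import java.util.List;
+    import org.apache.hadoop.registry.client.api.RegistryOperations;
+    import org.apache.hadoop.registry.client.binding.RegistryPathUtils;
+    import org.apache.hadoop.registry.client.types.RegistryPathStatus;
+    import org.apache.hadoop.registry.client.types.ServiceRecord;
+
+    public class RegistryScanner {
+      // Depth-first scan: collect every resolvable record under `path`.
+      public static void scan(RegistryOperations registry, String path,
+          List<ServiceRecord> results) throws IOException {
+        for (String child : registry.list(path)) {
+          String childPath = RegistryPathUtils.join(path, child);
+          RegistryPathStatus status = registry.stat(childPath);
+          if (status.size > 0) {
+            try {
+              results.add(registry.resolve(childPath));
+            } catch (IOException notARecord) {
+              // entry exists but is not a service record: keep scanning
+            }
+          }
+          if (status.children > 0) {
+            scan(registry, childPath, results); // recurse into the subtree
+          }
+        }
+      }
+    }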
+
+A utility class `RegistryUtils` provides static utility methods for
+common registry operations; in particular,
+`RegistryUtils.listServiceRecords(registryOperations, path)`
+performs the listing and collection of all immediate child record entries of
+a specified path.
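+
+A usage sketch, assuming `RegistryUtils` (in
+`org.apache.hadoop.registry.client.binding`) returns a map keyed by child
+path; the path shown is illustrative:
+
+    Map<String, ServiceRecord> records =
+        RegistryUtils.listServiceRecords(registryOperations,
+            "/users/example/services/org-apache-hbase");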
+
+Client applications are left with the problem of "what to do when the endpoint
+is not valid", specifically, when a service is not running —what should be 
done?
+
+Some transports assume that the outage is transient, and that spinning retries
+against the original binding is the correct strategy. This is the default
+policy of the Hadoop IPC client.
+
+Other transports fail fast, immediately reporting the failure via an
+exception or other mechanism. This is directly visible to the client, but
+does allow the client to rescan the registry and rebind to the application.
+
+Finally, some applications have been designed for dynamic failover from the
+outset: their published binding information is actually a zookeeper path.
+Apache HBase and Apache Accumulo are examples of this. The registry is used
+for the initial lookup of the binding, after which the clients are inherently
+resilient to failure.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e2a9fa84/hadoop-common-project/hadoop-registry/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-registry/pom.xml b/hadoop-common-project/hadoop-registry/pom.xml
new file mode 100644
index 0000000..7496db4
--- /dev/null
+++ b/hadoop-common-project/hadoop-registry/pom.xml
@@ -0,0 +1,298 @@
+<?xml version="1.0"?>
+<!--
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0";
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
+  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
+                      http://maven.apache.org/xsd/maven-4.0.0.xsd";>
+  <parent>
+    <artifactId>hadoop-project</artifactId>
+    <groupId>org.apache.hadoop</groupId>
+    <version>3.3.0-SNAPSHOT</version>
+  </parent>
+  <modelVersion>4.0.0</modelVersion>
+  <artifactId>hadoop-registry</artifactId>
+  <version>3.3.0-SNAPSHOT</version>
+  <name>Apache Hadoop Registry</name>
+
+  <dependencies>
+
+    <dependency>
+      <groupId>org.slf4j</groupId>
+      <artifactId>slf4j-api</artifactId>
+    </dependency>
+
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-auth</artifactId>
+    </dependency>
+
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-annotations</artifactId>
+    </dependency>
+
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-common</artifactId>
+    </dependency>
+
+    <!-- needed for TimedOutTestsListener -->
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-common</artifactId>
+      <type>test-jar</type>
+      <scope>test</scope>
+    </dependency>
+
+    <!-- Mini KDC is used for testing -->
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-minikdc</artifactId>
+      <scope>test</scope>
+    </dependency>
+
+    <dependency>
+      <groupId>junit</groupId>
+      <artifactId>junit</artifactId>
+      <scope>test</scope>
+    </dependency>
+
+    <dependency>
+      <groupId>org.apache.zookeeper</groupId>
+      <artifactId>zookeeper</artifactId>
+    </dependency>
+
+    <dependency>
+      <groupId>org.apache.curator</groupId>
+      <artifactId>curator-client</artifactId>
+    </dependency>
+
+    <dependency>
+      <groupId>org.apache.curator</groupId>
+      <artifactId>curator-framework</artifactId>
+    </dependency>
+
+    <dependency>
+      <groupId>org.apache.curator</groupId>
+      <artifactId>curator-recipes</artifactId>
+    </dependency>
+
+    <dependency>
+      <groupId>commons-cli</groupId>
+      <artifactId>commons-cli</artifactId>
+    </dependency>
+
+    <dependency>
+      <groupId>commons-daemon</groupId>
+      <artifactId>commons-daemon</artifactId>
+    </dependency>
+
+    <dependency>
+      <groupId>commons-io</groupId>
+      <artifactId>commons-io</artifactId>
+    </dependency>
+
+    <dependency>
+      <groupId>commons-net</groupId>
+      <artifactId>commons-net</artifactId>
+    </dependency>
+
+    <dependency>
+      <groupId>com.fasterxml.jackson.core</groupId>
+      <artifactId>jackson-annotations</artifactId>
+    </dependency>
+
+    <dependency>
+      <groupId>com.fasterxml.jackson.core</groupId>
+      <artifactId>jackson-core</artifactId>
+    </dependency>
+
+    <dependency>
+      <groupId>com.fasterxml.jackson.core</groupId>
+      <artifactId>jackson-databind</artifactId>
+    </dependency>
+
+    <dependency>
+      <groupId>com.google.guava</groupId>
+      <artifactId>guava</artifactId>
+    </dependency>
+
+    <dependency>
+      <groupId>dnsjava</groupId>
+      <artifactId>dnsjava</artifactId>
+    </dependency>
+
+  </dependencies>
+
+  <build>
+    <!--
+    Include all files in src/main/resources.  By default, do not apply property
+    substitution (filtering=false), but do apply property substitution to
+    yarn-version-info.properties (filtering=true).  This will substitute the
+    version information correctly, but prevent Maven from altering other files
+    like yarn-default.xml.
+    -->
+    <resources>
+      <resource>
+        <directory>${basedir}/src/main/resources</directory>
+        <excludes>
+          <exclude>yarn-version-info.properties</exclude>
+        </excludes>
+        <filtering>false</filtering>
+      </resource>
+      <resource>
+        <directory>${basedir}/src/main/resources</directory>
+        <includes>
+          <include>yarn-version-info.properties</include>
+        </includes>
+        <filtering>true</filtering>
+      </resource>
+    </resources>
+    <plugins>
+      <plugin>
+        <groupId>org.apache.rat</groupId>
+        <artifactId>apache-rat-plugin</artifactId>
+        <configuration>
+          <excludes>
+            <exclude>src/main/resources/.keep</exclude>
+          </excludes>
+        </configuration>
+      </plugin>
+      <plugin>
+        <groupId>org.apache.hadoop</groupId>
+        <artifactId>hadoop-maven-plugins</artifactId>
+        <executions>
+          <execution>
+            <id>version-info</id>
+            <phase>generate-resources</phase>
+            <goals>
+              <goal>version-info</goal>
+            </goals>
+            <configuration>
+              <source>
+                <directory>${basedir}/src/main</directory>
+                <includes>
+                  <include>java/**/*.java</include>
+                  <!--
+                  <include>proto/**/*.proto</include>
+                    -->
+                </includes>
+              </source>
+            </configuration>
+          </execution>
+        </executions>
+      </plugin>
+      <plugin>
+        <artifactId>maven-jar-plugin</artifactId>
+        <executions>
+          <execution>
+            <goals>
+              <goal>test-jar</goal>
+            </goals>
+            <phase>test-compile</phase>
+          </execution>
+        </executions>
+      </plugin>
+
+      <plugin>
+      <groupId>org.apache.maven.plugins</groupId>
+      <artifactId>maven-surefire-plugin</artifactId>
+      <configuration>
+        <reuseForks>false</reuseForks>
+        <forkedProcessTimeoutInSeconds>900</forkedProcessTimeoutInSeconds>
+        <argLine>-Xmx1024m -XX:+HeapDumpOnOutOfMemoryError</argLine>
+        <environmentVariables>
+          <!-- HADOOP_HOME required for tests on Windows to find winutils -->
+          <HADOOP_HOME>${hadoop.common.build.dir}</HADOOP_HOME>
+          <!-- configurable option to turn JAAS debugging on during test runs -->
+          <HADOOP_JAAS_DEBUG>true</HADOOP_JAAS_DEBUG>
+          <LD_LIBRARY_PATH>${env.LD_LIBRARY_PATH}:${project.build.directory}/native/target/usr/local/lib:${hadoop.common.build.dir}/native/target/usr/local/lib</LD_LIBRARY_PATH>
+          <MALLOC_ARENA_MAX>4</MALLOC_ARENA_MAX>
+        </environmentVariables>
+        <systemPropertyVariables>
+
+          <hadoop.log.dir>${project.build.directory}/log</hadoop.log.dir>
+          <hadoop.tmp.dir>${project.build.directory}/tmp</hadoop.tmp.dir>
+
+          <!-- TODO: all references in testcases should be updated to this default -->
+          <test.build.dir>${test.build.dir}</test.build.dir>
+          <test.build.data>${test.build.data}</test.build.data>
+          <test.build.webapps>${test.build.webapps}</test.build.webapps>
+          <test.cache.data>${test.cache.data}</test.cache.data>
+          <test.build.classes>${test.build.classes}</test.build.classes>
+
+          <java.net.preferIPv4Stack>true</java.net.preferIPv4Stack>
+          <java.security.krb5.conf>${project.build.directory}/test-classes/krb5.conf</java.security.krb5.conf>
+          <java.security.egd>${java.security.egd}</java.security.egd>
+          <require.test.libhadoop>${require.test.libhadoop}</require.test.libhadoop>
+        </systemPropertyVariables>
+        <includes>
+          <include>**/Test*.java</include>
+        </includes>
+        <excludes>
+          <exclude>**/${test.exclude}.java</exclude>
+          <exclude>${test.exclude.pattern}</exclude>
+          <exclude>**/Test*$*.java</exclude>
+        </excludes>
+      </configuration>
+    </plugin>
+
+
+    </plugins>
+  </build>
+
+  <profiles>
+    <profile>
+      <id>dist</id>
+      <activation>
+        <activeByDefault>false</activeByDefault>
+      </activation>
+      <build>
+        <plugins>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-assembly-plugin</artifactId>
+            <dependencies>
+              <dependency>
+                <groupId>org.apache.hadoop</groupId>
+                <artifactId>hadoop-assemblies</artifactId>
+                <version>${project.version}</version>
+              </dependency>
+            </dependencies>
+            <executions>
+              <execution>
+                <id>dist</id>
+                <phase>package</phase>
+                <goals>
+                  <goal>single</goal>
+                </goals>
+                <configuration>
+                  <finalName>${project.artifactId}-${project.version}
+                  </finalName>
+                  <appendAssemblyId>false</appendAssemblyId>
+                  <attach>false</attach>
+                  <descriptors>
+                    <descriptor>../../hadoop-assemblies/src/main/resources/assemblies/hadoop-registry-dist.xml</descriptor>
+                  </descriptors>
+                </configuration>
+              </execution>
+            </executions>
+          </plugin>
+        </plugins>
+      </build>
+    </profile>
+  </profiles>
+
+</project>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e2a9fa84/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/cli/RegistryCli.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/cli/RegistryCli.java b/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/cli/RegistryCli.java
new file mode 100644
index 0000000..480ce0e
--- /dev/null
+++ b/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/cli/RegistryCli.java
@@ -0,0 +1,497 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.registry.cli;
+
+import static org.apache.hadoop.registry.client.binding.RegistryTypeUtils.*;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.io.PrintStream;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.List;
+import java.util.Map;
+
+import com.google.common.base.Preconditions;
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.GnuParser;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.OptionBuilder;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.ParseException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.PathNotFoundException;
+import org.apache.hadoop.security.AccessControlException;
+import org.apache.hadoop.service.ServiceOperations;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.hadoop.registry.client.api.BindFlags;
+import org.apache.hadoop.registry.client.api.RegistryOperations;
+import org.apache.hadoop.registry.client.api.RegistryOperationsFactory;
+import org.apache.hadoop.registry.client.exceptions.AuthenticationFailedException;
+import org.apache.hadoop.registry.client.exceptions.InvalidPathnameException;
+import org.apache.hadoop.registry.client.exceptions.InvalidRecordException;
+import org.apache.hadoop.registry.client.exceptions.NoPathPermissionsException;
+import org.apache.hadoop.registry.client.exceptions.NoRecordException;
+import org.apache.hadoop.registry.client.types.Endpoint;
+import org.apache.hadoop.registry.client.types.ProtocolTypes;
+import org.apache.hadoop.registry.client.types.ServiceRecord;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Command line for registry operations.
+ */
+public class RegistryCli extends Configured implements Tool, Closeable {
+  private static final Logger LOG =
+      LoggerFactory.getLogger(RegistryCli.class);
+  protected final PrintStream sysout;
+  protected final PrintStream syserr;
+
+
+  private RegistryOperations registry;
+
+  private static final String LS_USAGE = "ls pathName";
+  private static final String RESOLVE_USAGE = "resolve pathName";
+  private static final String BIND_USAGE =
+      "bind -inet  -api apiName -p portNumber -h hostName  pathName" + "\n"
+      + "bind -webui uriString -api apiName  pathName" + "\n"
+      + "bind -rest uriString -api apiName  pathName";
+  private static final String MKNODE_USAGE = "mknode directoryName";
+  private static final String RM_USAGE = "rm pathName";
+  private static final String USAGE =
+      "\n" + LS_USAGE + "\n" + RESOLVE_USAGE + "\n" + BIND_USAGE + "\n" +
+      MKNODE_USAGE + "\n" + RM_USAGE;
+
+
+  public RegistryCli(PrintStream sysout, PrintStream syserr) {
+    Configuration conf = new Configuration();
+    super.setConf(conf);
+    registry = RegistryOperationsFactory.createInstance(conf);
+    registry.start();
+    this.sysout = sysout;
+    this.syserr = syserr;
+  }
+
+  public RegistryCli(RegistryOperations reg,
+      Configuration conf,
+      PrintStream sysout,
+      PrintStream syserr) {
+    super(conf);
+    Preconditions.checkArgument(reg != null, "Null registry");
+    registry = reg;
+    this.sysout = sysout;
+    this.syserr = syserr;
+  }
+
+  @SuppressWarnings("UseOfSystemOutOrSystemErr")
+  public static void main(String[] args) throws Exception {
+    int res = -1;
+    try (RegistryCli cli = new RegistryCli(System.out, System.err)) {
+      res = ToolRunner.run(cli, args);
+    } catch (Exception e) {
+      ExitUtil.terminate(res, e);
+    }
+    ExitUtil.terminate(res);
+  }
+
+  /**
+   * Close the object by stopping the registry.
+   * <p>
+   * <i>Important:</i>
+   * <p>
+   *   After this call is made, no operations may be made of this
+   *   object, <i>or of a YARN registry instance used when constructing
+   *   this object. </i>
+   * @throws IOException
+   */
+  @Override
+  public void close() throws IOException {
+    ServiceOperations.stopQuietly(registry);
+    registry = null;
+  }
+
+  private int usageError(String err, String usage) {
+    syserr.println("Error: " + err);
+    syserr.println("Usage: " + usage);
+    return -1;
+  }
+
+  private boolean validatePath(String path) {
+    if (!path.startsWith("/")) {
+      syserr.println("Path must start with /; given path was: " + path);
+      return false;
+    }
+    return true;
+  }
+
+  @Override
+  public int run(String[] args) throws Exception {
+    Preconditions.checkArgument(getConf() != null, "null configuration");
+    if (args.length > 0) {
+      switch (args[0]) {
+        case "ls":
+          return ls(args);
+        case "resolve":
+          return resolve(args);
+        case "bind":
+          return bind(args);
+        case "mknode":
+          return mknode(args);
+        case "rm":
+          return rm(args);
+        default:
+          return usageError("Invalid command: " + args[0], USAGE);
+      }
+    }
+    return usageError("No command arg passed.", USAGE);
+  }
+
+  @SuppressWarnings("unchecked")
+  public int ls(String[] args) {
+
+    Options lsOption = new Options();
+    CommandLineParser parser = new GnuParser();
+    try {
+      CommandLine line = parser.parse(lsOption, args);
+
+      List<String> argsList = line.getArgList();
+      if (argsList.size() != 2) {
+        return usageError("ls requires exactly one path argument", LS_USAGE);
+      }
+      if (!validatePath(argsList.get(1))) {
+        return -1;
+      }
+
+      try {
+        List<String> children = registry.list(argsList.get(1));
+        for (String child : children) {
+          sysout.println(child);
+        }
+        return 0;
+
+      } catch (Exception e) {
+        syserr.println(analyzeException("ls", e, argsList));
+      }
+      return -1;
+    } catch (ParseException exp) {
+      return usageError("Invalid syntax " + exp, LS_USAGE);
+    }
+  }
+
+  @SuppressWarnings("unchecked")
+  public int resolve(String[] args) {
+    Options resolveOption = new Options();
+    CommandLineParser parser = new GnuParser();
+    try {
+      CommandLine line = parser.parse(resolveOption, args);
+
+      List<String> argsList = line.getArgList();
+      if (argsList.size() != 2) {
+        return usageError("resolve requires exactly one path argument",
+            RESOLVE_USAGE);
+      }
+      if (!validatePath(argsList.get(1))) {
+        return -1;
+      }
+
+      try {
+        ServiceRecord record = registry.resolve(argsList.get(1));
+
+        for (Endpoint endpoint : record.external) {
+          sysout.println(" Endpoint(ProtocolType="
+                         + endpoint.protocolType + ", Api="
+                         + endpoint.api + ");"
+                         + " Addresses(AddressType="
+                         + endpoint.addressType + ") are: ");
+
+          for (Map<String, String> address : endpoint.addresses) {
+            sysout.println("[ ");
+            for (Map.Entry<String, String> entry : address.entrySet()) {
+              sysout.print("\t" + entry.getKey()
+                             + ":" + entry.getValue());
+            }
+
+            sysout.println("\n]");
+          }
+          sysout.println();
+        }
+        return 0;
+      } catch (Exception e) {
+        syserr.println(analyzeException("resolve", e, argsList));
+      }
+      return -1;
+    } catch (ParseException exp) {
+      return usageError("Invalid syntax " + exp, RESOLVE_USAGE);
+    }
+
+  }
+
+  public int bind(String[] args) {
+    Option rest = OptionBuilder.withArgName("rest")
+                               .hasArg()
+                               .withDescription("rest Option")
+                               .create("rest");
+    Option webui = OptionBuilder.withArgName("webui")
+                                .hasArg()
+                                .withDescription("webui Option")
+                                .create("webui");
+    Option inet = OptionBuilder.withArgName("inet")
+                               .withDescription("inet Option")
+                               .create("inet");
+    Option port = OptionBuilder.withArgName("port")
+                               .hasArg()
+                               .withDescription("port to listen on [9999]")
+                               .create("p");
+    Option host = OptionBuilder.withArgName("host")
+                               .hasArg()
+                               .withDescription("host name")
+                               .create("h");
+    Option apiOpt = OptionBuilder.withArgName("api")
+                                 .hasArg()
+                                 .withDescription("api")
+                                 .create("api");
+    Options inetOption = new Options();
+    inetOption.addOption(inet);
+    inetOption.addOption(port);
+    inetOption.addOption(host);
+    inetOption.addOption(apiOpt);
+
+    Options webuiOpt = new Options();
+    webuiOpt.addOption(webui);
+    webuiOpt.addOption(apiOpt);
+
+    Options restOpt = new Options();
+    restOpt.addOption(rest);
+    restOpt.addOption(apiOpt);
+
+
+    CommandLineParser parser = new GnuParser();
+    ServiceRecord sr = new ServiceRecord();
+    CommandLine line;
+    if (args.length <= 1) {
+      return usageError("Invalid syntax ", BIND_USAGE);
+    }
+    if (args[1].equals("-inet")) {
+      int portNum;
+      String hostName;
+      String api;
+
+      try {
+        line = parser.parse(inetOption, args);
+      } catch (ParseException exp) {
+        return usageError("Invalid syntax " + exp.getMessage(), BIND_USAGE);
+      }
+      if (line.hasOption("inet") && line.hasOption("p") &&
+          line.hasOption("h") && line.hasOption("api")) {
+        try {
+          portNum = Integer.parseInt(line.getOptionValue("p"));
+        } catch (NumberFormatException exp) {
+          return usageError("Invalid Port - int required" + exp.getMessage(),
+              BIND_USAGE);
+        }
+        hostName = line.getOptionValue("h");
+        api = line.getOptionValue("api");
+        sr.addExternalEndpoint(
+            inetAddrEndpoint(api, ProtocolTypes.PROTOCOL_HADOOP_IPC, hostName,
+                portNum));
+
+      } else {
+        return usageError("Missing options: must have host, port and api",
+            BIND_USAGE);
+      }
+
+    } else if (args[1].equals("-webui")) {
+      try {
+        line = parser.parse(webuiOpt, args);
+      } catch (ParseException exp) {
+        return usageError("Invalid syntax " + exp.getMessage(), BIND_USAGE);
+      }
+      if (line.hasOption("webui") && line.hasOption("api")) {
+        URI theUri;
+        try {
+          theUri = new URI(line.getOptionValue("webui"));
+        } catch (URISyntaxException e) {
+          return usageError("Invalid URI: " + e.getMessage(), BIND_USAGE);
+        }
+        sr.addExternalEndpoint(webEndpoint(line.getOptionValue("api"), theUri));
+
+      } else {
+        return usageError("Missing options: must have value for uri and api",
+            BIND_USAGE);
+      }
+    } else if (args[1].equals("-rest")) {
+      try {
+        line = parser.parse(restOpt, args);
+      } catch (ParseException exp) {
+        return usageError("Invalid syntax " + exp.getMessage(), BIND_USAGE);
+      }
+      if (line.hasOption("rest") && line.hasOption("api")) {
+        URI theUri = null;
+        try {
+          theUri = new URI(line.getOptionValue("rest"));
+        } catch (URISyntaxException e) {
+          return usageError("Invalid URI: " + e.getMessage(), BIND_USAGE);
+        }
+        sr.addExternalEndpoint(
+            restEndpoint(line.getOptionValue("api"), theUri));
+
+      } else {
+        return usageError("Missing options: must have value for uri and api",
+            BIND_USAGE);
+      }
+
+    } else {
+      return usageError("Invalid syntax", BIND_USAGE);
+    }
+    @SuppressWarnings("unchecked")
+    List<String> argsList = line.getArgList();
+    if (argsList.size() != 2) {
+      return usageError("bind requires exactly one path argument", BIND_USAGE);
+    }
+    if (!validatePath(argsList.get(1))) {
+      return -1;
+    }
+
+    try {
+      registry.bind(argsList.get(1), sr, BindFlags.OVERWRITE);
+      return 0;
+    } catch (Exception e) {
+      syserr.println(analyzeException("bind", e, argsList));
+    }
+
+    return -1;
+  }
+
+  @SuppressWarnings("unchecked")
+  public int mknode(String[] args) {
+    Options mknodeOption = new Options();
+    CommandLineParser parser = new GnuParser();
+    try {
+      CommandLine line = parser.parse(mknodeOption, args);
+
+      List<String> argsList = line.getArgList();
+      if (argsList.size() != 2) {
+        return usageError("mknode requires exactly one path argument",
+            MKNODE_USAGE);
+      }
+      if (!validatePath(argsList.get(1))) {
+        return -1;
+      }
+
+      try {
+        registry.mknode(argsList.get(1), false);
+        return 0;
+      } catch (Exception e) {
+        syserr.println(analyzeException("mknode", e, argsList));
+      }
+      return -1;
+    } catch (ParseException exp) {
+      return usageError("Invalid syntax " + exp.toString(), MKNODE_USAGE);
+    }
+  }
+
+
+  @SuppressWarnings("unchecked")
+  public int rm(String[] args) {
+    Option recursive = OptionBuilder.withArgName("recursive")
+                                    .withDescription("delete recursively")
+                                    .create("r");
+
+    Options rmOption = new Options();
+    rmOption.addOption(recursive);
+
+    boolean recursiveOpt = false;
+
+    CommandLineParser parser = new GnuParser();
+    try {
+      CommandLine line = parser.parse(rmOption, args);
+
+      List<String> argsList = line.getArgList();
+      if (argsList.size() != 2) {
+        return usageError("RM requires exactly one path argument", RM_USAGE);
+      }
+      if (!validatePath(argsList.get(1))) {
+        return -1;
+      }
+
+      try {
+        if (line.hasOption("r")) {
+          recursiveOpt = true;
+        }
+
+        registry.delete(argsList.get(1), recursiveOpt);
+        return 0;
+      } catch (Exception e) {
+        syserr.println(analyzeException("rm", e, argsList));
+      }
+      return -1;
+    } catch (ParseException exp) {
+      return usageError("Invalid syntax " + exp.toString(), RM_USAGE);
+    }
+  }
+
+  /**
+   * Given an exception and a possibly empty argument list, generate
+   * a diagnostics string for use in error messages
+   * @param operation the operation that failed
+   * @param e exception
+   * @param argsList arguments list
+   * @return a string intended for the user
+   */
+  String analyzeException(String operation,
+      Exception e,
+      List<String> argsList) {
+
+    // index 0 holds the operation name; the path argument, if any, follows
+    String pathArg = argsList.size() > 1 ? argsList.get(1) : "(none)";
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("Operation {} on path {} failed with exception {}",
+          operation, pathArg, e, e);
+    }
+    if (e instanceof InvalidPathnameException) {
+      return "InvalidPath :" + pathArg + ": " + e;
+    }
+    if (e instanceof PathNotFoundException) {
+      return "Path not found: " + pathArg;
+    }
+    if (e instanceof NoRecordException) {
+      return "No service record at path " + pathArg;
+    }
+    if (e instanceof AuthenticationFailedException) {
+      return "Failed to authenticate to registry : " + e;
+    }
+    if (e instanceof NoPathPermissionsException) {
+      return "No Permission to path: " + pathArg + ": " + e;
+    }
+    if (e instanceof AccessControlException) {
+      return "No Permission to path: " + pathArg + ": " + e;
+    }
+    if (e instanceof InvalidRecordException) {
+      return "Unable to read record at: " + pathArg + ": " + e;
+    }
+    if (e instanceof IOException) {
+      return "IO Exception when accessing path :" + pathArg + ": " + e;
+    }
+    // something else went very wrong here
+    return "Exception " + e;
+
+  }
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e2a9fa84/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/BindFlags.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/BindFlags.java b/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/BindFlags.java
new file mode 100644
index 0000000..5fd2aef
--- /dev/null
+++ b/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/BindFlags.java
@@ -0,0 +1,41 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.registry.client.api;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+/**
+ * Combinable Flags to use when creating a service entry.
+ */
[email protected]
[email protected]
+public interface BindFlags {
+
+  /**
+   * Create the entry. This is just "0" and can be "or"ed with anything.
+   */
+  int CREATE = 0;
+
+  /**
+   * The entry should be created even if an existing entry is there.
+   */
+  int OVERWRITE = 1;
+
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e2a9fa84/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/DNSOperations.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/DNSOperations.java b/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/DNSOperations.java
new file mode 100644
index 0000000..3abfb6c
--- /dev/null
+++ b/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/DNSOperations.java
@@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.registry.client.api;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.registry.client.types.ServiceRecord;
+import org.apache.hadoop.service.Service;
+
+import java.io.IOException;
+
+/**
+ * DNS Operations.
+ */
[email protected]
[email protected]
+public interface DNSOperations extends Service {
+
+  /**
+   * Register a service based on a service record.
+   *
+   * @param path the ZK path.
+   * @param record record providing DNS registration info.
+   * @throws IOException Any other IO Exception.
+   */
+  void register(String path, ServiceRecord record)
+      throws IOException;
+
+
+  /**
+   * Delete a service's registered endpoints.
+   *
+   * If the operation returns without an error then the entry has been
+   * deleted.
+   *
+   * @param path the ZK path.
+   * @param record service record
+   * @throws IOException Any other IO Exception
+   *
+   */
+  void delete(String path, ServiceRecord record)
+      throws IOException;
+
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e2a9fa84/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/DNSOperationsFactory.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/DNSOperationsFactory.java b/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/DNSOperationsFactory.java
new file mode 100644
index 0000000..1a8bb3e
--- /dev/null
+++ b/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/DNSOperationsFactory.java
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.registry.client.api;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.registry.server.dns.RegistryDNS;
+
+/**
+ * A factory for DNS operation service instances.
+ */
+public final class DNSOperationsFactory implements RegistryConstants {
+
+  /**
+   * DNS Implementation type.
+   */
+  public enum DNSImplementation {
+    DNSJAVA
+  }
+
+  private DNSOperationsFactory() {
+  }
+
+  /**
+   * Create and initialize a DNS operations instance.
+   *
+   * @param conf configuration
+   * @return a DNS operations instance
+   */
+  public static DNSOperations createInstance(Configuration conf) {
+    return createInstance("DNSOperations", DNSImplementation.DNSJAVA, conf);
+  }
+
+  /**
+   * Create and initialize a DNS operations instance.
+   * Access rights will be determined from the configuration.
+   *
+   * @param name name of the instance
+   * @param impl the DNS implementation.
+   * @param conf configuration
+   * @return a DNS operations instance
+   */
+  public static DNSOperations createInstance(String name,
+      DNSImplementation impl,
+      Configuration conf) {
+    Preconditions.checkArgument(conf != null, "Null configuration");
+    DNSOperations operations = null;
+    switch (impl) {
+    case DNSJAVA:
+      operations = new RegistryDNS(name);
+      break;
+
+    default:
+      throw new IllegalArgumentException(
+          String.format("%s is not available", impl.toString()));
+    }
+
+    //operations.init(conf);
+    return operations;
+  }
+
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e2a9fa84/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/RegistryConstants.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/RegistryConstants.java b/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/RegistryConstants.java
new file mode 100644
index 0000000..f9c0fd7
--- /dev/null
+++ b/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/RegistryConstants.java
@@ -0,0 +1,388 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.registry.client.api;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+/**
+ * Constants for the registry, including configuration keys and default
+ * values.
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Evolving
+public interface RegistryConstants {
+
+  /**
+   * Prefix for registry configuration options: {@value}.
+   */
+  String REGISTRY_PREFIX = "hadoop.registry.";
+
+  /**
+   * Prefix for zookeeper-specific options: {@value}
+   *  <p>
+   * For clients using other protocols, these options are not supported.
+   */
+  String ZK_PREFIX = REGISTRY_PREFIX + "zk.";
+
+  /**
+   * Prefix for dns-specific options: {@value}
+   *  <p>
+   * For clients using other protocols, these options are not supported.
+   */
+  String DNS_PREFIX = REGISTRY_PREFIX + "dns.";
+
+  /**
+   * Flag to indicate whether or not the registry DNS service
+   * should be enabled: {@value}.
+   */
+  String KEY_DNS_ENABLED = DNS_PREFIX + "enabled";
+
+  /**
+   * Default value for enabling the DNS in the registry: {@value}.
+   */
+  boolean DEFAULT_DNS_ENABLED = false;
+
+  /**
+   * DNS domain name key.
+   */
+  String KEY_DNS_DOMAIN = DNS_PREFIX + "domain-name";
+
+  /**
+   * Max length of a label (node delimited by a dot in the FQDN).
+   */
+  int MAX_FQDN_LABEL_LENGTH = 63;
+
+  /**
+   * DNS bind address.
+   */
+  String KEY_DNS_BIND_ADDRESS = DNS_PREFIX + "bind-address";
+
+  /**
+   * DNS port number key.
+   */
+  String KEY_DNS_PORT = DNS_PREFIX + "bind-port";
+
+  /**
+   * Default DNS port number.
+   */
+  int DEFAULT_DNS_PORT = 5335;
+
+  /**
+   * Flag to indicate whether or not DNSSEC is enabled.
+   */
+  String KEY_DNSSEC_ENABLED = DNS_PREFIX + "dnssec.enabled";
+
+  /**
+   * DNSSEC public key.
+   */
+  String KEY_DNSSEC_PUBLIC_KEY = DNS_PREFIX + "public-key";
+
+  /**
+   * DNSSEC private key file.
+   */
+  String KEY_DNSSEC_PRIVATE_KEY_FILE = DNS_PREFIX + "private-key-file";
+
+  /**
+   * Default DNSSEC private key file path.
+   */
+  String DEFAULT_DNSSEC_PRIVATE_KEY_FILE =
+      "/etc/hadoop/conf/registryDNS.private";
+
+  /**
+   * Zone subnet.
+   */
+  String KEY_DNS_ZONE_SUBNET = DNS_PREFIX + "zone-subnet";
+
+  /**
+   * Zone subnet mask.
+   */
+  String KEY_DNS_ZONE_MASK = DNS_PREFIX + "zone-mask";
+
+  /**
+   * Zone subnet IP min.
+   */
+  String KEY_DNS_ZONE_IP_MIN = DNS_PREFIX + "zone-ip-min";
+
+  /**
+   * Zone subnet IP max.
+   */
+  String KEY_DNS_ZONE_IP_MAX = DNS_PREFIX + "zone-ip-max";
+
+  /**
+   * DNS Record TTL.
+   */
+  String KEY_DNS_TTL = DNS_PREFIX + "dns-ttl";
+
+  /**
+   * Directory containing the zone configuration files.
+   */
+  String KEY_DNS_ZONES_DIR = DNS_PREFIX + "zones-dir";
+
+  /**
+   * Split Reverse Zone.
+   * It may be necessary to split large reverse zone subnets
+   * into multiple zones to handle existing hosts collocated
+   * with containers.
+   */
+  String KEY_DNS_SPLIT_REVERSE_ZONE = DNS_PREFIX + "split-reverse-zone";
+
+  /**
+   * Default value for splitting the reverse zone.
+   */
+  boolean DEFAULT_DNS_SPLIT_REVERSE_ZONE = false;
+
+  /**
+   * Split Reverse Zone IP Range.
+   * How many IPs should be part of each reverse zone split.
+   */
+  String KEY_DNS_SPLIT_REVERSE_ZONE_RANGE = DNS_PREFIX +
+      "split-reverse-zone-range";
+
+  /**
+   * Key to set if the registry is secure: {@value}.
+   * Turning it on changes the permissions policy from "open access"
+   * to kerberos-restricted access, with the option of
+   * a user adding one or more auth key pairs under their
+   * own tree.
+   */
+  String KEY_REGISTRY_SECURE = REGISTRY_PREFIX + "secure";
+
+  /**
+   * Default registry security policy: {@value}.
+   */
+  boolean DEFAULT_REGISTRY_SECURE = false;
+
+  /**
+   * Root path in the ZK tree for the registry: {@value}.
+   */
+  String KEY_REGISTRY_ZK_ROOT = ZK_PREFIX + "root";
+
+  /**
+   * Default root of the Hadoop registry: {@value}.
+   */
+  String DEFAULT_ZK_REGISTRY_ROOT = "/registry";
+
+  /**
+   * Registry client authentication policy.
+   *  <p>
+   * This is only used in secure clusters.
+   *  <p>
+   * If the Factory methods of {@link RegistryOperationsFactory}
+   * are used, this key does not need to be set: it is set
+   * up based on the factory method used.
+   */
+  String KEY_REGISTRY_CLIENT_AUTH =
+      REGISTRY_PREFIX + "client.auth";
+
+  /**
+   * Registry client uses Kerberos: authentication is automatic from
+   * logged in user.
+   */
+  String REGISTRY_CLIENT_AUTH_KERBEROS = "kerberos";
+
+  /**
+   * Username/password is the authentication mechanism.
+   * If set then both {@link #KEY_REGISTRY_CLIENT_AUTHENTICATION_ID}
+   * and {@link #KEY_REGISTRY_CLIENT_AUTHENTICATION_PASSWORD} must be set.
+   */
+  String REGISTRY_CLIENT_AUTH_DIGEST = "digest";
+
+  /**
+   * No authentication; client is anonymous.
+   */
+  String REGISTRY_CLIENT_AUTH_ANONYMOUS = "";
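+
+  /**
+   * Simple authentication: no SASL authentication takes place.
+   */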
+  String REGISTRY_CLIENT_AUTH_SIMPLE = "simple";
+
+  /**
+   * Registry client authentication ID.
+   * <p>
+   * This is only used in secure clusters with
+   * {@link #KEY_REGISTRY_CLIENT_AUTH} set to
+   * {@link #REGISTRY_CLIENT_AUTH_DIGEST}
+   *
+   */
+  String KEY_REGISTRY_CLIENT_AUTHENTICATION_ID =
+      KEY_REGISTRY_CLIENT_AUTH + ".id";
+
+  /**
+   * Registry client authentication password.
+   * <p>
+   * This is only used in secure clusters with the client set to
+   * use digest (not SASL or anonymous) authentication.
+   *  <p>
+   * Specifically, {@link #KEY_REGISTRY_CLIENT_AUTH} set to
+   * {@link #REGISTRY_CLIENT_AUTH_DIGEST}
+   *
+   */
+  String KEY_REGISTRY_CLIENT_AUTHENTICATION_PASSWORD =
+      KEY_REGISTRY_CLIENT_AUTH + ".password";
+
+  /**
+   * List of hostname:port pairs defining the
+   * zookeeper quorum binding for the registry: {@value}.
+   */
+  String KEY_REGISTRY_ZK_QUORUM = ZK_PREFIX + "quorum";
+
+  /**
+   * The default zookeeper quorum binding for the registry: {@value}.
+   */
+  String DEFAULT_REGISTRY_ZK_QUORUM = "localhost:2181";
+
+  /**
+   * Zookeeper session timeout in milliseconds: {@value}.
+   */
+  String KEY_REGISTRY_ZK_SESSION_TIMEOUT =
+      ZK_PREFIX + "session.timeout.ms";
+
+  /**
+   * The default ZK session timeout: {@value}.
+   */
+  int DEFAULT_ZK_SESSION_TIMEOUT = 60000;
+
+  /**
+   * Zookeeper connection timeout in milliseconds: {@value}.
+   */
+  String KEY_REGISTRY_ZK_CONNECTION_TIMEOUT =
+      ZK_PREFIX + "connection.timeout.ms";
+
+  /**
+   * The default ZK connection timeout: {@value}.
+   */
+  int DEFAULT_ZK_CONNECTION_TIMEOUT = 15000;
+
+  /**
+   * Zookeeper connection retry count before failing: {@value}.
+   */
+  String KEY_REGISTRY_ZK_RETRY_TIMES = ZK_PREFIX + "retry.times";
+
+  /**
+   * The default # of times to retry a ZK connection: {@value}.
+   */
+  int DEFAULT_ZK_RETRY_TIMES = 5;
+
+  /**
+   * Zookeeper connect interval in milliseconds: {@value}.
+   */
+  String KEY_REGISTRY_ZK_RETRY_INTERVAL =
+      ZK_PREFIX + "retry.interval.ms";
+
+  /**
+   * The default interval between connection retries: {@value}.
+   */
+  int DEFAULT_ZK_RETRY_INTERVAL = 1000;
+
+  /**
+   * Zookeeper retry limit in milliseconds during
+   * exponential backoff: {@value}.
+   *
+   * This places a limit on the retry interval even
+   * if the retry count and interval, combined
+   * with the backoff policy, would otherwise result in a long
+   * retry period.
+   */
+  String KEY_REGISTRY_ZK_RETRY_CEILING =
+      ZK_PREFIX + "retry.ceiling.ms";
+
+  /**
+   * Default limit on retries: {@value}.
+   */
+  int DEFAULT_ZK_RETRY_CEILING = 60000;
+
+  /**
+   * A comma separated list of Zookeeper ACL identifiers with
+   * system access to the registry in a secure cluster: {@value}.
+   *
+   * These are given full access to all entries.
+   *
+   * If there is an "@" at the end of an entry it
+   * instructs the registry client to append the kerberos realm as
+   * derived from the login and {@link #KEY_REGISTRY_KERBEROS_REALM}.
+   */
+  String KEY_REGISTRY_SYSTEM_ACCOUNTS = REGISTRY_PREFIX + "system.accounts";
+
+  /**
+   * Default system accounts given global access to the registry: {@value}.
+   */
+  String DEFAULT_REGISTRY_SYSTEM_ACCOUNTS =
+      "sasl:yarn@, sasl:mapred@, sasl:hdfs@, sasl:hadoop@";
+
+  /**
+   * A comma separated list of Zookeeper ACL identifiers with
+   * user access to the registry in a secure cluster: {@value}.
+   *
+   * These are given full access to all entries.
+   *
+   * If there is an "@" at the end of an entry it
+   * instructs the registry client to append the default kerberos domain.
+   */
+  String KEY_REGISTRY_USER_ACCOUNTS = REGISTRY_PREFIX + "user.accounts";
+
+  /**
+   * Default user accounts: {@value}.
+   */
+  String DEFAULT_REGISTRY_USER_ACCOUNTS = "";
+
+  /**
+   * The kerberos realm: {@value}.
+   *
+   * This is used to set the realm of
+   * system principals which do not declare their realm,
+   * and any other accounts that need the value.
+   *
+   * If empty, the default realm of the running process
+   * is used.
+   *
+   * If neither are known and the realm is needed, then the registry
+   * service/client will fail.
+   */
+  String KEY_REGISTRY_KERBEROS_REALM = REGISTRY_PREFIX + "kerberos.realm";
+
+  /**
+   * Key to define the JAAS context. Used in secure registries: {@value}.
+   */
+  String KEY_REGISTRY_CLIENT_JAAS_CONTEXT = REGISTRY_PREFIX + "jaas.context";
+
+  /**
+   * Default client-side registry JAAS context: {@value}.
+   */
+  String DEFAULT_REGISTRY_CLIENT_JAAS_CONTEXT = "Client";
+
+  /**
+   *  path to users off the root: {@value}.
+   */
+  String PATH_USERS = "/users/";
+
+  /**
+   *  path to system services off the root: {@value}.
+   */
+  String PATH_SYSTEM_SERVICES = "/services/";
+
+  /**
+   *  path to services under a user's home path: {@value}.
+   */
+  String PATH_USER_SERVICES = "/services/";
+
+  /**
+   *  path under a service record to point to components of that service:
+   *  {@value}.
+   */
+  String SUBPATH_COMPONENTS = "/components/";
+}
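
Since these are plain configuration keys, wiring them up is ordinary
Configuration usage. A brief, purely illustrative sketch of pointing a
client at a secure registry (the quorum hostnames are invented):

    import org.apache.hadoop.conf.Configuration;

    import static org.apache.hadoop.registry.client.api.RegistryConstants.*;

    public class RegistryConfExample {
      public static Configuration secureRegistryConf() {
        Configuration conf = new Configuration();
        // ZK ensemble backing the registry.
        conf.set(KEY_REGISTRY_ZK_QUORUM, "zk1:2181,zk2:2181,zk3:2181");
        // Switch from the default open-access policy to the secure one.
        conf.setBoolean(KEY_REGISTRY_SECURE, true);
        // Tighten connection retries relative to the defaults above.
        conf.setInt(KEY_REGISTRY_ZK_RETRY_TIMES, 3);
        conf.setInt(KEY_REGISTRY_ZK_RETRY_INTERVAL, 500);
        return conf;
      }
    }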

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e2a9fa84/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/RegistryOperations.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/RegistryOperations.java b/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/RegistryOperations.java
new file mode 100644
index 0000000..c51bcf7
--- /dev/null
+++ b/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/RegistryOperations.java
@@ -0,0 +1,182 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.registry.client.api;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.fs.FileAlreadyExistsException;
+import org.apache.hadoop.fs.PathIsNotEmptyDirectoryException;
+import org.apache.hadoop.fs.PathNotFoundException;
+import org.apache.hadoop.service.Service;
+import org.apache.hadoop.registry.client.exceptions.InvalidPathnameException;
+import org.apache.hadoop.registry.client.exceptions.InvalidRecordException;
+import org.apache.hadoop.registry.client.exceptions.NoRecordException;
+import org.apache.hadoop.registry.client.types.RegistryPathStatus;
+import org.apache.hadoop.registry.client.types.ServiceRecord;
+
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * Registry Operations.
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Evolving
+public interface RegistryOperations extends Service {
+
+  /**
+   * Create a path.
+   *
+   * It is not an error if the path exists already, be it empty or not.
+   *
+   * The createParents flag also requests creating the parents.
+   * As entries in the registry can hold data while still having
+   * child entries, it is not an error if any of the parent path
+   * elements have service records.
+   *
+   * @param path path to create
+   * @param createParents also create the parents.
+   * @throws PathNotFoundException parent path is not in the registry.
+   * @throws InvalidPathnameException path name is invalid.
+   * @throws IOException Any other IO Exception.
+   * @return true if the path was created, false if it existed.
+   */
+  boolean mknode(String path, boolean createParents)
+      throws PathNotFoundException,
+      InvalidPathnameException,
+      IOException;
+
+  /**
+   * Bind a path in the registry to a service record
+   * @param path path to service record
+   * @param record service record to create/update
+   * @param flags bind flags
+   * @throws PathNotFoundException the parent path does not exist
+   * @throws FileAlreadyExistsException path exists but create flags
+   * do not include "overwrite"
+   * @throws InvalidPathnameException path name is invalid.
+   * @throws IOException Any other IO Exception.
+   */
+  void bind(String path, ServiceRecord record, int flags)
+      throws PathNotFoundException,
+      FileAlreadyExistsException,
+      InvalidPathnameException,
+      IOException;
+
+  /**
+   * Resolve the record at a path
+   * @param path path to an entry containing a {@link ServiceRecord}
+   * @return the record
+   * @throws PathNotFoundException path is not in the registry.
+   * @throws NoRecordException if there is not a service record
+   * @throws InvalidRecordException if there was a service record but it could
+   * not be parsed.
+   * @throws IOException Any other IO Exception
+   */
+  ServiceRecord resolve(String path)
+      throws PathNotFoundException,
+      NoRecordException,
+      InvalidRecordException,
+      IOException;
+
+  /**
+   * Get the status of a path
+   * @param path path to query
+   * @return the status of the path
+   * @throws PathNotFoundException path is not in the registry.
+   * @throws InvalidPathnameException the path is invalid.
+   * @throws IOException Any other IO Exception
+   */
+  RegistryPathStatus stat(String path)
+      throws PathNotFoundException,
+      InvalidPathnameException,
+      IOException;
+
+  /**
+   * Probe for a path existing.
+   * This is equivalent to {@link #stat(String)} with
+   * any failure downgraded to a return value of false.
+   * @param path path to query
+   * @return true if the path was found
+   * @throws IOException Any other IO Exception
+   */
+  boolean exists(String path) throws IOException;
+
+  /**
+   * List all entries under a registry path, returning the relative names
+   * of the entries.
+   * @param path path to query
+   * @return a possibly empty list of the short path names of
+   * child entries.
+   * @throws PathNotFoundException path is not in the registry.
+   * @throws InvalidPathnameException the path is invalid.
+   * @throws IOException Any other IO Exception
+   */
+  List<String> list(String path) throws
+      PathNotFoundException,
+      InvalidPathnameException,
+      IOException;
+
+  /**
+   * Delete a path.
+   *
+   * If the operation returns without an error then the entry has been
+   * deleted.
+   * @param path path to delete
+   * @param recursive recursive flag
+   * @throws PathNotFoundException path is not in the registry.
+   * @throws InvalidPathnameException the path is invalid.
+   * @throws PathIsNotEmptyDirectoryException path has child entries, but
+   * recursive is false.
+   * @throws IOException Any other IO Exception
+   *
+   */
+  void delete(String path, boolean recursive)
+      throws PathNotFoundException,
+      PathIsNotEmptyDirectoryException,
+      InvalidPathnameException,
+      IOException;
+
+  /**
+   * Add a new write access entry to be added to node permissions in all
+   * future write operations of a session connected to a secure registry.
+   *
+   * This does not grant the session any more rights: if it lacked any write
+   * access, it will still be unable to manipulate the registry.
+   *
+   * In an insecure cluster, this operation has no effect.
+   * @param id ID to use
+   * @param pass password
+   * @return true if the accessor was added: that is, the registry connection
+   * uses permissions to manage access
+   * @throws IOException on any failure to build the digest
+   */
+  boolean addWriteAccessor(String id, String pass) throws IOException;
+
+  /**
+   * Clear all write accessors.
+   *
+   * At this point all standard permissions/ACLs are retained,
+   * including any set on behalf of the user.
+   * Only accessors added via {@link #addWriteAccessor(String, String)}
+   * are removed.
+   */
+  void clearWriteAccessors();
+}
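
The interface is deliberately filesystem-like: mknode, bind, resolve, list
and delete mirror mkdir, create, open, ls and rm. A sketch of a full round
trip under the /users layout from RegistryConstants; BindFlags.OVERWRITE is
assumed to be the "overwrite" flag referenced by bind(), and
ServiceRecord.description is assumed to be a public field:

    import java.io.IOException;
    import java.util.List;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.registry.client.api.BindFlags;
    import org.apache.hadoop.registry.client.api.RegistryOperations;
    import org.apache.hadoop.registry.client.api.RegistryOperationsFactory;
    import org.apache.hadoop.registry.client.types.ServiceRecord;

    public class RegistryRoundTrip {
      public static void main(String[] args) throws IOException {
        RegistryOperations registry =
            RegistryOperationsFactory.createInstance(new Configuration());
        registry.start();
        try {
          String path = "/users/alice/services/org-example/demo";
          registry.mknode(path, true);           // create entry and parents
          ServiceRecord record = new ServiceRecord();
          record.description = "demo service";
          registry.bind(path, record, BindFlags.OVERWRITE);
          ServiceRecord resolved = registry.resolve(path);
          List<String> children = registry.list("/users/alice/services");
          registry.delete(path, false);          // non-recursive delete
        } finally {
          registry.stop();
        }
      }
    }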

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e2a9fa84/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/RegistryOperationsFactory.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/RegistryOperationsFactory.java b/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/RegistryOperationsFactory.java
new file mode 100644
index 0000000..5f9c5f3
--- /dev/null
+++ b/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/RegistryOperationsFactory.java
@@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.registry.client.api;
+
+import com.google.common.base.Preconditions;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.service.ServiceStateException;
+import org.apache.hadoop.registry.client.impl.RegistryOperationsClient;
+
+import static org.apache.hadoop.registry.client.api.RegistryConstants.*;
+
+/**
+ * A factory for registry operation service instances.
+ * <p>
+ * <i>Each created instance will be returned initialized.</i>
+ * <p>
+ * That is, the service will have had <code>Service.init(conf)</code> applied
+ * to it, possibly after the configuration has been modified to
+ * support the specific binding/security mechanism used.
+ */
+public final class RegistryOperationsFactory {
+
+  private RegistryOperationsFactory() {
+  }
+
+  /**
+   * Create and initialize a registry operations instance.
+   * Access rights will be determined from the configuration.
+   * @param conf configuration
+   * @return a registry operations instance
+   * @throws ServiceStateException on any failure to initialize
+   */
+  public static RegistryOperations createInstance(Configuration conf) {
+    return createInstance("RegistryOperations", conf);
+  }
+
+  /**
+   * Create and initialize a registry operations instance.
+   * Access rights will be determined from the configuration
+   * @param name name of the instance
+   * @param conf configuration
+   * @return a registry operations instance
+   * @throws ServiceStateException on any failure to initialize
+   */
+  public static RegistryOperations createInstance(String name,
+      Configuration conf) {
+    Preconditions.checkArgument(conf != null, "Null configuration");
+    RegistryOperationsClient operations =
+        new RegistryOperationsClient(name);
+    operations.init(conf);
+    return operations;
+  }
+
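+  /**
+   * Create and initialize a registry operations client instance.
+   * Access rights will be determined from the configuration.
+   * @param name name of the instance
+   * @param conf configuration
+   * @return a registry operations client instance
+   * @throws ServiceStateException on any failure to initialize
+   */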
+  public static RegistryOperationsClient createClient(String name,
+      Configuration conf) {
+    Preconditions.checkArgument(conf != null, "Null configuration");
+    RegistryOperationsClient operations = new RegistryOperationsClient(name);
+    operations.init(conf);
+    return operations;
+  }
+
+  /**
+   * Create and initialize an anonymous read/write registry operations
+   * instance.
+   * In a secure cluster, this instance will only have read access to the
+   * registry.
+   * @param conf configuration
+   * @return an anonymous registry operations instance
+   *
+   * @throws ServiceStateException on any failure to initialize
+   */
+  public static RegistryOperations createAnonymousInstance(
+      Configuration conf) {
+    Preconditions.checkArgument(conf != null, "Null configuration");
+    conf.set(KEY_REGISTRY_CLIENT_AUTH, REGISTRY_CLIENT_AUTH_ANONYMOUS);
+    return createInstance("AnonymousRegistryOperations", conf);
+  }
+
+  /**
+   * Create and initialize a secure, Kerberos-authenticated instance.
+   *
+   * The user identity will be inferred from the current user.
+   *
+   * The authentication of this instance will expire when any kerberos
+   * tokens needed to authenticate with the registry infrastructure expire.
+   * @param conf configuration
+   * @param jaasContext the JAAS context of the account.
+   * @return a registry operations instance
+   * @throws ServiceStateException on any failure to initialize
+   */
+  public static RegistryOperations createKerberosInstance(Configuration conf,
+      String jaasContext) {
+    Preconditions.checkArgument(conf != null, "Null configuration");
+    conf.set(KEY_REGISTRY_CLIENT_AUTH, REGISTRY_CLIENT_AUTH_KERBEROS);
+    conf.set(KEY_REGISTRY_CLIENT_JAAS_CONTEXT, jaasContext);
+    return createInstance("KerberosRegistryOperations", conf);
+  }
+
+  /**
+   * Create a kerberos registry service client.
+   * @param conf configuration
+   * @param jaasClientEntry the name of the login config entry
+   * @param principal principal of the client.
+   * @param keytab location of the keytab file
+   * @return a registry service client instance
+   */
+  public static RegistryOperations createKerberosInstance(Configuration conf,
+      String jaasClientEntry, String principal, String keytab) {
+    Preconditions.checkArgument(conf != null, "Null configuration");
+    conf.set(KEY_REGISTRY_CLIENT_AUTH, REGISTRY_CLIENT_AUTH_KERBEROS);
+    conf.set(KEY_REGISTRY_CLIENT_JAAS_CONTEXT, jaasClientEntry);
+    RegistryOperationsClient operations =
+        new RegistryOperationsClient("KerberosRegistryOperations");
+    operations.setKerberosPrincipalAndKeytab(principal, keytab);
+    operations.init(conf);
+    return operations;
+  }
+
+  /**
+   * Create and initialize an operations instance authenticated with write
+   * access via an <code>id:password</code> pair.
+   *
+   * The instance will have read access
+   * across the registry, but write access only to that part of the registry
+   * to which it has been given the relevant permissions.
+   * @param conf configuration
+   * @param id user ID
+   * @param password password
+   * @return a registry operations instance
+   * @throws ServiceStateException on any failure to initialize
+   * @throws IllegalArgumentException if an argument is invalid
+   */
+  public static RegistryOperations createAuthenticatedInstance(
+      Configuration conf,
+      String id,
+      String password) {
+    Preconditions.checkArgument(!StringUtils.isEmpty(id), "empty Id");
+    Preconditions.checkArgument(!StringUtils.isEmpty(password),
+        "empty Password");
+    Preconditions.checkArgument(conf != null, "Null configuration");
+    conf.set(KEY_REGISTRY_CLIENT_AUTH, REGISTRY_CLIENT_AUTH_DIGEST);
+    conf.set(KEY_REGISTRY_CLIENT_AUTHENTICATION_ID, id);
+    conf.set(KEY_REGISTRY_CLIENT_AUTHENTICATION_PASSWORD, password);
+    return createInstance("DigestRegistryOperations", conf);
+  }
+
+}
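
Note that each factory method mutates the Configuration it is handed
before initializing the service, so differently authenticated clients
should not share one Configuration object. A sketch contrasting a
digest-authenticated writer with an anonymous reader (the id and password
are placeholders; in practice the password would come from the Hadoop
Credentials API):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.registry.client.api.RegistryOperations;
    import org.apache.hadoop.registry.client.api.RegistryOperationsFactory;

    public class FactoryUsageExample {
      public static void main(String[] args) {
        // Digest writer: read access everywhere, write access only where
        // the "alice" identity has been granted permissions.
        RegistryOperations writer =
            RegistryOperationsFactory.createAuthenticatedInstance(
                new Configuration(), "alice", "placeholder-password");
        writer.start();

        // Anonymous reader: read-only in a secure cluster.
        RegistryOperations reader =
            RegistryOperationsFactory.createAnonymousInstance(
                new Configuration());
        reader.start();

        // ... writer.bind(...), reader.resolve(...) ...

        reader.stop();
        writer.stop();
      }
    }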

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e2a9fa84/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/package-info.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/package-info.java b/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/package-info.java
new file mode 100644
index 0000000..f5f844e
--- /dev/null
+++ b/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/api/package-info.java
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/**
+ * YARN Registry Client API.
+ *
+ * This package contains the core API for the YARN registry.
+ *
+ * <ol>
+ *   <li> Data types can be found in
+ * {@link org.apache.hadoop.registry.client.types}</li>
+ *   <li> Exceptions are listed in
+ * {@link org.apache.hadoop.registry.client.exceptions}</li>
+ *   <li> Classes to assist use of the registry are in
+ * {@link org.apache.hadoop.registry.client.binding}</li>
+ * </ol>
+ */
+package org.apache.hadoop.registry.client.api;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e2a9fa84/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/binding/JsonSerDeser.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/binding/JsonSerDeser.java b/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/binding/JsonSerDeser.java
new file mode 100644
index 0000000..04aabfc
--- /dev/null
+++ b/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/binding/JsonSerDeser.java
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.registry.client.binding;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.registry.client.exceptions.InvalidRecordException;
+import org.apache.hadoop.registry.client.exceptions.NoRecordException;
+import org.apache.hadoop.util.JsonSerialization;
+
+import java.io.EOFException;
+import java.io.IOException;
+
+/**
+ * Support for marshalling objects to and from JSON.
+ *  <p>
+ * This extends {@link JsonSerialization} with the notion
+ * of a marker field in the JSON file, with
+ * <ol>
+ *   <li>a fail-fast check for it before even trying to parse.</li>
+ *   <li>Specific IOException subclasses for a failure.</li>
+ * </ol>
+ * The rationale for this is not only to support different payloads in the
+ * registry, but also the fact that all ZK nodes have a size &gt; 0 when
+ * examined.
+ *
+ * @param <T> Type to marshal.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Evolving
+public class JsonSerDeser<T> extends JsonSerialization<T> {
+
+  private static final String UTF_8 = "UTF-8";
+  public static final String E_NO_DATA = "No data at path";
+  public static final String E_DATA_TOO_SHORT = "Data at path too short";
+  public static final String E_MISSING_MARKER_STRING =
+      "Missing marker string: ";
+
+  /**
+   * Create an instance bound to a specific type.
+   * @param classType class to marshal
+   */
+  public JsonSerDeser(Class<T> classType) {
+    super(classType, false, false);
+  }
+
+  /**
+   * Deserialize from a byte array
+   * @param path path the data came from
+   * @param bytes byte array
+   * @throws IOException all problems
+   * @throws EOFException not enough data
+   * @throws InvalidRecordException if the parsing failed; the record is
+   * invalid
+   * @throws NoRecordException if the data is not considered a record: either
+   * it is too short or it did not contain the marker string.
+   */
+  public T fromBytes(String path, byte[] bytes) throws IOException {
+    return fromBytes(path, bytes, "");
+  }
+
+  /**
+   * Deserialize from a byte array, optionally checking for a marker string.
+   * <p>
+   * If the marker parameter is supplied (and not empty), then its presence
+   * will be verified before the JSON parsing takes place; it is a fail-fast
+   * check. If not found, a {@link NoRecordException} will be
+   * raised.
+   * @param path path the data came from
+   * @param bytes byte array
+   * @param marker an optional string which, if set, MUST be present in the
+   * UTF-8 parsed payload.
+   * @return The parsed record
+   * @throws IOException all problems
+   * @throws EOFException not enough data
+   * @throws InvalidRecordException if the JSON parsing failed.
+   * @throws NoRecordException if the data is not considered a record: either
+   * it is too short or it did not contain the marker string.
+   */
+  public T fromBytes(String path, byte[] bytes, String marker)
+      throws IOException {
+    int len = bytes.length;
+    if (len == 0) {
+      throw new NoRecordException(path, E_NO_DATA);
+    }
+    if (StringUtils.isNotEmpty(marker) && len < marker.length()) {
+      throw new NoRecordException(path, E_DATA_TOO_SHORT);
+    }
+    String json = new String(bytes, 0, len, UTF_8);
+    if (StringUtils.isNotEmpty(marker)
+        && !json.contains(marker)) {
+      throw new NoRecordException(path, E_MISSING_MARKER_STRING + marker);
+    }
+    try {
+      return fromJson(json);
+    } catch (JsonProcessingException e) {
+      throw new InvalidRecordException(path, e.toString(), e);
+    }
+  }
+
+}
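
A sketch of the marker-check behaviour described above. The Entry payload
type and its marker string are invented for illustration; toBytes() comes
from the JsonSerialization superclass:

    import org.apache.hadoop.registry.client.binding.JsonSerDeser;
    import org.apache.hadoop.registry.client.exceptions.NoRecordException;

    public class MarkerCheckExample {

      // Minimal payload type, invented for this sketch; its "type" field
      // doubles as the marker string that fromBytes() scans for.
      public static class Entry {
        public String type = "JSONEntry";
        public String description;
      }

      public static void main(String[] args) throws Exception {
        JsonSerDeser<Entry> serDeser = new JsonSerDeser<>(Entry.class);

        Entry entry = new Entry();
        entry.description = "example";
        byte[] bytes = serDeser.toBytes(entry);

        // Marker supplied: presence is verified before any JSON parsing.
        Entry parsed = serDeser.fromBytes("/users/alice/demo", bytes,
            "JSONEntry");

        // A zero-length payload fails fast with NoRecordException,
        // distinguishing an empty ZK node from a corrupt record.
        try {
          serDeser.fromBytes("/users/alice/empty", new byte[0]);
        } catch (NoRecordException expected) {
          System.out.println("rejected: " + expected.getMessage());
        }
      }
    }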

