http://git-wip-us.apache.org/repos/asf/hadoop/blob/ff02bdfe/hadoop-common-project/hadoop-common/src/site/markdown/release/3.0.0-alpha2/RELEASENOTES.3.0.0-alpha2.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/release/3.0.0-alpha2/RELEASENOTES.3.0.0-alpha2.md b/hadoop-common-project/hadoop-common/src/site/markdown/release/3.0.0-alpha2/RELEASENOTES.3.0.0-alpha2.md
new file mode 100644
index 0000000..843ce07
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/release/3.0.0-alpha2/RELEASENOTES.3.0.0-alpha2.md
@@ -0,0 +1,618 @@
+
+<!---
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+-->
+# "Apache Hadoop"  3.0.0-alpha2 Release Notes
+
+These release notes cover new developer and user-facing incompatibilities, 
important issues, features, and major improvements.
+
+
+---
+
+* [HADOOP-12756](https://issues.apache.org/jira/browse/HADOOP-12756) | *Major* 
| **Incorporate Aliyun OSS file system implementation**
+
+Aliyun OSS is widely used among China’s cloud users, and this work implemented a new Hadoop-compatible filesystem, AliyunOSSFileSystem, with the oss scheme, similar to the existing s3a and azure support.
+
+
+---
+
+* [HDFS-10760](https://issues.apache.org/jira/browse/HDFS-10760) | *Major* | 
**DataXceiver#run() should not log InvalidToken exception as an error**
+
+Log InvalidTokenException at trace level in DataXceiver#run().
+
+
+---
+
+* [HADOOP-13361](https://issues.apache.org/jira/browse/HADOOP-13361) | *Major* 
| **Modify hadoop\_verify\_user to be consistent with hadoop\_subcommand\_opts 
(ie more granularity)**
+
+Users:
+
+In Apache Hadoop 3.0.0-alpha1, verification required environment variables of the form HADOOP\_(subcommand)\_USER, where the subcommand was lowercase and the setting applied globally.  This changes the format to (command)\_(subcommand)\_USER, all uppercase, to be consistent with the \_OPTS functionality and to allow setting per-command options.  Additionally, the check now happens sooner, which should make it faster to fail.
+
+Developers:
+
+This changes hadoop\_verify\_user to require the program's name as part of the 
function call.  This is incompatible with Apache Hadoop 3.0.0-alpha1.
+
+
+---
+
+* [YARN-5549](https://issues.apache.org/jira/browse/YARN-5549) | *Critical* | 
**AMLauncher#createAMContainerLaunchContext() should not log the command to be 
launched indiscriminately**
+
+Introduces a new configuration property, 
yarn.resourcemanager.amlauncher.log.command.  If this property is set to true, 
then the AM command being launched will be masked in the RM log.
+
+
+---
+
+* [HDFS-6962](https://issues.apache.org/jira/browse/HDFS-6962) | *Critical* | 
**ACL inheritance conflicts with umaskmode**
+
+The original implementation of HDFS ACLs applied the client's umask to the 
permissions when inheriting a default ACL defined on a parent directory.  This 
behavior is a deviation from the POSIX ACL specification, which states that the 
umask has no influence when a default ACL propagates from parent to child.  
HDFS now offers the capability to ignore the umask in this case for improved 
compliance with POSIX.  This change is considered backward-incompatible, so the 
new behavior is off by default and must be explicitly configured by setting 
dfs.namenode.posix.acl.inheritance.enabled to true in hdfs-site.xml.  Please 
see the HDFS Permissions Guide for further details.
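+
+A minimal sketch of the new switch; in practice this property belongs in hdfs-site.xml on the NameNode, and the Configuration API is used below purely for illustration:
+
+```java
+import org.apache.hadoop.conf.Configuration;
+
+public class EnablePosixAclInheritance {
+  public static void main(String[] args) {
+    Configuration conf = new Configuration();
+    // Off by default because the new behavior is backward-incompatible.
+    conf.setBoolean("dfs.namenode.posix.acl.inheritance.enabled", true);
+    System.out.println(
+        conf.getBoolean("dfs.namenode.posix.acl.inheritance.enabled", false));
+  }
+}
+```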
+
+
+---
+
+* [HADOOP-13341](https://issues.apache.org/jira/browse/HADOOP-13341) | *Major* 
| **Deprecate HADOOP\_SERVERNAME\_OPTS; replace with 
(command)\_(subcommand)\_OPTS**
+
+<!-- markdown -->
+Users:
+* Ability to set per-command+sub-command options from the command line.
+* Makes daemon environment variable options consistent across the project. 
(See deprecation list below)
+* HADOOP\_CLIENT\_OPTS is now honored for every non-daemon sub-command. Prior 
to this change, many sub-commands did not use it.
+
+Developers:
+* No longer need to do custom handling for options in the case section of the 
shell scripts.
+* Consolidates all \_OPTS handling into hadoop-functions.sh to enable future 
projects.
+* All daemons running with secure mode features now get \_SECURE\_EXTRA\_OPTS 
support.
+
+\_OPTS Changes:
+
+| Old | New |
+|:---- |:---- |
+| HADOOP\_BALANCER\_OPTS | HDFS\_BALANCER\_OPTS | 
+| HADOOP\_DATANODE\_OPTS | HDFS\_DATANODE\_OPTS | 
+| HADOOP\_DN\_SECURE\_EXTRA\_OPTS | HDFS\_DATANODE\_SECURE\_EXTRA\_OPTS |
+| HADOOP\_JOB\_HISTORYSERVER\_OPTS | MAPRED\_HISTORYSERVER\_OPTS | 
+| HADOOP\_JOURNALNODE\_OPTS | HDFS\_JOURNALNODE\_OPTS | 
+| HADOOP\_MOVER\_OPTS | HDFS\_MOVER\_OPTS | 
+| HADOOP\_NAMENODE\_OPTS | HDFS\_NAMENODE\_OPTS | 
+| HADOOP\_NFS3\_OPTS | HDFS\_NFS3\_OPTS | 
+| HADOOP\_NFS3\_SECURE\_EXTRA\_OPTS | HDFS\_NFS3\_SECURE\_EXTRA\_OPTS |
+| HADOOP\_PORTMAP\_OPTS | HDFS\_PORTMAP\_OPTS |
+| HADOOP\_SECONDARYNAMENODE\_OPTS | HDFS\_SECONDARYNAMENODE\_OPTS |
+| HADOOP\_ZKFC\_OPTS | HDFS\_ZKFC\_OPTS |
+
+
+---
+
+* [HADOOP-13588](https://issues.apache.org/jira/browse/HADOOP-13588) | *Major* 
| **ConfServlet should respect Accept request header**
+
+The Conf HTTP service now sets the response's content type according to the Accept header in the request.
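+
+For example, a client can ask the /conf endpoint for JSON via the Accept header; the daemon address below is a placeholder:
+
+```java
+import java.io.BufferedReader;
+import java.io.InputStreamReader;
+import java.net.HttpURLConnection;
+import java.net.URL;
+
+public class ConfAsJson {
+  public static void main(String[] args) throws Exception {
+    // Point this at any Hadoop daemon's HTTP address.
+    URL url = new URL("http://namenode.example.com:9870/conf");
+    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
+    conn.setRequestProperty("Accept", "application/json");
+    try (BufferedReader in = new BufferedReader(
+        new InputStreamReader(conn.getInputStream()))) {
+      String line;
+      while ((line = in.readLine()) != null) {
+        System.out.println(line);
+      }
+    }
+  }
+}
+```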
+
+
+---
+
+* [HDFS-10636](https://issues.apache.org/jira/browse/HDFS-10636) | *Major* | 
**Modify ReplicaInfo to remove the assumption that replica metadata and data 
are stored in java.io.File.**
+
+**WARNING: No release note provided for this change.**
+
+
+---
+
+* [HADOOP-13218](https://issues.apache.org/jira/browse/HADOOP-13218) | *Major* 
| **Migrate other Hadoop side tests to prepare for removing WritableRPCEngine**
+
+**WARNING: No release note provided for this change.**
+
+
+---
+
+* [HDFS-10489](https://issues.apache.org/jira/browse/HDFS-10489) | *Minor* | 
**Deprecate dfs.encryption.key.provider.uri for HDFS encryption zones**
+
+The configuration dfs.encryption.key.provider.uri is deprecated. To configure the key provider in HDFS, please use hadoop.security.key.provider.path.
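+
+A sketch of the client-side setting with the new key; the KMS address is a placeholder:
+
+```java
+import org.apache.hadoop.conf.Configuration;
+
+public class KeyProviderPath {
+  public static void main(String[] args) {
+    Configuration conf = new Configuration();
+    // Preferred key; the deprecated dfs.encryption.key.provider.uri
+    // now maps to this setting.
+    conf.set("hadoop.security.key.provider.path",
+        "kms://http@kms.example.com:9600/kms");
+    System.out.println(conf.get("hadoop.security.key.provider.path"));
+  }
+}
+```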
+
+
+---
+
+* [HDFS-10877](https://issues.apache.org/jira/browse/HDFS-10877) | *Major* | 
**Make RemoteEditLogManifest.committedTxnId optional in Protocol Buffers**
+
+A new protobuf field added to RemoteEditLogManifest was mistakenly marked as 
required. This changes the field to optional, preserving compatibility with 2.x 
releases but breaking compatibility with 3.0.0-alpha1.
+
+
+---
+
+* [HDFS-10914](https://issues.apache.org/jira/browse/HDFS-10914) | *Critical* 
| **Move remnants of oah.hdfs.client to hadoop-hdfs-client**
+
+The remaining classes in the org.apache.hadoop.hdfs.client package have been 
moved from hadoop-hdfs to hadoop-hdfs-client.
+
+
+---
+
+* [HADOOP-13681](https://issues.apache.org/jira/browse/HADOOP-13681) | *Major* 
| **Reduce Kafka dependencies in hadoop-kafka module**
+
+Changed Apache Kafka dependency from kafka-2.10 to kafka-clients in 
hadoop-kafka module.
+
+
+---
+
+* [HADOOP-12667](https://issues.apache.org/jira/browse/HADOOP-12667) | *Major* 
| **s3a: Support createNonRecursive API**
+
+S3A now provides a working implementation of the FileSystem#createNonRecursive 
method.
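+
+A minimal sketch against the generic FileSystem API; the bucket and path are placeholders, and the parent directory must already exist for the call to succeed:
+
+```java
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+public class S3ANonRecursiveCreate {
+  public static void main(String[] args) throws Exception {
+    Configuration conf = new Configuration();
+    Path path = new Path("s3a://example-bucket/existing-dir/part-00000");
+    FileSystem fs = path.getFileSystem(conf);
+    // Unlike create(), this fails if the parent directory does not exist.
+    try (FSDataOutputStream out = fs.createNonRecursive(
+        path, true, 4096, (short) 1, 128 * 1024 * 1024L, null)) {
+      out.writeUTF("hello");
+    }
+  }
+}
+```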
+
+
+---
+
+* [HDFS-10609](https://issues.apache.org/jira/browse/HDFS-10609) | *Major* | 
**Uncaught InvalidEncryptionKeyException during pipeline recovery may abort 
downstream applications**
+
+If pipeline recovery fails due to an expired encryption key, the client now attempts to refresh the key and retry.
+
+
+---
+
+* [HADOOP-13678](https://issues.apache.org/jira/browse/HADOOP-13678) | *Major* 
| **Update jackson from 1.9.13 to 2.x in hadoop-tools**
+
+Jackson 1.9.13 dependency was removed from hadoop-tools module.
+
+
+---
+
+* [MAPREDUCE-6776](https://issues.apache.org/jira/browse/MAPREDUCE-6776) | 
*Major* | **yarn.app.mapreduce.client.job.max-retries should have a more useful 
default**
+
+The default value of yarn.app.mapreduce.client.job.max-retries has been changed from 0 to 3.  This will help protect clients from transient failures.  True failures may take slightly longer now due to the retries.
+
+
+---
+
+* [HDFS-10797](https://issues.apache.org/jira/browse/HDFS-10797) | *Major* | 
**Disk usage summary of snapshots causes renamed blocks to get counted twice**
+
+Disk usage summaries previously counted files twice if they had been renamed (including files moved to Trash) since being snapshotted. Summaries now include current data plus snapshotted data that is no longer under the directory, whether due to deletion or being moved outside of it.
+
+
+---
+
+* [HADOOP-13699](https://issues.apache.org/jira/browse/HADOOP-13699) | 
*Critical* | **Configuration does not substitute multiple references to the 
same var**
+
+This changes the config var cycle detection introduced in 3.0.0-alpha1 by 
HADOOP-6871 such that it detects single-variable but not multi-variable loops. 
This also fixes resolution of multiple specifications of the same variable in a 
config value.
+
+
+---
+
+* [HDFS-10637](https://issues.apache.org/jira/browse/HDFS-10637) | *Major* | 
**Modifications to remove the assumption that FsVolumes are backed by 
java.io.File.**
+
+**WARNING: No release note provided for this change.**
+
+
+---
+
+* [HDFS-10916](https://issues.apache.org/jira/browse/HDFS-10916) | *Major* | 
**Switch from "raw" to "system" xattr namespace for erasure coding policy**
+
+EC policy is now stored in the "system" extended attribute namespace rather 
than "raw". This means the EC policy extended attribute is no longer directly 
accessible by users or preserved across a distcp that preserves raw extended 
attributes.
+
+Users can instead use HdfsAdmin#setErasureCodingPolicy and 
HdfsAdmin#getErasureCodingPolicy to set and get the EC policy for a path.
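+
+A sketch of that API route; the exact signature of setErasureCodingPolicy shifted across the 3.0.0 alphas, so the policy argument below (null for the default policy) is an assumption:
+
+```java
+import java.net.URI;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.client.HdfsAdmin;
+import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
+
+public class EcPolicyExample {
+  public static void main(String[] args) throws Exception {
+    Configuration conf = new Configuration();
+    // Hypothetical NameNode URI.
+    HdfsAdmin admin =
+        new HdfsAdmin(URI.create("hdfs://nn.example.com:8020"), conf);
+    Path dir = new Path("/ec-data");
+    // Assumed: null selects the default EC policy; a specific
+    // ErasureCodingPolicy can be supplied instead.
+    admin.setErasureCodingPolicy(dir, null);
+    ErasureCodingPolicy policy = admin.getErasureCodingPolicy(dir);
+    System.out.println(policy);
+  }
+}
+```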
+
+
+---
+
+* [YARN-4464](https://issues.apache.org/jira/browse/YARN-4464) | *Blocker* | 
**Lower the default max applications stored in the RM and store**
+
+The maximum applications the RM stores in memory and in the state-store by 
default has been lowered from 10,000 to 1,000. This should ease the pressure on 
the state-store. However, installations relying on the default to be 10,000 are 
affected.
+
+
+---
+
+* [HDFS-10883](https://issues.apache.org/jira/browse/HDFS-10883) | *Major* | 
**`getTrashRoot`'s behavior is not consistent in DFS after enabling EZ.**
+
+If the root path / is an encryption zone, the old DistributedFileSystem#getTrashRoot(new Path("/")) returned
+/user/$USER/.Trash
+which was wrong behavior. The correct value is
+/.Trash/$USER
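+
+The fix is observable through the public FileSystem API; a small sketch:
+
+```java
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+public class TrashRootCheck {
+  public static void main(String[] args) throws Exception {
+    FileSystem fs = FileSystem.get(new Configuration());
+    // If / is an encryption zone, this now prints /.Trash/$USER
+    // instead of the old, incorrect /user/$USER/.Trash.
+    System.out.println(fs.getTrashRoot(new Path("/")));
+  }
+}
+```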
+
+
+---
+
+* [HADOOP-13721](https://issues.apache.org/jira/browse/HADOOP-13721) | *Minor* 
| **Remove stale method ViewFileSystem#getTrashCanLocation**
+
+The unused method getTrashCanLocation has been removed. This method has long been superseded by FileSystem#getTrashRoot.
+
+
+---
+
+* [HADOOP-13661](https://issues.apache.org/jira/browse/HADOOP-13661) | *Major* 
| **Upgrade HTrace version**
+
+Bump HTrace version from 4.0.1-incubating to 4.1.0-incubating.
+
+
+---
+
+* [HDFS-10957](https://issues.apache.org/jira/browse/HDFS-10957) | *Major* | 
**Retire BKJM from trunk**
+
+The BookkeeperJournalManager implementation has been removed. Users are 
encouraged to use QuorumJournalManager instead.
+
+
+---
+
+* [HADOOP-13522](https://issues.apache.org/jira/browse/HADOOP-13522) | *Major* 
| **Add %A and %a formats for fs -stat command to print permissions**
+
+Added permissions to the fs stat command. They are now available as symbolic 
(%A) and octal (%a) formats, which are in line with Linux.
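+
+The new formats can also be driven programmatically through FsShell; a sketch with a placeholder path:
+
+```java
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FsShell;
+import org.apache.hadoop.util.ToolRunner;
+
+public class StatPermissions {
+  public static void main(String[] args) throws Exception {
+    // Equivalent to: hadoop fs -stat "%A %a" /tmp
+    // e.g. prints: rwxr-xr-x 755
+    int rc = ToolRunner.run(new Configuration(), new FsShell(),
+        new String[] {"-stat", "%A %a", "/tmp"});
+    System.exit(rc);
+  }
+}
+```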
+
+
+---
+
+* [HADOOP-13560](https://issues.apache.org/jira/browse/HADOOP-13560) | *Major* 
| **S3ABlockOutputStream to support huge (many GB) file writes**
+
+This mechanism replaces the (experimental) fast output stream of Hadoop 2.7.x, combining better scalability options with instrumentation. Consult the S3A documentation to see the extra configuration options.
+
+
+---
+
+* [MAPREDUCE-6791](https://issues.apache.org/jira/browse/MAPREDUCE-6791) | 
*Minor* | **remove unnecessary dependency from 
hadoop-mapreduce-client-jobclient to hadoop-mapreduce-client-shuffle**
+
+An unnecessary dependency on hadoop-mapreduce-client-shuffle in 
hadoop-mapreduce-client-jobclient has been removed.
+
+
+---
+
+* [HADOOP-7352](https://issues.apache.org/jira/browse/HADOOP-7352) | *Major* | 
**FileSystem#listStatus should throw IOE upon access error**
+
+Changes the FileSystem#listStatus contract to never return null. Local filesystems prior to 3.0.0 returned null upon an access error, which is considered erroneous; FileSystem#listStatus now throws an IOException upon access error.
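+
+Callers that used to null-check the result should catch the exception instead; a sketch:
+
+```java
+import java.io.IOException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+public class ListStatusContract {
+  public static void main(String[] args) throws Exception {
+    FileSystem fs = FileSystem.getLocal(new Configuration());
+    try {
+      // Never null as of 3.0.0; access errors surface as IOException
+      // (e.g. AccessControlException).
+      for (FileStatus st : fs.listStatus(new Path("/root"))) {
+        System.out.println(st.getPath());
+      }
+    } catch (IOException e) {
+      System.err.println("listStatus failed: " + e);
+    }
+  }
+}
+```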
+
+
+---
+
+* [HADOOP-13693](https://issues.apache.org/jira/browse/HADOOP-13693) | *Minor* 
| **Remove the message about HTTP OPTIONS in SPNEGO initialization message from 
kms audit log**
+
+kms-audit.log used to show an UNAUTHENTICATED message even for successful operations, because of the OPTIONS HTTP request during the initial SPNEGO handshake. This message brought more confusion than help, and has hence been removed.
+
+
+---
+
+* [HDFS-11018](https://issues.apache.org/jira/browse/HDFS-11018) | *Major* | 
**Incorrect check and message in FsDatasetImpl#invalidate**
+
+Improves the error message when a datanode removes a replica that is not found.
+
+
+---
+
+* [HDFS-10976](https://issues.apache.org/jira/browse/HDFS-10976) | *Major* | 
**Report erasure coding policy of EC files in Fsck**
+
+Fsck now reports whether a file is replicated or erasure-coded. If it is replicated, fsck reports the replication factor of the file. If it is erasure-coded, fsck reports the erasure coding policy of the file.
+
+
+---
+
+* [HDFS-10975](https://issues.apache.org/jira/browse/HDFS-10975) | *Major* | 
**fsck -list-corruptfileblocks does not report corrupt EC files**
+
+Fixed a bug that made fsck -list-corruptfileblocks count corrupt erasure-coded files incorrectly.
+
+
+---
+
+* [YARN-5388](https://issues.apache.org/jira/browse/YARN-5388) | *Critical* | 
**Deprecate and remove DockerContainerExecutor**
+
+DockerContainerExecutor is deprecated starting 2.9.0 and removed from 3.0.0. 
Please use LinuxContainerExecutor with the DockerRuntime to run Docker 
containers on YARN clusters.
+
+
+---
+
+* [HADOOP-11798](https://issues.apache.org/jira/browse/HADOOP-11798) | *Major* 
| **Native raw erasure coder in XOR codes**
+
+This provides a native implementation of the XOR codec, leveraging the Intel ISA-L library to achieve better performance.
+
+
+---
+
+* [HADOOP-13659](https://issues.apache.org/jira/browse/HADOOP-13659) | *Major* 
| **Upgrade jaxb-api version**
+
+Bump the version of third party dependency jaxb-api to 2.2.11.
+
+
+---
+
+* [YARN-3732](https://issues.apache.org/jira/browse/YARN-3732) | *Minor* | 
**Change NodeHeartbeatResponse.java and RegisterNodeManagerResponse.java as 
abstract classes**
+
+The interface classes have been changed to abstract classes to maintain consistency across all other protos.
+
+
+---
+
+* [YARN-5767](https://issues.apache.org/jira/browse/YARN-5767) | *Major* | 
**Fix the order that resources are cleaned up from the local Public/Private 
caches**
+
+This issue fixes a bug in how resources are evicted from the PUBLIC and 
PRIVATE yarn local caches used by the node manager for resource localization. 
In summary, the caches are now properly cleaned based on an LRU policy across 
both the public and private caches.
+
+
+---
+
+* [HDFS-11048](https://issues.apache.org/jira/browse/HDFS-11048) | *Major* | 
**Audit Log should escape control characters**
+
+HDFS audit logs are formatted as individual lines, each of which has a few key-value pair fields. Some of the values come from the client request (e.g. src, dst). Before this patch, control characters including \t and \n were not escaped in audit logs. That could break lines unexpectedly or introduce additional tab characters (in the worst case, both) within a field. Tools that parse audit logs had to deal with this case carefully. After this patch, the control characters in the src/dst fields are escaped.
+
+
+---
+
+* [HADOOP-8500](https://issues.apache.org/jira/browse/HADOOP-8500) | *Minor* | 
**Fix javadoc jars to not contain entire target directory**
+
+Hadoop's javadoc jars should be significantly smaller, and contain only 
javadoc.
+
+As a related cleanup, the dummy hadoop-dist-\* jars are no longer generated as 
part of the build.
+
+
+---
+
+* [HADOOP-13792](https://issues.apache.org/jira/browse/HADOOP-13792) | *Major* 
| **Stackoverflow for schemeless defaultFS with trailing slash**
+
+FileSystem#getDefaultUri will throw an IllegalArgumentException if the default FS has no scheme and cannot be fixed.
+
+
+---
+
+* [HDFS-10756](https://issues.apache.org/jira/browse/HDFS-10756) | *Major* | 
**Expose getTrashRoot to HTTPFS and WebHDFS**
+
+"getTrashRoot" returns a trash root for a path. Currently in DFS if the path 
"/foo" is a normal path, it returns "/user/$USER/.Trash" for "/foo" and if 
"/foo" is an encrypted zone, it returns "/foo/.Trash/$USER" for the child 
file/dir of "/foo". This patch is about to override the old "getTrashRoot" of 
httpfs and webhdfs, so that the behavior of returning trash root in httpfs and 
webhdfs are consistent with DFS.
+
+
+---
+
+* [HDFS-10970](https://issues.apache.org/jira/browse/HDFS-10970) | *Major* | 
**Update jackson from 1.9.13 to 2.x in hadoop-hdfs**
+
+Removed jackson 1.9.13 dependency from hadoop-hdfs-project module.
+
+
+---
+
+* [YARN-5847](https://issues.apache.org/jira/browse/YARN-5847) | *Major* | 
**Revert health check exit code check**
+
+This change reverts YARN-5567 from 3.0.0-alpha1. The exit codes of the health 
check script are once again ignored.
+
+
+---
+
+* [HDFS-9337](https://issues.apache.org/jira/browse/HDFS-9337) | *Major* | 
**Validate required params for WebHDFS requests**
+
+Strict validation is now performed on mandatory parameters of WebHDFS REST requests.
+
+
+---
+
+* [HDFS-11116](https://issues.apache.org/jira/browse/HDFS-11116) | *Minor* | 
**Fix javac warnings caused by deprecation of APIs in TestViewFsDefaultValue**
+
+ViewFileSystem#getServerDefaults(Path) throws NotInMountException instead of FileNotFoundException for an unmounted path.
+
+
+---
+
+* [HADOOP-12718](https://issues.apache.org/jira/browse/HADOOP-12718) | *Major* 
| **Incorrect error message by fs -put local dir without permission**
+
+<!-- markdown -->
+
+The `hadoop fs -ls` command now prints "Permission denied" rather than "No 
such file or directory" when the user doesn't have permission to traverse the 
path.
+
+
+---
+
+* [YARN-5825](https://issues.apache.org/jira/browse/YARN-5825) | *Major* | 
**ProportionalPreemptionalPolicy could use readLock over LeafQueue instead of 
synchronized block**
+
+**WARNING: No release note provided for this change.**
+
+
+---
+
+* [HDFS-11056](https://issues.apache.org/jira/browse/HDFS-11056) | *Major* | 
**Concurrent append and read operations lead to checksum error**
+
+The last partial chunk checksum is now properly loaded into memory when converting a finalized/temporary replica to an rbw replica. This ensures a concurrent reader reads the correct checksum that matches the data before the update.
+
+
+---
+
+* [YARN-5765](https://issues.apache.org/jira/browse/YARN-5765) | *Blocker* | 
**Revert CHMOD on the new dirs created-LinuxContainerExecutor creates appcache 
and its subdirectories with wrong group owner.**
+
+This change reverts YARN-5287 from 3.0.0-alpha1. chmod clears the set-group-ID bit of a regular file, so the folders were getting their group rights reset.
+
+
+---
+
+* [YARN-5271](https://issues.apache.org/jira/browse/YARN-5271) | *Major* | 
**ATS client doesn't work with Jersey 2 on the classpath**
+
+A workaround to avoid a dependency conflict with Spark 2, before a full classpath isolation solution is implemented.
+Instantiating a Timeline Service client is skipped if a NoClassDefFoundError is encountered.
+
+
+---
+
+* [HADOOP-13660](https://issues.apache.org/jira/browse/HADOOP-13660) | *Major* 
| **Upgrade commons-configuration version to 2.1**
+
+Bump commons-configuration version from 1.6 to 2.1.
+
+
+---
+
+* [HADOOP-12705](https://issues.apache.org/jira/browse/HADOOP-12705) | *Major* 
| **Upgrade Jackson 2.2.3 to 2.7.8**
+
+We are sorry for causing pain for everyone for whom this Jackson update causes problems, but it was proving impossible to stay on the older version: too much code had moved past it, and by staying back we were limiting what Hadoop could do, and giving everyone who wanted an up-to-date version of Jackson a different set of problems. We've selected Jackson 2.7.8 as it fixes a security issue in XML parsing, yet proved compatible at the API level with the Hadoop codebase, and hopefully everything downstream.
+
+
+---
+
+* [YARN-5713](https://issues.apache.org/jira/browse/YARN-5713) | *Major* | 
**Update jackson from 1.9.13 to 2.x in hadoop-yarn**
+
+Jackson 1.9.13 dependency was removed from hadoop-yarn-project.
+
+
+---
+
+* [HADOOP-13050](https://issues.apache.org/jira/browse/HADOOP-13050) | 
*Blocker* | **Upgrade to AWS SDK 1.11.45**
+
+The dependency on the AWS SDK has been bumped to 1.11.45.
+
+
+---
+
+* [HADOOP-1381](https://issues.apache.org/jira/browse/HADOOP-1381) | *Major* | 
**The distance between sync blocks in SequenceFiles should be configurable**
+
+The default sync interval within new SequenceFile writes is now 100KB, up from 
the older default of 2000B. The sync interval is now also manually configurable 
via the SequenceFile.Writer API.
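+
+A sketch of setting the interval at writer-creation time, assuming the syncInterval writer option added by this change:
+
+```java
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.IntWritable;
+import org.apache.hadoop.io.SequenceFile;
+import org.apache.hadoop.io.Text;
+
+public class SyncIntervalWriter {
+  public static void main(String[] args) throws Exception {
+    Configuration conf = new Configuration();
+    try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
+        SequenceFile.Writer.file(new Path("/tmp/example.seq")),
+        SequenceFile.Writer.keyClass(IntWritable.class),
+        SequenceFile.Writer.valueClass(Text.class),
+        // Emit a sync marker roughly every 64 KB instead of the 100 KB default.
+        SequenceFile.Writer.syncInterval(64 * 1024))) {
+      writer.append(new IntWritable(1), new Text("value"));
+    }
+  }
+}
+```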
+
+
+---
+
+* [HDFS-10994](https://issues.apache.org/jira/browse/HDFS-10994) | *Major* | 
**Support an XOR policy XOR-2-1-64k in HDFS**
+
+This introduced a new erasure coding policy named XOR-2-1-64k using the simple XOR codec. It can be used to evaluate the HDFS erasure coding feature in a small cluster (only 2 + 1 datanodes needed). The policy is not recommended for use in a production cluster.
+
+
+---
+
+* [MAPREDUCE-6743](https://issues.apache.org/jira/browse/MAPREDUCE-6743) | 
*Major* | **nativetask unit tests need to provide usable output; fix link 
errors during mvn test**
+
+As part of this patch, the Google test framework code was updated to v1.8.0.
+
+
+---
+
+* [HADOOP-13706](https://issues.apache.org/jira/browse/HADOOP-13706) | *Major* 
| **Update jackson from 1.9.13 to 2.x in hadoop-common-project**
+
+Removed Jackson 1.9.13 dependency from hadoop-common module.
+
+
+---
+
+* [HADOOP-13812](https://issues.apache.org/jira/browse/HADOOP-13812) | 
*Blocker* | **Upgrade Tomcat to 6.0.48**
+
+Tomcat 6.0.46 starts to filter weak ciphers, so some old SSL clients may be affected. It is recommended to upgrade the SSL client. Run the SSL client against https://www.howsmyssl.com/a/check to find out its TLS version and cipher suites.
+
+
+---
+
+* [HDFS-5517](https://issues.apache.org/jira/browse/HDFS-5517) | *Major* | 
**Lower the default maximum number of blocks per file**
+
+The default value of "dfs.namenode.fs-limits.max-blocks-per-file" has been 
reduced from 1M to 10K.
+
+
+---
+
+* [HADOOP-13827](https://issues.apache.org/jira/browse/HADOOP-13827) | *Major* 
| **Add reencryptEncryptedKey interface to KMS**
+
+A reencryptEncryptedKey interface is added to the KMS, to re-encrypt an encrypted key with the latest version of the encryption key.
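+
+On the client side the operation surfaces through KeyProviderCryptoExtension; a hedged sketch, assuming a key named "mykey" already exists and using a placeholder KMS address:
+
+```java
+import java.net.URI;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.crypto.key.KeyProvider;
+import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
+import org.apache.hadoop.crypto.key.KeyProviderFactory;
+
+public class ReencryptEek {
+  public static void main(String[] args) throws Exception {
+    Configuration conf = new Configuration();
+    KeyProvider provider = KeyProviderFactory.get(
+        URI.create("kms://http@kms.example.com:9600/kms"), conf);
+    KeyProviderCryptoExtension kp =
+        KeyProviderCryptoExtension.createKeyProviderCryptoExtension(provider);
+    // Generate an EEK for "mykey", then re-encrypt it with the key's
+    // latest version (useful after the key has been rolled).
+    KeyProviderCryptoExtension.EncryptedKeyVersion eek =
+        kp.generateEncryptedKey("mykey");
+    KeyProviderCryptoExtension.EncryptedKeyVersion fresh =
+        kp.reencryptEncryptedKey(eek);
+    System.out.println(fresh.getEncryptionKeyVersionName());
+  }
+}
+```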
+
+
+---
+
+* [HADOOP-13842](https://issues.apache.org/jira/browse/HADOOP-13842) | *Minor* 
| **Update jackson from 1.9.13 to 2.x in hadoop-maven-plugins**
+
+Jackson 1.9.13 dependency was removed from hadoop-maven-plugins module.
+
+
+---
+
+* [MAPREDUCE-4683](https://issues.apache.org/jira/browse/MAPREDUCE-4683) | 
*Critical* | **Create and distribute hadoop-mapreduce-client-core-tests.jar**
+
+The hadoop-mapreduce-client-core module now creates and distributes a test jar.
+
+
+---
+
+* [HDFS-11217](https://issues.apache.org/jira/browse/HDFS-11217) | *Major* | 
**Annotate NameNode and DataNode MXBean interfaces as Private/Stable**
+
+The DataNode and NameNode MXBean interfaces have been marked as Private and 
Stable to indicate that although users should not be implementing these 
interfaces directly, the information exposed by these interfaces is part of the 
HDFS public API.
+
+
+---
+
+* [HDFS-11229](https://issues.apache.org/jira/browse/HDFS-11229) | *Blocker* | 
**HDFS-11056 failed to close meta file**
+
+The fix for HDFS-11056 reads the meta file to load the last partial chunk checksum when a block is converted from finalized/temporary to rbw. However, it did not close the file explicitly, which could cause the number of open files to reach the system limit. This jira fixes it by closing the file explicitly after the meta file is read.
+
+
+---
+
+* [HADOOP-11804](https://issues.apache.org/jira/browse/HADOOP-11804) | *Major* 
| **Shaded Hadoop client artifacts and minicluster**
+
+<!-- markdown -->
+
+The `hadoop-client` Maven artifact available in 2.x releases pulls
+Hadoop's transitive dependencies onto a Hadoop application's classpath.
+This can be problematic if the versions of these transitive dependencies
+conflict with the versions used by the application.
+
+[HADOOP-11804](https://issues.apache.org/jira/browse/HADOOP-11804) adds
+new `hadoop-client-api` and `hadoop-client-runtime` artifacts that
+shade Hadoop's dependencies into a single jar. This avoids leaking
+Hadoop's dependencies onto the application's classpath.
+
+
+---
+
+* [HDFS-11160](https://issues.apache.org/jira/browse/HDFS-11160) | *Major* | 
**VolumeScanner reports write-in-progress replicas as corrupt incorrectly**
+
+Fixed a race condition that caused VolumeScanner to report a good replica as corrupt if the replica was concurrently being written.
+
+
+---
+
+* [HADOOP-13597](https://issues.apache.org/jira/browse/HADOOP-13597) | *Major* 
| **Switch KMS from Tomcat to Jetty**
+
+The following environment variables are deprecated. Set the corresponding
+configuration properties instead.
+
+Environment Variable     \| Configuration Property       \| Configuration File
+-------------------------\|------------------------------\|--------------------
+KMS\_HTTP\_PORT            \| hadoop.kms.http.port         \| kms-site.xml
+KMS\_MAX\_HTTP\_HEADER\_SIZE \| hadoop.http.max.request.header.size and 
hadoop.http.max.response.header.size \| kms-site.xml
+KMS\_MAX\_THREADS          \| hadoop.http.max.threads      \| kms-site.xml
+KMS\_SSL\_ENABLED          \| hadoop.kms.ssl.enabled       \| kms-site.xml
+KMS\_SSL\_KEYSTORE\_FILE    \| ssl.server.keystore.location \| ssl-server.xml
+KMS\_SSL\_KEYSTORE\_PASS    \| ssl.server.keystore.password \| ssl-server.xml
+KMS\_TEMP                 \| hadoop.http.temp.dir         \| kms-site.xml
+
+These default HTTP Services have been added.
+
+Name               \| Description
+-------------------\|------------------------------------
+/conf              \| Display configuration properties
+/jmx               \| Java JMX management interface
+/logLevel          \| Get or set log level per class
+/logs              \| Display log files
+/stacks            \| Display JVM stacks
+/static/index.html \| The static home page
+
+The kms.sh script has been deprecated; use 'hadoop kms' instead. The KMS now conforms to the Hadoop shell scripting framework, supports 'hadoop daemonlog', and reads SSL configurations from ssl-server.xml, like many other Hadoop components.
+
+
+---
+
+* [HADOOP-13953](https://issues.apache.org/jira/browse/HADOOP-13953) | *Major* 
| **Make FTPFileSystem's data connection mode and transfer mode configurable**
+
+Added two configuration keys, fs.ftp.data.connection.mode and fs.ftp.transfer.mode, which configure the FTP data connection mode and transfer mode accordingly.
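+
+A sketch of a client using the new keys; the host is a placeholder, and the mode values, mirroring Apache Commons Net constant names, are an assumption:
+
+```java
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+public class FtpModes {
+  public static void main(String[] args) throws Exception {
+    Configuration conf = new Configuration();
+    // Assumed values, following org.apache.commons.net.ftp naming.
+    conf.set("fs.ftp.data.connection.mode",
+        "PASSIVE_LOCAL_DATA_CONNECTION_MODE");
+    conf.set("fs.ftp.transfer.mode", "BLOCK_TRANSFER_MODE");
+    FileSystem fs = new Path("ftp://ftp.example.com/").getFileSystem(conf);
+    for (FileStatus st : fs.listStatus(new Path("/"))) {
+      System.out.println(st.getPath());
+    }
+  }
+}
+```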
+
+
+---
+
+* [YARN-6071](https://issues.apache.org/jira/browse/YARN-6071) | *Blocker* | 
**Fix incompatible API change on AM-RM protocol due to YARN-3866 (trunk only)**
+
+**WARNING: No release note provided for this change.**
+
+
+---
+
+* [HADOOP-13673](https://issues.apache.org/jira/browse/HADOOP-13673) | *Major* 
| **Update scripts to be smarter when running with privilege**
+
+Apache Hadoop is now able to switch to the appropriate user prior to launching commands, so long as the command is being run with a privileged user and the appropriate set of \_USER variables is defined.  This re-enables sbin/start-all.sh and sbin/stop-all.sh, and fixes sbin/start-dfs.sh and sbin/stop-dfs.sh to work with both secure and insecure systems.
+
+
+---
+
+* [HADOOP-13964](https://issues.apache.org/jira/browse/HADOOP-13964) | *Major* 
| **Remove vestigal templates directories creation**
+
+This patch removes share/hadoop/{hadoop,hdfs,mapred,yarn}/templates 
directories and contents.
+
+
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ff02bdfe/hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/Apache_Hadoop_HDFS_3.0.0-alpha2.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/Apache_Hadoop_HDFS_3.0.0-alpha2.xml b/hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/Apache_Hadoop_HDFS_3.0.0-alpha2.xml
new file mode 100644
index 0000000..21509d5
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/Apache_Hadoop_HDFS_3.0.0-alpha2.xml
@@ -0,0 +1,326 @@
+<?xml version="1.0" encoding="iso-8859-1" standalone="no"?>
+<!-- Generated by the JDiff Javadoc doclet -->
+<!-- (http://www.jdiff.org) -->
+<!-- on Fri Jan 20 19:12:45 UTC 2017 -->
+
+<api
+  xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
+  xsi:noNamespaceSchemaLocation='api.xsd'
+  name="Apache Hadoop HDFS 3.0.0-alpha2"
+  jdversion="1.0.9">
+
+<!--  Command line arguments =  -doclet 
org.apache.hadoop.classification.tools.IncludePublicAnnotationsJDiffDoclet 
-docletpath 
/build/source/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-annotations.jar:/build/source/hadoop-hdfs-project/hadoop-hdfs/target/jdiff.jar
 -verbose -classpath 
/build/source/hadoop-hdfs-project/hadoop-hdfs/target/classes:/build/source/hadoop-common-project/hadoop-annotations/target/hadoop-annotations-3.0.0-alpha2.jar:/usr/lib/jvm/java-8-oracle/lib/tools.jar:/build/source/hadoop-common-project/hadoop-auth/target/hadoop-auth-3.0.0-alpha2.jar:/maven/org/slf4j/slf4j-api/1.7.10/slf4j-api-1.7.10.jar:/maven/org/apache/httpcomponents/httpclient/4.5.2/httpclient-4.5.2.jar:/maven/org/apache/httpcomponents/httpcore/4.4.4/httpcore-4.4.4.jar:/maven/com/nimbusds/nimbus-jose-jwt/3.9/nimbus-jose-jwt-3.9.jar:/maven/net/jcip/jcip-annotations/1.0/jcip-annotations-1.0.jar:/maven/net/minidev/json-smart/1.1.1/json-smart-1.1.1.jar:/maven/org/apache/zookeeper/zookeeper/3.4.6/zookeep
 
er-3.4.6.jar:/maven/jline/jline/0.9.94/jline-0.9.94.jar:/maven/org/apache/curator/curator-framework/2.7.1/curator-framework-2.7.1.jar:/maven/org/apache/kerby/kerb-simplekdc/1.0.0-RC2/kerb-simplekdc-1.0.0-RC2.jar:/maven/org/apache/kerby/kerby-config/1.0.0-RC2/kerby-config-1.0.0-RC2.jar:/maven/org/apache/kerby/kerb-core/1.0.0-RC2/kerb-core-1.0.0-RC2.jar:/maven/org/apache/kerby/kerby-asn1/1.0.0-RC2/kerby-asn1-1.0.0-RC2.jar:/maven/org/apache/kerby/kerby-pkix/1.0.0-RC2/kerby-pkix-1.0.0-RC2.jar:/maven/org/apache/kerby/kerby-util/1.0.0-RC2/kerby-util-1.0.0-RC2.jar:/maven/org/apache/kerby/kerb-client/1.0.0-RC2/kerb-client-1.0.0-RC2.jar:/maven/org/apache/kerby/kerb-common/1.0.0-RC2/kerb-common-1.0.0-RC2.jar:/maven/org/apache/kerby/kerb-util/1.0.0-RC2/kerb-util-1.0.0-RC2.jar:/maven/org/apache/kerby/kerb-crypto/1.0.0-RC2/kerb-crypto-1.0.0-RC2.jar:/maven/org/apache/kerby/kerb-server/1.0.0-RC2/kerb-server-1.0.0-RC2.jar:/maven/org/apache/kerby/kerb-identity/1.0.0-RC2/kerb-identity-1.0.0-RC2.jar:/
 
maven/org/apache/kerby/kerb-admin/1.0.0-RC2/kerb-admin-1.0.0-RC2.jar:/build/source/hadoop-common-project/hadoop-common/target/hadoop-common-3.0.0-alpha2.jar:/maven/org/apache/commons/commons-math3/3.1.1/commons-math3-3.1.1.jar:/maven/commons-net/commons-net/3.1/commons-net-3.1.jar:/maven/commons-collections/commons-collections/3.2.2/commons-collections-3.2.2.jar:/maven/org/eclipse/jetty/jetty-servlet/9.3.11.v20160721/jetty-servlet-9.3.11.v20160721.jar:/maven/org/eclipse/jetty/jetty-security/9.3.11.v20160721/jetty-security-9.3.11.v20160721.jar:/maven/org/eclipse/jetty/jetty-webapp/9.3.11.v20160721/jetty-webapp-9.3.11.v20160721.jar:/maven/org/eclipse/jetty/jetty-xml/9.3.11.v20160721/jetty-xml-9.3.11.v20160721.jar:/maven/javax/servlet/jsp/jsp-api/2.1/jsp-api-2.1.jar:/maven/com/sun/jersey/jersey-servlet/1.19/jersey-servlet-1.19.jar:/maven/com/sun/jersey/jersey-json/1.19/jersey-json-1.19.jar:/maven/org/codehaus/jettison/jettison/1.1/jettison-1.1.jar:/maven/com/sun/xml/bind/jaxb-impl/2.2.
 
3-1/jaxb-impl-2.2.3-1.jar:/maven/javax/xml/bind/jaxb-api/2.2.11/jaxb-api-2.2.11.jar:/maven/org/codehaus/jackson/jackson-core-asl/1.9.13/jackson-core-asl-1.9.13.jar:/maven/org/codehaus/jackson/jackson-mapper-asl/1.9.13/jackson-mapper-asl-1.9.13.jar:/maven/org/codehaus/jackson/jackson-jaxrs/1.9.13/jackson-jaxrs-1.9.13.jar:/maven/org/codehaus/jackson/jackson-xc/1.9.13/jackson-xc-1.9.13.jar:/maven/net/java/dev/jets3t/jets3t/0.9.0/jets3t-0.9.0.jar:/maven/com/jamesmurty/utils/java-xmlbuilder/0.4/java-xmlbuilder-0.4.jar:/maven/commons-beanutils/commons-beanutils/1.9.3/commons-beanutils-1.9.3.jar:/maven/org/apache/commons/commons-configuration2/2.1/commons-configuration2-2.1.jar:/maven/org/apache/commons/commons-lang3/3.3.2/commons-lang3-3.3.2.jar:/maven/org/apache/avro/avro/1.7.4/avro-1.7.4.jar:/maven/com/thoughtworks/paranamer/paranamer/2.3/paranamer-2.3.jar:/maven/org/xerial/snappy/snappy-java/1.0.4.1/snappy-java-1.0.4.1.jar:/maven/com/google/re2j/re2j/1.0/re2j-1.0.jar:/maven/com/google/
 
code/gson/gson/2.2.4/gson-2.2.4.jar:/maven/com/jcraft/jsch/0.1.51/jsch-0.1.51.jar:/maven/org/apache/curator/curator-client/2.7.1/curator-client-2.7.1.jar:/maven/org/apache/curator/curator-recipes/2.7.1/curator-recipes-2.7.1.jar:/maven/com/google/code/findbugs/jsr305/3.0.0/jsr305-3.0.0.jar:/maven/org/apache/commons/commons-compress/1.4.1/commons-compress-1.4.1.jar:/maven/org/tukaani/xz/1.0/xz-1.0.jar:/build/source/hadoop-hdfs-project/hadoop-hdfs-client/target/hadoop-hdfs-client-3.0.0-alpha2.jar:/maven/com/squareup/okhttp/okhttp/2.4.0/okhttp-2.4.0.jar:/maven/com/squareup/okio/okio/1.4.0/okio-1.4.0.jar:/maven/com/fasterxml/jackson/core/jackson-annotations/2.7.8/jackson-annotations-2.7.8.jar:/maven/com/google/guava/guava/11.0.2/guava-11.0.2.jar:/maven/org/eclipse/jetty/jetty-server/9.3.11.v20160721/jetty-server-9.3.11.v20160721.jar:/maven/org/eclipse/jetty/jetty-http/9.3.11.v20160721/jetty-http-9.3.11.v20160721.jar:/maven/org/eclipse/jetty/jetty-io/9.3.11.v20160721/jetty-io-9.3.11.v2016
 
0721.jar:/maven/org/eclipse/jetty/jetty-util/9.3.11.v20160721/jetty-util-9.3.11.v20160721.jar:/maven/org/eclipse/jetty/jetty-util-ajax/9.3.11.v20160721/jetty-util-ajax-9.3.11.v20160721.jar:/maven/com/sun/jersey/jersey-core/1.19/jersey-core-1.19.jar:/maven/javax/ws/rs/jsr311-api/1.1.1/jsr311-api-1.1.1.jar:/maven/com/sun/jersey/jersey-server/1.19/jersey-server-1.19.jar:/maven/commons-cli/commons-cli/1.2/commons-cli-1.2.jar:/maven/commons-codec/commons-codec/1.4/commons-codec-1.4.jar:/maven/commons-io/commons-io/2.4/commons-io-2.4.jar:/maven/commons-lang/commons-lang/2.6/commons-lang-2.6.jar:/maven/commons-logging/commons-logging/1.1.3/commons-logging-1.1.3.jar:/maven/commons-daemon/commons-daemon/1.0.13/commons-daemon-1.0.13.jar:/maven/log4j/log4j/1.2.17/log4j-1.2.17.jar:/maven/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar:/maven/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar:/maven/org/slf4j/slf4j-log4j12/1.7.10/slf4j-log4j12-1.7.10.jar:/maven/xml
 
enc/xmlenc/0.52/xmlenc-0.52.jar:/maven/io/netty/netty/3.10.5.Final/netty-3.10.5.Final.jar:/maven/io/netty/netty-all/4.1.0.Beta5/netty-all-4.1.0.Beta5.jar:/maven/com/twitter/hpack/0.11.0/hpack-0.11.0.jar:/maven/xerces/xercesImpl/2.9.1/xercesImpl-2.9.1.jar:/maven/xml-apis/xml-apis/1.3.04/xml-apis-1.3.04.jar:/maven/org/apache/htrace/htrace-core4/4.1.0-incubating/htrace-core4-4.1.0-incubating.jar:/maven/org/fusesource/leveldbjni/leveldbjni-all/1.8/leveldbjni-all-1.8.jar:/maven/com/fasterxml/jackson/core/jackson-databind/2.7.8/jackson-databind-2.7.8.jar:/maven/com/fasterxml/jackson/core/jackson-core/2.7.8/jackson-core-2.7.8.jar
 -sourcepath /build/source/hadoop-hdfs-project/hadoop-hdfs/src/main/java 
-doclet 
org.apache.hadoop.classification.tools.IncludePublicAnnotationsJDiffDoclet 
-docletpath 
/build/source/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-annotations.jar:/build/source/hadoop-hdfs-project/hadoop-hdfs/target/jdiff.jar
 -apidir /build/source/hadoop-hdfs-project/hadoop-hdfs/target
 /site/jdiff/xml -apiname Apache Hadoop HDFS 3.0.0-alpha2 -->
+<package name="org.apache.hadoop.hdfs">
+  <doc>
+  <![CDATA[<p>A distributed implementation of {@link
+org.apache.hadoop.fs.FileSystem}.  This is loosely modelled after
+Google's <a href="http://research.google.com/archive/gfs.html">GFS</a>.</p>
+
+<p>The most important difference is that unlike GFS, Hadoop DFS files 
+have strictly one writer at any one time.  Bytes are always appended 
+to the end of the writer's stream.  There is no notion of "record appends"
+or "mutations" that are then checked or reordered.  Writers simply emit 
+a byte stream.  That byte stream is guaranteed to be stored in the 
+order written.</p>]]>
+  </doc>
+</package>
+<package name="org.apache.hadoop.hdfs.net">
+</package>
+<package name="org.apache.hadoop.hdfs.protocol">
+</package>
+<package name="org.apache.hadoop.hdfs.protocol.datatransfer">
+</package>
+<package name="org.apache.hadoop.hdfs.protocol.datatransfer.sasl">
+</package>
+<package name="org.apache.hadoop.hdfs.protocolPB">
+</package>
+<package name="org.apache.hadoop.hdfs.qjournal.client">
+</package>
+<package name="org.apache.hadoop.hdfs.qjournal.protocol">
+</package>
+<package name="org.apache.hadoop.hdfs.qjournal.protocolPB">
+</package>
+<package name="org.apache.hadoop.hdfs.qjournal.server">
+  <!-- start interface 
org.apache.hadoop.hdfs.qjournal.server.JournalNodeMXBean -->
+  <interface name="JournalNodeMXBean"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <method name="getJournalsStatus" return="java.lang.String"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get status information (e.g., whether formatted) of 
JournalNode's journals.
+ 
+ @return A string presenting status for each journal]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[This is the JMX management interface for JournalNode 
information]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.hdfs.qjournal.server.JournalNodeMXBean 
-->
+</package>
+<package name="org.apache.hadoop.hdfs.security.token.block">
+</package>
+<package name="org.apache.hadoop.hdfs.security.token.delegation">
+</package>
+<package name="org.apache.hadoop.hdfs.server.balancer">
+</package>
+<package name="org.apache.hadoop.hdfs.server.blockmanagement">
+</package>
+<package name="org.apache.hadoop.hdfs.server.common">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode.fsdataset">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode.fsdataset.impl">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode.metrics">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode.web">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode.web.dtp">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode.web.webhdfs">
+</package>
+<package name="org.apache.hadoop.hdfs.server.diskbalancer">
+</package>
+<package name="org.apache.hadoop.hdfs.server.diskbalancer.command">
+</package>
+<package name="org.apache.hadoop.hdfs.server.diskbalancer.connectors">
+</package>
+<package name="org.apache.hadoop.hdfs.server.diskbalancer.datamodel">
+</package>
+<package name="org.apache.hadoop.hdfs.server.diskbalancer.planner">
+</package>
+<package name="org.apache.hadoop.hdfs.server.mover">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode">
+  <!-- start interface org.apache.hadoop.hdfs.server.namenode.AuditLogger -->
+  <interface name="AuditLogger"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <method name="initialize"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <doc>
+      <![CDATA[Called during initialization of the logger.
+
+ @param conf The configuration object.]]>
+      </doc>
+    </method>
+    <method name="logAuditEvent"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="succeeded" type="boolean"/>
+      <param name="userName" type="java.lang.String"/>
+      <param name="addr" type="java.net.InetAddress"/>
+      <param name="cmd" type="java.lang.String"/>
+      <param name="src" type="java.lang.String"/>
+      <param name="dst" type="java.lang.String"/>
+      <param name="stat" type="org.apache.hadoop.fs.FileStatus"/>
+      <doc>
+      <![CDATA[Called to log an audit event.
+ <p>
+ This method must return as quickly as possible, since it's called
+ in a critical section of the NameNode's operation.
+
+ @param succeeded Whether authorization succeeded.
+ @param userName Name of the user executing the request.
+ @param addr Remote address of the request.
+ @param cmd The requested command.
+ @param src Path of affected source file.
+ @param dst Path of affected destination file (if any).
+ @param stat File information for operations that change the file's
+             metadata (permissions, owner, times, etc).]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[Interface defining an audit logger.]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.hdfs.server.namenode.AuditLogger -->
+  <!-- start class org.apache.hadoop.hdfs.server.namenode.HdfsAuditLogger -->
+  <class name="HdfsAuditLogger" extends="java.lang.Object"
+    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.hdfs.server.namenode.AuditLogger"/>
+    <constructor name="HdfsAuditLogger"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="logAuditEvent"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="succeeded" type="boolean"/>
+      <param name="userName" type="java.lang.String"/>
+      <param name="addr" type="java.net.InetAddress"/>
+      <param name="cmd" type="java.lang.String"/>
+      <param name="src" type="java.lang.String"/>
+      <param name="dst" type="java.lang.String"/>
+      <param name="status" type="org.apache.hadoop.fs.FileStatus"/>
+    </method>
+    <method name="logAuditEvent"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="succeeded" type="boolean"/>
+      <param name="userName" type="java.lang.String"/>
+      <param name="addr" type="java.net.InetAddress"/>
+      <param name="cmd" type="java.lang.String"/>
+      <param name="src" type="java.lang.String"/>
+      <param name="dst" type="java.lang.String"/>
+      <param name="stat" type="org.apache.hadoop.fs.FileStatus"/>
+      <param name="callerContext" type="org.apache.hadoop.ipc.CallerContext"/>
+      <param name="ugi" 
type="org.apache.hadoop.security.UserGroupInformation"/>
+      <param name="dtSecretManager" 
type="org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSecretManager"/>
+      <doc>
+      <![CDATA[Same as
+ {@link #logAuditEvent(boolean, String, InetAddress, String, String, String,
+ FileStatus)} with additional parameters related to logging delegation token
+ tracking IDs.
+ 
+ @param succeeded Whether authorization succeeded.
+ @param userName Name of the user executing the request.
+ @param addr Remote address of the request.
+ @param cmd The requested command.
+ @param src Path of affected source file.
+ @param dst Path of affected destination file (if any).
+ @param stat File information for operations that change the file's metadata
+          (permissions, owner, times, etc).
+ @param callerContext Context information of the caller
+ @param ugi UserGroupInformation of the current user, or null if not logging
+          token tracking information
+ @param dtSecretManager The token secret manager, or null if not logging
+          token tracking information]]>
+      </doc>
+    </method>
+    <method name="logAuditEvent"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="succeeded" type="boolean"/>
+      <param name="userName" type="java.lang.String"/>
+      <param name="addr" type="java.net.InetAddress"/>
+      <param name="cmd" type="java.lang.String"/>
+      <param name="src" type="java.lang.String"/>
+      <param name="dst" type="java.lang.String"/>
+      <param name="stat" type="org.apache.hadoop.fs.FileStatus"/>
+      <param name="ugi" 
type="org.apache.hadoop.security.UserGroupInformation"/>
+      <param name="dtSecretManager" 
type="org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSecretManager"/>
+      <doc>
+      <![CDATA[Same as
+ {@link #logAuditEvent(boolean, String, InetAddress, String, String,
+ String, FileStatus, CallerContext, UserGroupInformation,
+ DelegationTokenSecretManager)} without {@link CallerContext} information.]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[Extension of {@link AuditLogger}.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.hdfs.server.namenode.HdfsAuditLogger -->
+  <!-- start class 
org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider -->
+  <class name="INodeAttributeProvider" extends="java.lang.Object"
+    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="INodeAttributeProvider"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="start"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Initialize the provider. This method is called at NameNode 
startup
+ time.]]>
+      </doc>
+    </method>
+    <method name="stop"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Shutdown the provider. This method is called at NameNode 
shutdown time.]]>
+      </doc>
+    </method>
+    <method name="getAttributes" 
return="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="fullPath" type="java.lang.String"/>
+      <param name="inode" 
type="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"/>
+    </method>
+    <method name="getAttributes" 
return="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="pathElements" type="java.lang.String[]"/>
+      <param name="inode" 
type="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"/>
+    </method>
+    <method name="getAttributes" 
return="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="components" type="byte[][]"/>
+      <param name="inode" 
type="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"/>
+    </method>
+    <method name="getExternalAccessControlEnforcer" 
return="org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider.AccessControlEnforcer"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="defaultEnforcer" 
type="org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider.AccessControlEnforcer"/>
+      <doc>
+      <![CDATA[Can be over-ridden by implementations to provide a custom 
Access Control
+ Enforcer that can provide an alternate implementation of the
+ default permission checking logic.
+ @param defaultEnforcer The Default AccessControlEnforcer
+ @return The AccessControlEnforcer to use]]>
+      </doc>
+    </method>
+  </class>
+  <!-- end class org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider 
-->
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.ha">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.metrics">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.snapshot">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.top">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.top.metrics">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.top.window">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.web.resources">
+</package>
+<package name="org.apache.hadoop.hdfs.server.protocol">
+</package>
+<package name="org.apache.hadoop.hdfs.tools">
+</package>
+<package name="org.apache.hadoop.hdfs.tools.erasurecode">
+</package>
+<package name="org.apache.hadoop.hdfs.tools.offlineEditsViewer">
+</package>
+<package name="org.apache.hadoop.hdfs.tools.offlineImageViewer">
+</package>
+<package name="org.apache.hadoop.hdfs.tools.snapshot">
+</package>
+<package name="org.apache.hadoop.hdfs.util">
+</package>
+<package name="org.apache.hadoop.hdfs.web">
+</package>
+<package name="org.apache.hadoop.hdfs.web.resources">
+</package>
+
+</api>

