Repository: hadoop
Updated Branches:
  refs/heads/branch-3.2 34387599c -> f6227367f


Add 2.9.2 release notes and changes documents.

(cherry picked from commit 1a00b4e325146988375c9ce5b11016c45f059a4e)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f6227367
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f6227367
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f6227367

Branch: refs/heads/branch-3.2
Commit: f6227367fbb3b6c034446f884ad3ecab672a73e1
Parents: 3438759
Author: Akira Ajisaka <aajis...@apache.org>
Authored: Tue Nov 20 12:55:40 2018 +0900
Committer: Akira Ajisaka <aajis...@apache.org>
Committed: Tue Nov 20 14:25:48 2018 +0900

----------------------------------------------------------------------
 .../markdown/release/2.9.2/CHANGELOG.2.9.2.md   |  80 ++++-
 .../release/2.9.2/RELEASENOTES.2.9.2.md         |  12 +-
 .../jdiff/Apache_Hadoop_HDFS_2.9.2.xml          | 312 +++++++++++++++++++
 3 files changed, 399 insertions(+), 5 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f6227367/hadoop-common-project/hadoop-common/src/site/markdown/release/2.9.2/CHANGELOG.2.9.2.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.9.2/CHANGELOG.2.9.2.md b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.9.2/CHANGELOG.2.9.2.md
index e0a167d..cf5e59b 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.9.2/CHANGELOG.2.9.2.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.9.2/CHANGELOG.2.9.2.md
@@ -16,10 +16,20 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 -->
-# Apache Hadoop Changelog
+# "Apache Hadoop" Changelog
 
-## Release 2.9.2 - Unreleased (as of 2018-09-02)
+## Release 2.9.2 - 2018-11-19
 
+### INCOMPATIBLE CHANGES:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+
+
+### IMPORTANT ISSUES:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
 
 
 ### NEW FEATURES:
@@ -34,23 +44,28 @@
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |:---- |:---- | :--- |:---- |:---- |:---- |
 | [HADOOP-14987](https://issues.apache.org/jira/browse/HADOOP-14987) | Improve KMSClientProvider log around delegation token checking |  Major | . | Xiaoyu Yao | Xiaoyu Yao |
+| [YARN-7274](https://issues.apache.org/jira/browse/YARN-7274) | Ability to disable elasticity at leaf queue level |  Major | capacityscheduler | Scott Brokaw | Zian Chen |
 | [HADOOP-15394](https://issues.apache.org/jira/browse/HADOOP-15394) | Backport PowerShell NodeFencer HADOOP-14309 to branch-2 |  Minor | . | Íñigo Goiri | Íñigo Goiri |
 | [HDFS-13462](https://issues.apache.org/jira/browse/HDFS-13462) | Add BIND\_HOST configuration for JournalNode's HTTP and RPC Servers |  Major | hdfs, journal-node | Lukas Majercak | Lukas Majercak |
 | [HADOOP-14841](https://issues.apache.org/jira/browse/HADOOP-14841) | Kms client should disconnect if unable to get output stream from connection. |  Major | kms | Xiao Chen | Rushabh S Shah |
 | [HDFS-13272](https://issues.apache.org/jira/browse/HDFS-13272) | DataNodeHttpServer to have configurable HttpServer2 threads |  Major | datanode | Erik Krogen | Erik Krogen |
 | [HADOOP-15441](https://issues.apache.org/jira/browse/HADOOP-15441) | Log kms url and token service at debug level. |  Minor | . | Wei-Chiu Chuang | Gabor Bota |
-| [HDFS-13544](https://issues.apache.org/jira/browse/HDFS-13544) | Improve logging for JournalNode in federated cluster |  Major | federation, hdfs | Hanisha Koneru | Hanisha Koneru |
 | [HADOOP-15486](https://issues.apache.org/jira/browse/HADOOP-15486) | Make NetworkTopology#netLock fair |  Major | net | Nanda kumar | Nanda kumar |
 | [HADOOP-15449](https://issues.apache.org/jira/browse/HADOOP-15449) | Increase default timeout of ZK session to avoid frequent NameNode failover |  Critical | common | Karthik Palanisamy | Karthik Palanisamy |
 | [HDFS-13602](https://issues.apache.org/jira/browse/HDFS-13602) | Add checkOperation(WRITE) checks in FSNamesystem |  Major | ha, namenode | Erik Krogen | Chao Sun |
 | [HDFS-13653](https://issues.apache.org/jira/browse/HDFS-13653) | Make dfs.client.failover.random.order a per nameservice configuration |  Major | federation | Ekanth Sethuramalingam | Ekanth Sethuramalingam |
+| [HDFS-13686](https://issues.apache.org/jira/browse/HDFS-13686) | Add overall metrics for FSNamesystemLock |  Major | hdfs, namenode | Lukas Majercak | Lukas Majercak |
 | [HDFS-13714](https://issues.apache.org/jira/browse/HDFS-13714) | Fix TestNameNodePrunesMissingStorages test failures on Windows |  Major | hdfs, namenode, test | Lukas Majercak | Lukas Majercak |
+| [HDFS-13719](https://issues.apache.org/jira/browse/HDFS-13719) | Docs around dfs.image.transfer.timeout are misleading |  Major | documentation | Kitti Nanasi | Kitti Nanasi |
 | [HDFS-11060](https://issues.apache.org/jira/browse/HDFS-11060) | make DEFAULT\_MAX\_CORRUPT\_FILEBLOCKS\_RETURNED configurable |  Minor | hdfs | Lantao Jin | Lantao Jin |
 | [HDFS-13813](https://issues.apache.org/jira/browse/HDFS-13813) | Exit NameNode if dangling child inode is detected when saving FsImage |  Major | hdfs, namenode | Siyao Meng | Siyao Meng |
 | [HDFS-13821](https://issues.apache.org/jira/browse/HDFS-13821) | RBF: Add dfs.federation.router.mount-table.cache.enable so that users can disable cache |  Major | hdfs | Fei Hui | Fei Hui |
 | [HADOOP-15689](https://issues.apache.org/jira/browse/HADOOP-15689) | Add "\*.patch" into .gitignore file of branch-2 |  Major | . | Rui Gao | Rui Gao |
 | [HDFS-13854](https://issues.apache.org/jira/browse/HDFS-13854) | RBF: The ProcessingAvgTime and ProxyAvgTime should display by JMX with ms unit. |  Major | federation, hdfs | yanghuafeng | yanghuafeng |
 | [YARN-8051](https://issues.apache.org/jira/browse/YARN-8051) | TestRMEmbeddedElector#testCallbackSynchronization is flakey |  Major | test | Robert Kanter | Robert Kanter |
+| [HDFS-13857](https://issues.apache.org/jira/browse/HDFS-13857) | RBF: Choose to enable the default nameservice to read/write files |  Major | federation, hdfs | yanghuafeng | yanghuafeng |
+| [HDFS-13812](https://issues.apache.org/jira/browse/HDFS-13812) | Fix the inconsistent default refresh interval on Caching documentation |  Trivial | documentation | BELUGA BEHR | Hrishikesh Gadre |
+| [HDFS-13902](https://issues.apache.org/jira/browse/HDFS-13902) |  Add JMX, conf and stacks menus to the datanode page |  Minor | datanode | fengchuang | fengchuang |
 
 
 ### BUG FIXES:
@@ -58,6 +73,8 @@
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |:---- |:---- | :--- |:---- |:---- |:---- |
 | [HADOOP-15121](https://issues.apache.org/jira/browse/HADOOP-15121) | Encounter NullPointerException when using DecayRpcScheduler |  Major | . | Tao Jie | Tao Jie |
+| [YARN-7765](https://issues.apache.org/jira/browse/YARN-7765) | [Atsv2] GSSException: No valid credentials provided - Failed to find any Kerberos tgt thrown by Timelinev2Client & HBaseClient in NM |  Blocker | . | Sumana Sathish | Rohith Sharma K S |
+| [MAPREDUCE-7027](https://issues.apache.org/jira/browse/MAPREDUCE-7027) | HadoopArchiveLogs shouldn't delete the original logs if the HAR creation fails |  Critical | harchive | Gergely Novák | Gergely Novák |
 | [HDFS-10803](https://issues.apache.org/jira/browse/HDFS-10803) | TestBalancerWithMultipleNameNodes#testBalancing2OutOf3Blockpools fails intermittently due to no free space available |  Major | . | Yiqun Lin | Yiqun Lin |
 | [YARN-8068](https://issues.apache.org/jira/browse/YARN-8068) | Application Priority field causes NPE in app timeline publish when Hadoop 2.7 based clients to 2.8+ |  Blocker | yarn | Sunil Govindan | Sunil Govindan |
 | [HADOOP-15317](https://issues.apache.org/jira/browse/HADOOP-15317) | Improve NetworkTopology chooseRandom's loop |  Major | . | Xiao Chen | Xiao Chen |
@@ -77,8 +94,10 @@
 | [HDFS-13336](https://issues.apache.org/jira/browse/HDFS-13336) | Test cases of TestWriteToReplica failed in windows |  Major | . | Xiao Liang | Xiao Liang |
 | [HADOOP-15385](https://issues.apache.org/jira/browse/HADOOP-15385) | Many tests are failing in hadoop-distcp project in branch-2 |  Critical | tools/distcp | Rushabh S Shah | Jason Lowe |
 | [HDFS-13509](https://issues.apache.org/jira/browse/HDFS-13509) | Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, and fix TestFileAppend failures on Windows |  Major | . | Xiao Liang | Xiao Liang |
+| [MAPREDUCE-7073](https://issues.apache.org/jira/browse/MAPREDUCE-7073) | Optimize TokenCache#obtainTokensForNamenodesInternal |  Major | . | Bibin A Chundatt | Bibin A Chundatt |
 | [YARN-8232](https://issues.apache.org/jira/browse/YARN-8232) | RMContainer lost queue name when RM HA happens |  Major | resourcemanager | Hu Ziqian | Hu Ziqian |
 | [HDFS-13537](https://issues.apache.org/jira/browse/HDFS-13537) | TestHdfsHelper does not generate jceks path properly for relative path in Windows |  Major | . | Xiao Liang | Xiao Liang |
+| [HADOOP-15446](https://issues.apache.org/jira/browse/HADOOP-15446) | WASB: PageBlobInputStream.skip breaks HBASE replication |  Major | fs/azure | Thomas Marquardt | Thomas Marquardt |
 | [YARN-7003](https://issues.apache.org/jira/browse/YARN-7003) | DRAINING state of queues is not recovered after RM restart |  Major | capacityscheduler | Tao Yang | Tao Yang |
 | [YARN-8244](https://issues.apache.org/jira/browse/YARN-8244) |  TestContainerSchedulerQueuing.testStartMultipleContainers failed |  Major | . | Miklos Szegedi | Jim Brennan |
 | [HDFS-13581](https://issues.apache.org/jira/browse/HDFS-13581) | DN UI logs link is broken when https is enabled |  Minor | datanode | Namit Maheshwari | Shashikant Banerjee |
@@ -97,6 +116,7 @@
 | [HDFS-13667](https://issues.apache.org/jira/browse/HDFS-13667) | Typo: Marking all "datandoes" as stale |  Trivial | namenode | Wei-Chiu Chuang | Nanda kumar |
 | [YARN-8405](https://issues.apache.org/jira/browse/YARN-8405) | RM zk-state-store.parent-path ACLs has been changed since HADOOP-14773 |  Major | . | Rohith Sharma K S | Íñigo Goiri |
 | [MAPREDUCE-7108](https://issues.apache.org/jira/browse/MAPREDUCE-7108) | TestFileOutputCommitter fails on Windows |  Minor | test | Zuoming Zhang | Zuoming Zhang |
+| [YARN-8404](https://issues.apache.org/jira/browse/YARN-8404) | Timeline event publish need to be async to avoid Dispatcher thread leak in case ATS is down |  Blocker | . | Rohith Sharma K S | Rohith Sharma K S |
 | [HDFS-13675](https://issues.apache.org/jira/browse/HDFS-13675) | Speed up TestDFSAdminWithHA |  Major | hdfs, namenode | Lukas Majercak | Lukas Majercak |
 | [HDFS-13673](https://issues.apache.org/jira/browse/HDFS-13673) | TestNameNodeMetrics fails on Windows |  Minor | test | Zuoming Zhang | Zuoming Zhang |
 | [HDFS-13676](https://issues.apache.org/jira/browse/HDFS-13676) | TestEditLogRace fails on Windows |  Minor | test | Zuoming Zhang | Zuoming Zhang |
@@ -116,6 +136,7 @@
 | [YARN-8577](https://issues.apache.org/jira/browse/YARN-8577) | Fix the broken anchor in SLS site-doc |  Minor | documentation | Weiwei Yang | Weiwei Yang |
 | [YARN-4606](https://issues.apache.org/jira/browse/YARN-4606) | CapacityScheduler: applications could get starved because computation of #activeUsers considers pending apps |  Critical | capacity scheduler, capacityscheduler | Karam Singh | Manikandan R |
 | [HADOOP-15637](https://issues.apache.org/jira/browse/HADOOP-15637) | LocalFs#listLocatedStatus does not filter out hidden .crc files |  Minor | fs | Erik Krogen | Erik Krogen |
+| [HADOOP-15644](https://issues.apache.org/jira/browse/HADOOP-15644) | Hadoop Docker Image Pip Install Fails on branch-2 |  Critical | build | Haibo Chen | Haibo Chen |
 | [YARN-8331](https://issues.apache.org/jira/browse/YARN-8331) | Race condition in NM container launched after done |  Major | . | Yang Wang | Pradeep Ambati |
 | [HDFS-13758](https://issues.apache.org/jira/browse/HDFS-13758) | DatanodeManager should throw exception if it has BlockRecoveryCommand but the block is not under construction |  Major | namenode | Wei-Chiu Chuang | chencan |
 | [YARN-8612](https://issues.apache.org/jira/browse/YARN-8612) | Fix NM Collector Service Port issue in YarnConfiguration |  Major | ATSv2 | Prabha Manepalli | Prabha Manepalli |
@@ -123,6 +144,40 @@
 | [YARN-8640](https://issues.apache.org/jira/browse/YARN-8640) | Restore previous state in container-executor after failure |  Major | . | Jim Brennan | Jim Brennan |
 | [HADOOP-14314](https://issues.apache.org/jira/browse/HADOOP-14314) | The OpenSolaris taxonomy link is dead in InterfaceClassification.md |  Major | documentation | Daniel Templeton | Rui Gao |
 | [YARN-8649](https://issues.apache.org/jira/browse/YARN-8649) | NPE in localizer hearbeat processing if a container is killed while localizing |  Major | . | lujie | lujie |
+| [HADOOP-10219](https://issues.apache.org/jira/browse/HADOOP-10219) | ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient requested shutdowns |  Major | ipc | Steve Loughran | Kihwal Lee |
+| [MAPREDUCE-7131](https://issues.apache.org/jira/browse/MAPREDUCE-7131) | Job History Server has race condition where it moves files from intermediate to finished but thinks file is in intermediate |  Major | . | Anthony Hsu | Anthony Hsu |
+| [HDFS-13836](https://issues.apache.org/jira/browse/HDFS-13836) | RBF: Handle mount table znode with null value |  Major | federation, hdfs | yanghuafeng | yanghuafeng |
+| [YARN-8709](https://issues.apache.org/jira/browse/YARN-8709) | CS preemption monitor always fails since one under-served queue was deleted |  Major | capacityscheduler, scheduler preemption | Tao Yang | Tao Yang |
+| [HDFS-13051](https://issues.apache.org/jira/browse/HDFS-13051) | Fix dead lock during async editlog rolling if edit queue is full |  Major | namenode | zhangwei | Daryn Sharp |
+| [YARN-8729](https://issues.apache.org/jira/browse/YARN-8729) | Node status updater thread could be lost after it is restarted |  Critical | nodemanager | Tao Yang | Tao Yang |
+| [HDFS-13914](https://issues.apache.org/jira/browse/HDFS-13914) | Fix DN UI logs link broken when https is enabled after HDFS-13902 |  Minor | datanode | Jianfei Jiang | Jianfei Jiang |
+| [MAPREDUCE-7133](https://issues.apache.org/jira/browse/MAPREDUCE-7133) | History Server task attempts REST API returns invalid data |  Major | jobhistoryserver | Oleksandr Shevchenko | Oleksandr Shevchenko |
+| [YARN-8720](https://issues.apache.org/jira/browse/YARN-8720) | CapacityScheduler does not enforce max resource allocation check at queue level |  Major | capacity scheduler, capacityscheduler, resourcemanager | Tarun Parimi | Tarun Parimi |
+| [HDFS-13844](https://issues.apache.org/jira/browse/HDFS-13844) | Fix the fmt\_bytes function in the dfs-dust.js |  Minor | hdfs, ui | yanghuafeng | yanghuafeng |
+| [HADOOP-15755](https://issues.apache.org/jira/browse/HADOOP-15755) | StringUtils#createStartupShutdownMessage throws NPE when args is null |  Major | . | Lokesh Jain | Dinesh Chitlangia |
+| [MAPREDUCE-3801](https://issues.apache.org/jira/browse/MAPREDUCE-3801) | org.apache.hadoop.mapreduce.v2.app.TestRuntimeEstimators.testExponentialEstimator fails intermittently |  Major | mrv2 | Robert Joseph Evans | Jason Lowe |
+| [MAPREDUCE-7137](https://issues.apache.org/jira/browse/MAPREDUCE-7137) | MRAppBenchmark.benchmark1() fails with NullPointerException |  Minor | test | Oleksandr Shevchenko | Oleksandr Shevchenko |
+| [MAPREDUCE-7138](https://issues.apache.org/jira/browse/MAPREDUCE-7138) | ThrottledContainerAllocator in MRAppBenchmark should implement RMHeartbeatHandler |  Minor | test | Oleksandr Shevchenko | Oleksandr Shevchenko |
+| [HDFS-13908](https://issues.apache.org/jira/browse/HDFS-13908) | TestDataNodeMultipleRegistrations is flaky |  Major | . | Íñigo Goiri | Ayush Saxena |
+| [YARN-8804](https://issues.apache.org/jira/browse/YARN-8804) | resourceLimits may be wrongly calculated when leaf-queue is blocked in cluster with 3+ level queues |  Critical | capacityscheduler | Tao Yang | Tao Yang |
+| [YARN-8774](https://issues.apache.org/jira/browse/YARN-8774) | Memory leak when CapacityScheduler allocates from reserved container with non-default label |  Critical | capacityscheduler | Tao Yang | Tao Yang |
+| [HADOOP-15817](https://issues.apache.org/jira/browse/HADOOP-15817) | Reuse Object Mapper in KMSJSONReader |  Major | kms | Jonathan Eagles | Jonathan Eagles |
+| [HADOOP-15820](https://issues.apache.org/jira/browse/HADOOP-15820) | ZStandardDecompressor native code sets an integer field as a long |  Blocker | . | Jason Lowe | Jason Lowe |
+| [HDFS-13964](https://issues.apache.org/jira/browse/HDFS-13964) | RBF: TestRouterWebHDFSContractAppend fails with No Active Namenode under nameservice |  Major | . | Ayush Saxena | Ayush Saxena |
+| [HADOOP-15835](https://issues.apache.org/jira/browse/HADOOP-15835) | Reuse Object Mapper in KMSJSONWriter |  Major | . | Jonathan Eagles | Jonathan Eagles |
+| [HDFS-13976](https://issues.apache.org/jira/browse/HDFS-13976) | Backport HDFS-12813 to branch-2.9 |  Major | hdfs, hdfs-client | Lukas Majercak | Lukas Majercak |
+| [HADOOP-15679](https://issues.apache.org/jira/browse/HADOOP-15679) | ShutdownHookManager shutdown time needs to be configurable & extended |  Major | util | Steve Loughran | Steve Loughran |
+| [HDFS-13802](https://issues.apache.org/jira/browse/HDFS-13802) | RBF: Remove FSCK from Router Web UI |  Major | . | Fei Hui | Fei Hui |
+| [HADOOP-15859](https://issues.apache.org/jira/browse/HADOOP-15859) | ZStandardDecompressor.c mistakes a class for an instance |  Blocker | . | Ben Lau | Jason Lowe |
+| [HADOOP-15850](https://issues.apache.org/jira/browse/HADOOP-15850) | CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0 |  Critical | tools/distcp | Ted Yu | Ted Yu |
+| [YARN-7502](https://issues.apache.org/jira/browse/YARN-7502) | Nodemanager restart docs should describe nodemanager supervised property |  Major | documentation | Jason Lowe | Suma Shivaprasad |
+| [HADOOP-15866](https://issues.apache.org/jira/browse/HADOOP-15866) | Renamed HADOOP\_SECURITY\_GROUP\_SHELL\_COMMAND\_TIMEOUT keys break compatibility |  Blocker | . | Wei-Chiu Chuang | Wei-Chiu Chuang |
+| [HADOOP-15822](https://issues.apache.org/jira/browse/HADOOP-15822) | zstd compressor can fail with a small output buffer |  Major | . | Jason Lowe | Jason Lowe |
+| [HADOOP-15899](https://issues.apache.org/jira/browse/HADOOP-15899) | Update AWS Java SDK versions in NOTICE.txt |  Major | . | Akira Ajisaka | Akira Ajisaka |
+| [HADOOP-15900](https://issues.apache.org/jira/browse/HADOOP-15900) | Update JSch versions in LICENSE.txt |  Major | . | Akira Ajisaka | Akira Ajisaka |
+| [YARN-8858](https://issues.apache.org/jira/browse/YARN-8858) | CapacityScheduler should respect maximum node resource when per-queue maximum-allocation is being used. |  Major | . | Sumana Sathish | Wangda Tan |
+| [YARN-8233](https://issues.apache.org/jira/browse/YARN-8233) | NPE in CapacityScheduler#tryCommit when handling allocate/reserve proposal whose allocatedOrReservedContainer is null |  Critical | capacityscheduler | Tao Yang | Tao Yang |
+| [HADOOP-15923](https://issues.apache.org/jira/browse/HADOOP-15923) | create-release script should set max-cache-ttl as well as default-cache-ttl for gpg-agent |  Blocker | build | Akira Ajisaka | Akira Ajisaka |
 
 
 ### TESTS:
@@ -163,6 +218,7 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |:---- |:---- | :--- |:---- |:---- |:---- |
+| [HDFS-13299](https://issues.apache.org/jira/browse/HDFS-13299) | RBF : Fix compilation error in branch-2 (TestMultipleDestinationResolver) |  Blocker | . | Brahma Reddy Battula | Brahma Reddy Battula |
 | [HDFS-13353](https://issues.apache.org/jira/browse/HDFS-13353) | RBF: TestRouterWebHDFSContractCreate failed |  Major | test | Takanobu Asanuma | Takanobu Asanuma |
 | [YARN-8110](https://issues.apache.org/jira/browse/YARN-8110) | AMRMProxy recover should catch for all throwable to avoid premature exit |  Major | . | Botong Huang | Botong Huang |
 | [HDFS-13402](https://issues.apache.org/jira/browse/HDFS-13402) | RBF: Fix  java doc for StateStoreFileSystemImpl |  Minor | hdfs | Yiran Wu | Yiran Wu |
@@ -171,7 +227,7 @@
 | [HDFS-13045](https://issues.apache.org/jira/browse/HDFS-13045) | RBF: Improve error message returned from subcluster |  Minor | . | Wei Yan | Íñigo Goiri |
 | [HDFS-13428](https://issues.apache.org/jira/browse/HDFS-13428) | RBF: Remove LinkedList From StateStoreFileImpl.java |  Trivial | federation | BELUGA BEHR | BELUGA BEHR |
 | [HDFS-13386](https://issues.apache.org/jira/browse/HDFS-13386) | RBF: Wrong date information in list file(-ls) result |  Minor | . | Dibyendu Karmakar | Dibyendu Karmakar |
-| [HADOOP-14999](https://issues.apache.org/jira/browse/HADOOP-14999) | AliyunOSS: provide one asynchronous multi-part based uploading mechanism |  Major | fs/oss | Genmao Yu | Genmao Yu |
+| [YARN-7810](https://issues.apache.org/jira/browse/YARN-7810) | TestDockerContainerRuntime test failures due to UID lookup of a non-existent user |  Major | . | Shane Kumpf | Shane Kumpf |
 | [HDFS-13435](https://issues.apache.org/jira/browse/HDFS-13435) | RBF: Improve the error loggings for printing the stack trace |  Major | . | Yiqun Lin | Yiqun Lin |
 | [YARN-7189](https://issues.apache.org/jira/browse/YARN-7189) | Container-executor doesn't remove Docker containers that error out early |  Major | yarn | Eric Badger | Eric Badger |
 | [HDFS-13466](https://issues.apache.org/jira/browse/HDFS-13466) | RBF: Add more router-related information to the UI |  Minor | . | Wei Yan | Wei Yan |
@@ -188,15 +244,31 @@
 | [YARN-8253](https://issues.apache.org/jira/browse/YARN-8253) | HTTPS Ats v2 api call fails with "bad HTTP parsed" |  Critical | ATSv2 | Yesha Vora | Charan Hebri |
 | [HADOOP-15454](https://issues.apache.org/jira/browse/HADOOP-15454) | TestRollingFileSystemSinkWithLocal fails on Windows |  Major | test | Xiao Liang | Xiao Liang |
 | [HADOOP-15498](https://issues.apache.org/jira/browse/HADOOP-15498) | TestHadoopArchiveLogs (#testGenerateScript, #testPrepareWorkingDir) fails on Windows |  Minor | . | Anbang Hu | Anbang Hu |
+| [HADOOP-15497](https://issues.apache.org/jira/browse/HADOOP-15497) | TestTrash should use proper test path to avoid failing on Windows |  Minor | . | Anbang Hu | Anbang Hu |
 | [HDFS-13637](https://issues.apache.org/jira/browse/HDFS-13637) | RBF: Router fails when threadIndex (in ConnectionPool) wraps around Integer.MIN\_VALUE |  Critical | federation | CR Hota | CR Hota |
 | [YARN-4781](https://issues.apache.org/jira/browse/YARN-4781) | Support intra-queue preemption for fairness ordering policy. |  Major | scheduler | Wangda Tan | Eric Payne |
 | [HDFS-13281](https://issues.apache.org/jira/browse/HDFS-13281) | Namenode#createFile should be /.reserved/raw/ aware. |  Critical | encryption | Rushabh S Shah | Rushabh S Shah |
 | [YARN-4677](https://issues.apache.org/jira/browse/YARN-4677) | RMNodeResourceUpdateEvent update from scheduler can lead to race condition |  Major | graceful, resourcemanager, scheduler | Brook Zhou | Wilfred Spiegelenburg |
 | [HADOOP-15529](https://issues.apache.org/jira/browse/HADOOP-15529) | ContainerLaunch#testInvalidEnvVariableSubstitutionType is not supported in Windows |  Minor | . | Giovanni Matteo Fumarola | Giovanni Matteo Fumarola |
 | [HADOOP-15458](https://issues.apache.org/jira/browse/HADOOP-15458) | TestLocalFileSystem#testFSOutputStreamBuilder fails on Windows |  Minor | test | Xiao Liang | Xiao Liang |
+| [YARN-8481](https://issues.apache.org/jira/browse/YARN-8481) | AMRMProxyPolicies should accept heartbeat response from new/unknown subclusters |  Minor | amrmproxy, federation | Botong Huang | Botong Huang |
 | [HDFS-13475](https://issues.apache.org/jira/browse/HDFS-13475) | RBF: Admin cannot enforce Router enter SafeMode |  Major | . | Wei Yan | Chao Sun |
 | [HDFS-13733](https://issues.apache.org/jira/browse/HDFS-13733) | RBF: Add Web UI configurations and descriptions to RBF document |  Minor | documentation | Takanobu Asanuma | Takanobu Asanuma |
 | [HDFS-13743](https://issues.apache.org/jira/browse/HDFS-13743) | RBF: Router throws NullPointerException due to the invalid initialization of MountTableResolver |  Major | . | Takanobu Asanuma | Takanobu Asanuma |
 | [HDFS-13750](https://issues.apache.org/jira/browse/HDFS-13750) | RBF: Router ID in RouterRpcClient is always null |  Major | . | Takanobu Asanuma | Takanobu Asanuma |
 | [HDFS-13848](https://issues.apache.org/jira/browse/HDFS-13848) | Refactor NameNode failover proxy providers |  Major | ha, hdfs-client | Konstantin Shvachko | Konstantin Shvachko |
 | [HADOOP-15699](https://issues.apache.org/jira/browse/HADOOP-15699) | Fix some of testContainerManager failures in Windows |  Major | . | Botong Huang | Botong Huang |
+| [HADOOP-15731](https://issues.apache.org/jira/browse/HADOOP-15731) | TestDistributedShell fails on Windows |  Major | . | Botong Huang | Botong Huang |
+| [HADOOP-15759](https://issues.apache.org/jira/browse/HADOOP-15759) | AliyunOSS: update oss-sdk version to 3.0.0 |  Major | fs/oss | wujinhu | wujinhu |
+| [HADOOP-15671](https://issues.apache.org/jira/browse/HADOOP-15671) | AliyunOSS: Support Assume Roles in AliyunOSS |  Major | fs/oss | wujinhu | wujinhu |
+| [HDFS-13790](https://issues.apache.org/jira/browse/HDFS-13790) | RBF: Move ClientProtocol APIs to its own module |  Major | . | Íñigo Goiri | Chao Sun |
+| [HADOOP-15607](https://issues.apache.org/jira/browse/HADOOP-15607) | AliyunOSS: fix duplicated partNumber issue in AliyunOSSBlockOutputStream |  Critical | . | wujinhu | wujinhu |
+| [HADOOP-15868](https://issues.apache.org/jira/browse/HADOOP-15868) | AliyunOSS: update document for properties of multiple part download, multiple part upload and directory copy |  Major | fs/oss | wujinhu | wujinhu |
+
+
+### OTHER:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f6227367/hadoop-common-project/hadoop-common/src/site/markdown/release/2.9.2/RELEASENOTES.2.9.2.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.9.2/RELEASENOTES.2.9.2.md b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.9.2/RELEASENOTES.2.9.2.md
index 439933e..5faa4cf 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.9.2/RELEASENOTES.2.9.2.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.9.2/RELEASENOTES.2.9.2.md
@@ -16,6 +16,16 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 -->
-# Apache Hadoop  2.9.2 Release Notes
+# "Apache Hadoop"  2.9.2 Release Notes
 
 These release notes cover new developer and user-facing incompatibilities, important issues, features, and major improvements.
+
+
+---
+
+* [HADOOP-15446](https://issues.apache.org/jira/browse/HADOOP-15446) | *Major* | **WASB: PageBlobInputStream.skip breaks HBASE replication**
+
+WASB: Bug fix to support non-sequential page blob reads.  Required for HBASE replication.
+
+
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f6227367/hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/Apache_Hadoop_HDFS_2.9.2.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/Apache_Hadoop_HDFS_2.9.2.xml b/hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/Apache_Hadoop_HDFS_2.9.2.xml
new file mode 100644
index 0000000..8ca51fa
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/Apache_Hadoop_HDFS_2.9.2.xml
@@ -0,0 +1,312 @@
+<?xml version="1.0" encoding="iso-8859-1" standalone="no"?>
+<!-- Generated by the JDiff Javadoc doclet -->
+<!-- (http://www.jdiff.org) -->
+<!-- on Tue Nov 13 16:20:19 UTC 2018 -->
+
+<api
+  xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
+  xsi:noNamespaceSchemaLocation='api.xsd'
+  name="Apache Hadoop HDFS 2.9.2"
+  jdversion="1.0.9">
+
+<!--  Command line arguments =  -doclet org.apache.hadoop.classification.tools.IncludePublicAnnotationsJDiffDoclet -docletpath /build/source/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-annotations.jar:/build/source/hadoop-hdfs-project/hadoop-hdfs/target/jdiff.jar -verbose -classpath /build/source/hadoop-hdfs-project/hadoop-hdfs/target/classes:/build/source/hadoop-common-project/hadoop-annotations/target/hadoop-annotations-2.9.2.jar:/usr/lib/jvm/java-7-openjdk-amd64/lib/tools.jar:/build/source/hadoop-common-project/hadoop-auth/target/hadoop-auth-2.9.2.jar:/maven/org/slf4j/slf4j-api/1.7.25/slf4j-api-1.7.25.jar:/maven/org/apache/httpcomponents/httpclient/4.5.2/httpclient-4.5.2.jar:/maven/org/apache/httpcomponents/httpcore/4.4.4/httpcore-4.4.4.jar:/maven/com/nimbusds/nimbus-jose-jwt/4.41.1/nimbus-jose-jwt-4.41.1.jar:/maven/com/github/stephenc/jcip/jcip-annotations/1.0-1/jcip-annotations-1.0-1.jar:/maven/net/minidev/json-smart/1.3.1/json-smart-1.3.1.jar:/maven/org/apache/directory/server/apacheds-kerberos-codec/2.0.0-M15/apacheds-kerberos-codec-2.0.0-M15.jar:/maven/org/apache/directory/server/apacheds-i18n/2.0.0-M15/apacheds-i18n-2.0.0-M15.jar:/maven/org/apache/directory/api/api-asn1-api/1.0.0-M20/api-asn1-api-1.0.0-M20.jar:/maven/org/apache/directory/api/api-util/1.0.0-M20/api-util-1.0.0-M20.jar:/maven/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar:/maven/jline/jline/0.9.94/jline-0.9.94.jar:/maven/org/apache/curator/curator-framework/2.7.1/curator-framework-2.7.1.jar:/build/source/hadoop-common-project/hadoop-common/target/hadoop-common-2.9.2.jar:/maven/org/apache/commons/commons-math3/3.1.1/commons-math3-3.1.1.jar:/maven/commons-net/commons-net/3.1/commons-net-3.1.jar:/maven/commons-collections/commons-collections/3.2.2/commons-collections-3.2.2.jar:/maven/org/mortbay/jetty/jetty-sslengine/6.1.26/jetty-sslengine-6.1.26.jar:/maven/javax/servlet/jsp/jsp-api/2.1/jsp-api-2.1.jar:/maven/com/sun/jersey/jersey-json/1.9/jersey-json-1.9.jar:/maven/org/codehaus/jettison/jettison/1.1/jettison-1.1.jar:/maven/com/sun/xml/bind/jaxb-impl/2.2.3-1/jaxb-impl-2.2.3-1.jar:/maven/javax/xml/bind/jaxb-api/2.2.2/jaxb-api-2.2.2.jar:/maven/javax/xml/stream/stax-api/1.0-2/stax-api-1.0-2.jar:/maven/javax/activation/activation/1.1/activation-1.1.jar:/maven/org/codehaus/jackson/jackson-jaxrs/1.9.13/jackson-jaxrs-1.9.13.jar:/maven/org/codehaus/jackson/jackson-xc/1.9.13/jackson-xc-1.9.13.jar:/maven/net/java/dev/jets3t/jets3t/0.9.0/jets3t-0.9.0.jar:/maven/com/jamesmurty/utils/java-xmlbuilder/0.4/java-xmlbuilder-0.4.jar:/maven/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.jar:/maven/commons-digester/commons-digester/1.8/commons-digester-1.8.jar:/maven/commons-beanutils/commons-beanutils/1.7.0/commons-beanutils-1.7.0.jar:/maven/commons-beanutils/commons-beanutils-core/1.8.0/commons-beanutils-core-1.8.0.jar:/maven/org/apache/commons/commons-lang3/3.4/commons-lang3-3.4.jar:/maven/org/apache/avro/avro/1.7.7/avro-1.7.7.jar:/maven/com/thoughtworks/paranamer/paranamer/2.3/paranamer-2.3.jar:/maven/org/xerial/snappy/snappy-java/1.0.5/snappy-java-1.0.5.jar:/maven/com/google/code/gson/gson/2.2.4/gson-2.2.4.jar:/maven/com/jcraft/jsch/0.1.54/jsch-0.1.54.jar:/maven/org/apache/curator/curator-client/2.7.1/curator-client-2.7.1.jar:/maven/org/apache/curator/curator-recipes/2.7.1/curator-recipes-2.7.1.jar:/maven/com/google/code/findbugs/jsr305/3.0.0/jsr305-3.0.0.jar:/maven/org/apache/commons/commons-compress/1.4.1/commons-compress-1.4.1.jar:/maven/org/tukaani/xz/1.0/xz-1.0.jar:/maven/org/codehaus/woodstox/stax2-api/3.1.4/stax2-api-3.1.4.jar:/maven/com/fasterxml/woodstox/woodstox-core/5.0.3/woodstox-core-5.0.3.jar:/build/source/hadoop-hdfs-project/hadoop-hdfs-client/target/hadoop-hdfs-client-2.9.2.jar:/maven/com/squareup/okhttp/okhttp/2.7.5/okhttp-2.7.5.jar:/maven/com/squareup/okio/okio/1.6.0/okio-1.6.0.jar:/maven/com/google/guava/guava/11.0.2/guava-11.0.2.jar:/maven/org/mortbay/jetty/jetty/6.1.26/jetty-6.1.26.jar:/maven/org/mortbay/jetty/jetty-util/6.1.26/jetty-util-6.1.26.jar:/maven/com/sun/jersey/jersey-core/1.9/jersey-core-1.9.jar:/maven/com/sun/jersey/jersey-server/1.9/jersey-server-1.9.jar:/maven/asm/asm/3.2/asm-3.2.jar:/maven/commons-cli/commons-cli/1.2/commons-cli-1.2.jar:/maven/commons-codec/commons-codec/1.4/commons-codec-1.4.jar:/maven/commons-io/commons-io/2.4/commons-io-2.4.jar:/maven/commons-lang/commons-lang/2.6/commons-lang-2.6.jar:/maven/commons-logging/commons-logging/1.1.3/commons-logging-1.1.3.jar:/maven/commons-daemon/commons-daemon/1.0.13/commons-daemon-1.0.13.jar:/maven/log4j/log4j/1.2.17/log4j-1.2.17.jar:/maven/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar:/maven/javax/servlet/servlet-api/2.5/servlet-api-2.5.jar:/maven/org/slf4j/slf4j-log4j12/1.7.25/slf4j-log4j12-1.7.25.jar:/maven/org/codehaus/jackson/jackson-core-asl/1.9.13/jackson-core-asl-1.9.13.jar:/maven/org/codehaus/jackson/jackson-mapper-asl/1.9.13/jackson-mapper-asl-1.9.13.jar:/maven/xmlenc/xmlenc/0.52/xmlenc-0.52.jar:/maven/io/netty/netty/3.6.2.Final/netty-3.6.2.Final.jar:/maven/io/netty/netty-all/4.0.23.Final/netty-all-4.0.23.Final.jar:/maven/xerces/xercesImpl/2.9.1/xercesImpl-2.9.1.jar:/maven/xml-apis/xml-apis/1.3.04/xml-apis-1.3.04.jar:/maven/org/apache/htrace/htrace-core4/4.1.0-incubating/htrace-core4-4.1.0-incubating.jar:/maven/org/fusesource/leveldbjni/leveldbjni-all/1.8/leveldbjni-all-1.8.jar:/maven/com/fasterxml/jackson/core/jackson-databind/2.7.8/jackson-databind-2.7.8.jar:/maven/com/fasterxml/jackson/core/jackson-annotations/2.7.8/jackson-annotations-2.7.8.jar:/maven/com/fasterxml/jackson/core/jackson-core/2.7.8/jackson-core-2.7.8.jar -sourcepath /build/source/hadoop-hdfs-project/hadoop-hdfs/src/main/java -apidir /build/source/hadoop-hdfs-project/hadoop-hdfs/target/site/jdiff/xml -apiname Apache Hadoop HDFS 2.9.2 -->
+<package name="org.apache.hadoop.hdfs">
+  <doc>
+  <![CDATA[<p>A distributed implementation of {@link
+org.apache.hadoop.fs.FileSystem}.  This is loosely modelled after
+Google's <a href="http://research.google.com/archive/gfs.html">GFS</a>.</p>
+
+<p>The most important difference is that unlike GFS, Hadoop DFS files 
+have strictly one writer at any one time.  Bytes are always appended 
+to the end of the writer's stream.  There is no notion of "record appends"
+or "mutations" that are then checked or reordered.  Writers simply emit 
+a byte stream.  That byte stream is guaranteed to be stored in the 
+order written.</p>]]>
+  </doc>
+</package>
+<package name="org.apache.hadoop.hdfs.net">
+</package>
+<package name="org.apache.hadoop.hdfs.protocol">
+</package>
+<package name="org.apache.hadoop.hdfs.protocol.datatransfer">
+</package>
+<package name="org.apache.hadoop.hdfs.protocol.datatransfer.sasl">
+</package>
+<package name="org.apache.hadoop.hdfs.protocolPB">
+</package>
+<package name="org.apache.hadoop.hdfs.qjournal.client">
+</package>
+<package name="org.apache.hadoop.hdfs.qjournal.protocol">
+</package>
+<package name="org.apache.hadoop.hdfs.qjournal.protocolPB">
+</package>
+<package name="org.apache.hadoop.hdfs.qjournal.server">
+  <!-- start interface org.apache.hadoop.hdfs.qjournal.server.JournalNodeMXBean -->
+  <interface name="JournalNodeMXBean"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <method name="getJournalsStatus" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get status information (e.g., whether formatted) of JournalNode's journals.
+ 
+ @return A string presenting status for each journal]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[This is the JMX management interface for JournalNode information]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.hdfs.qjournal.server.JournalNodeMXBean -->
+</package>
+<package name="org.apache.hadoop.hdfs.security.token.block">
+</package>
+<package name="org.apache.hadoop.hdfs.security.token.delegation">
+</package>
+<package name="org.apache.hadoop.hdfs.server.balancer">
+</package>
+<package name="org.apache.hadoop.hdfs.server.blockmanagement">
+</package>
+<package name="org.apache.hadoop.hdfs.server.common">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode.fsdataset">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode.fsdataset.impl">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode.metrics">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode.web">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode.web.webhdfs">
+</package>
+<package name="org.apache.hadoop.hdfs.server.mover">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode">
+  <!-- start interface org.apache.hadoop.hdfs.server.namenode.AuditLogger -->
+  <interface name="AuditLogger"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <method name="initialize"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <doc>
+      <![CDATA[Called during initialization of the logger.
+
+ @param conf The configuration object.]]>
+      </doc>
+    </method>
+    <method name="logAuditEvent"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="succeeded" type="boolean"/>
+      <param name="userName" type="java.lang.String"/>
+      <param name="addr" type="java.net.InetAddress"/>
+      <param name="cmd" type="java.lang.String"/>
+      <param name="src" type="java.lang.String"/>
+      <param name="dst" type="java.lang.String"/>
+      <param name="stat" type="org.apache.hadoop.fs.FileStatus"/>
+      <doc>
+      <![CDATA[Called to log an audit event.
+ <p>
+ This method must return as quickly as possible, since it's called
+ in a critical section of the NameNode's operation.
+
+ @param succeeded Whether authorization succeeded.
+ @param userName Name of the user executing the request.
+ @param addr Remote address of the request.
+ @param cmd The requested command.
+ @param src Path of affected source file.
+ @param dst Path of affected destination file (if any).
+ @param stat File information for operations that change the file's
+             metadata (permissions, owner, times, etc).]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[Interface defining an audit logger.]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.hdfs.server.namenode.AuditLogger -->
+  <!-- start class org.apache.hadoop.hdfs.server.namenode.HdfsAuditLogger -->
+  <class name="HdfsAuditLogger" extends="java.lang.Object"
+    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.hdfs.server.namenode.AuditLogger"/>
+    <constructor name="HdfsAuditLogger"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="logAuditEvent"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="succeeded" type="boolean"/>
+      <param name="userName" type="java.lang.String"/>
+      <param name="addr" type="java.net.InetAddress"/>
+      <param name="cmd" type="java.lang.String"/>
+      <param name="src" type="java.lang.String"/>
+      <param name="dst" type="java.lang.String"/>
+      <param name="status" type="org.apache.hadoop.fs.FileStatus"/>
+    </method>
+    <method name="logAuditEvent"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="succeeded" type="boolean"/>
+      <param name="userName" type="java.lang.String"/>
+      <param name="addr" type="java.net.InetAddress"/>
+      <param name="cmd" type="java.lang.String"/>
+      <param name="src" type="java.lang.String"/>
+      <param name="dst" type="java.lang.String"/>
+      <param name="stat" type="org.apache.hadoop.fs.FileStatus"/>
+      <param name="callerContext" type="org.apache.hadoop.ipc.CallerContext"/>
+      <param name="ugi" type="org.apache.hadoop.security.UserGroupInformation"/>
+      <param name="dtSecretManager" type="org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSecretManager"/>
+      <doc>
+      <![CDATA[Same as
+ {@link #logAuditEvent(boolean, String, InetAddress, String, String, String,
+ FileStatus)} with additional parameters related to logging delegation token
+ tracking IDs.
+ 
+ @param succeeded Whether authorization succeeded.
+ @param userName Name of the user executing the request.
+ @param addr Remote address of the request.
+ @param cmd The requested command.
+ @param src Path of affected source file.
+ @param dst Path of affected destination file (if any).
+ @param stat File information for operations that change the file's metadata
+          (permissions, owner, times, etc).
+ @param callerContext Context information of the caller
+ @param ugi UserGroupInformation of the current user, or null if not logging
+          token tracking information
+ @param dtSecretManager The token secret manager, or null if not logging
+          token tracking information]]>
+      </doc>
+    </method>
+    <method name="logAuditEvent"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="succeeded" type="boolean"/>
+      <param name="userName" type="java.lang.String"/>
+      <param name="addr" type="java.net.InetAddress"/>
+      <param name="cmd" type="java.lang.String"/>
+      <param name="src" type="java.lang.String"/>
+      <param name="dst" type="java.lang.String"/>
+      <param name="stat" type="org.apache.hadoop.fs.FileStatus"/>
+      <param name="ugi" type="org.apache.hadoop.security.UserGroupInformation"/>
+      <param name="dtSecretManager" type="org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSecretManager"/>
+      <doc>
+      <![CDATA[Same as
+ {@link #logAuditEvent(boolean, String, InetAddress, String, String,
+ String, FileStatus, CallerContext, UserGroupInformation,
+ DelegationTokenSecretManager)} without {@link CallerContext} information.]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[Extension of {@link AuditLogger}.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.hdfs.server.namenode.HdfsAuditLogger -->
+  <!-- start class org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider -->
+  <class name="INodeAttributeProvider" extends="java.lang.Object"
+    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="INodeAttributeProvider"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="start"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Initialize the provider. This method is called at NameNode startup
+ time.]]>
+      </doc>
+    </method>
+    <method name="stop"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Shutdown the provider. This method is called at NameNode shutdown time.]]>
+      </doc>
+    </method>
+    <method name="getAttributes" return="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="fullPath" type="java.lang.String"/>
+      <param name="inode" type="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"/>
+    </method>
+    <method name="getAttributes" return="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="pathElements" type="java.lang.String[]"/>
+      <param name="inode" type="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"/>
+    </method>
+    <method name="getAttributes" return="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="components" type="byte[][]"/>
+      <param name="inode" type="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"/>
+    </method>
+    <method name="getExternalAccessControlEnforcer" return="org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider.AccessControlEnforcer"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="defaultEnforcer" type="org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider.AccessControlEnforcer"/>
+      <doc>
+      <![CDATA[Can be over-ridden by implementations to provide a custom Access Control
+ Enforcer that can provide an alternate implementation of the
+ default permission checking logic.
+ @param defaultEnforcer The Default AccessControlEnforcer
+ @return The AccessControlEnforcer to use]]>
+      </doc>
+    </method>
+  </class>
+  <!-- end class org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider -->
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.ha">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.metrics">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.snapshot">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.top">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.top.metrics">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.top.window">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.web.resources">
+</package>
+<package name="org.apache.hadoop.hdfs.server.protocol">
+</package>
+<package name="org.apache.hadoop.hdfs.tools">
+</package>
+<package name="org.apache.hadoop.hdfs.tools.offlineEditsViewer">
+</package>
+<package name="org.apache.hadoop.hdfs.tools.offlineImageViewer">
+</package>
+<package name="org.apache.hadoop.hdfs.tools.snapshot">
+</package>
+<package name="org.apache.hadoop.hdfs.util">
+</package>
+<package name="org.apache.hadoop.hdfs.web">
+</package>
+<package name="org.apache.hadoop.hdfs.web.resources">
+</package>
+
+</api>

