This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hadoop-ozone.git

commit c8f14a560beb9a83c7d98388614a5ba36d7638f6
Author: Márton Elek <e...@apache.org>
AuthorDate: Sat Oct 12 09:54:15 2019 +0000

    HDDS-2287. Import common utility scripts and txt files from Hadoop without history.
---
 BUILDING.txt                      | 511 ++++++++++++++++++++++++++++++++++++++
 LICENSE.txt                       | 258 +++++++++++++++++++
 NOTICE.txt                        |  34 +++
 README.txt                        |   7 +
 dev-support/bin/qbt               |  18 ++
 dev-support/bin/smart-apply-patch |  18 ++
 dev-support/bin/test-patch        |  18 ++
 dev-support/bin/yetus-wrapper     | 188 ++++++++++++++
 dev-support/byteman/README.md     |  31 +++
 dev-support/byteman/hadooprpc.btm |  44 ++++
 10 files changed, 1127 insertions(+)

diff --git a/BUILDING.txt b/BUILDING.txt
new file mode 100644
index 0000000..d3c9a1a
--- /dev/null
+++ b/BUILDING.txt
@@ -0,0 +1,511 @@
+Build instructions for Hadoop
+
+----------------------------------------------------------------------------------
+Requirements:
+
+* Unix System
+* JDK 1.8
+* Maven 3.3 or later
+* Protocol Buffers 3.7.1 (if compiling native code)
+* CMake 3.1 or newer (if compiling native code)
+* Zlib devel (if compiling native code)
+* Cyrus SASL devel (if compiling native code)
+* One of the compilers that support thread_local storage: GCC 4.8.1 or later, Visual Studio,
+  Clang (community version), Clang (version for iOS 9 and later) (if compiling native code)
+* openssl devel (if compiling native hadoop-pipes and to get the best HDFS encryption performance)
+* Linux FUSE (Filesystem in Userspace) version 2.6 or above (if compiling fuse_dfs)
+* Doxygen (if compiling libhdfspp and generating the documents)
+* Internet connection for first build (to fetch all Maven and Hadoop dependencies)
+* python (for releasedocs)
+* bats (for shell code testing)
+* Node.js / bower / Ember-cli (for YARN UI v2 building)
+
+----------------------------------------------------------------------------------
+The easiest way to get an environment with all the appropriate tools is by means
+of the provided Docker config.
+This requires a recent version of docker (1.4.1 and higher are known to work).
+
+On Linux / Mac:
+    Install Docker and run this command:
+
+    $ ./start-build-env.sh
+
+The prompt that is then presented is located at a mounted version of the source tree,
+and all required tools for testing and building have been installed and configured.
+
+Note that from within this docker environment you ONLY have access to the Hadoop source
+tree from where you started. So if you need to run
+    dev-support/bin/test-patch /path/to/my.patch
+then the patch must be placed inside the hadoop source tree.
+
+Known issues:
+- On Mac with Boot2Docker the performance on the mounted directory is currently extremely slow.
+  This is a known problem related to boot2docker on the Mac.
+  See:
+    https://github.com/boot2docker/boot2docker/issues/593
+  This issue has been resolved as a duplicate, and they point to a new feature for utilizing NFS mounts
+  as the proposed solution:
+    https://github.com/boot2docker/boot2docker/issues/64
+  An alternative solution to this problem is to install Linux natively inside a virtual machine
+  and run your IDE and Docker etc. inside that VM.
+
+----------------------------------------------------------------------------------
+Installing required packages for clean install of Ubuntu 14.04 LTS Desktop:
+
+* Oracle JDK 1.8 (preferred)
+  $ sudo apt-get purge openjdk*
+  $ sudo apt-get install software-properties-common
+  $ sudo add-apt-repository ppa:webupd8team/java
+  $ sudo apt-get update
+  $ sudo apt-get install oracle-java8-installer
+* Maven
+  $ sudo apt-get -y install maven
+* Native libraries
+  $ sudo apt-get -y install build-essential autoconf automake libtool cmake zlib1g-dev pkg-config libssl-dev libsasl2-dev
+* Protocol Buffers 3.7.1 (required to build native code)
+  $ mkdir -p /opt/protobuf-3.7-src \
+        && curl -L -s -S \
+          https://github.com/protocolbuffers/protobuf/releases/download/v3.7.1/protobuf-java-3.7.1.tar.gz \
+          -o /opt/protobuf-3.7.1.tar.gz \
+        && tar xzf /opt/protobuf-3.7.1.tar.gz --strip-components 1 -C /opt/protobuf-3.7-src \
+        && cd /opt/protobuf-3.7-src \
+        && ./configure \
+        && make install \
+        && rm -rf /opt/protobuf-3.7-src
+
+Optional packages:
+
+* Snappy compression
+  $ sudo apt-get install snappy libsnappy-dev
+* Intel ISA-L library for erasure coding
+  Please refer to https://01.org/intel%C2%AE-storage-acceleration-library-open-source-version
+  (OR https://github.com/01org/isa-l)
+* Bzip2
+  $ sudo apt-get install bzip2 libbz2-dev
+* Linux FUSE
+  $ sudo apt-get install fuse libfuse-dev
+* ZStandard compression
+  $ sudo apt-get install zstd
+* PMDK library for storage class memory (SCM) as HDFS cache backend
+  Please refer to http://pmem.io/ and https://github.com/pmem/pmdk
+
+----------------------------------------------------------------------------------
+Maven main modules:
+
+  hadoop                                (Main Hadoop project)
+         - hadoop-project               (Parent POM for all Hadoop Maven modules.             )
+                                        (All plugins & dependencies versions are defined here.)
+         - hadoop-project-dist          (Parent POM for modules that generate distributions.)
+         - hadoop-annotations           (Generates the Hadoop doclet used to generate the Javadocs)
+         - hadoop-assemblies            (Maven assemblies used by the different modules)
+         - hadoop-maven-plugins         (Maven plugins used in project)
+         - hadoop-build-tools           (Build tools like checkstyle, etc.)
+         - hadoop-common-project        (Hadoop Common)
+         - hadoop-hdfs-project          (Hadoop HDFS)
+         - hadoop-yarn-project          (Hadoop YARN)
+         - hadoop-mapreduce-project     (Hadoop MapReduce)
+         - hadoop-ozone                 (Hadoop Ozone)
+         - hadoop-hdds                  (Hadoop Distributed Data Store)
+         - hadoop-tools                 (Hadoop tools like Streaming, Distcp, etc.)
+         - hadoop-dist                  (Hadoop distribution assembler)
+         - hadoop-client-modules        (Hadoop client modules)
+         - hadoop-minicluster           (Hadoop minicluster artifacts)
+         - hadoop-cloud-storage-project (Generates artifacts to access cloud storage like aws, azure, etc.)
+
+----------------------------------------------------------------------------------
+Where to run Maven from?
+
+  It can be run from any module. The only catch is that if not run from trunk
+  all modules that are not part of the build run must be installed in the local
+  Maven cache or available in a Maven repository.
+
+----------------------------------------------------------------------------------
+Maven build goals:
+
+ * Clean                     : mvn clean [-Preleasedocs]
+ * Compile                   : mvn compile [-Pnative]
+ * Run tests                 : mvn test [-Pnative] [-Pshelltest]
+ * Create JAR                : mvn package
+ * Run findbugs              : mvn compile findbugs:findbugs
+ * Run checkstyle            : mvn compile checkstyle:checkstyle
+ * Install JAR in M2 cache   : mvn install
+ * Deploy JAR to Maven repo  : mvn deploy
+ * Run clover                : mvn test -Pclover [-DcloverLicenseLocation=${user.name}/.clover.license]
+ * Run Rat                   : mvn apache-rat:check
+ * Build javadocs            : mvn javadoc:javadoc
+ * Build distribution        : mvn package [-Pdist][-Pdocs][-Psrc][-Pnative][-Dtar][-Preleasedocs][-Pyarn-ui]
+ * Change Hadoop version     : mvn versions:set -DnewVersion=NEWVERSION
+
+ Build options:
+
+  * Use -Pnative to compile/bundle native code
+  * Use -Pdocs to generate & bundle the documentation in the distribution (using -Pdist)
+  * Use -Psrc to create a project source TAR.GZ
+  * Use -Dtar to create a TAR with the distribution (using -Pdist)
+  * Use -Preleasedocs to include the changelog and release docs (requires Internet connectivity)
+  * Use -Pyarn-ui to build YARN UI v2. (Requires Internet connectivity)
+  * Use -DskipShade to disable client jar shading to speed up build times (in
+    development environments only, not to build release artifacts)
+
+ YARN Application Timeline Service V2 build options:
+
+   YARN Timeline Service v.2 chooses Apache HBase as the primary backing storage. The supported
+   versions of Apache HBase are 1.2.6 (default) and 2.0.0-beta1.
+
+  * HBase 1.2.6 is used by default to build Hadoop. The official releases are ready to use if you
+    plan on running Timeline Service v2 with HBase 1.2.6.
+
+  * Use -Dhbase.profile=2.0 to build Hadoop with HBase 2.0.0-beta1. Provide this option if you plan
+    on running Timeline Service v2 with HBase 2.0.
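+
+    For example, a sketch of a full build against the HBase 2.0 profile,
+    combining the option above with the usual install goal:
+
+    $ mvn clean install -DskipTests -Dhbase.profile=2.0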
+
+
+ Snappy build options:
+
+   Snappy is a compression library that can be utilized by the native code.
+   It is currently an optional component, meaning that Hadoop can be built with
+   or without this dependency.
+
+  * Use -Drequire.snappy to fail the build if libsnappy.so is not found.
+    If this option is not specified and the snappy library is missing,
+    we silently build a version of libhadoop.so that cannot make use of snappy.
+    This option is recommended if you plan on making use of snappy and want
+    to get more repeatable builds.
+
+  * Use -Dsnappy.prefix to specify a nonstandard location for the libsnappy
+    header files and library files. You do not need this option if you have
+    installed snappy using a package manager.
+  * Use -Dsnappy.lib to specify a nonstandard location for the libsnappy library
+    files.  Similarly to snappy.prefix, you do not need this option if you have
+    installed snappy using a package manager.
+  * Use -Dbundle.snappy to copy the contents of the snappy.lib directory into
+    the final tar file. This option requires that -Dsnappy.lib is also given,
+    and it ignores the -Dsnappy.prefix option. If -Dsnappy.lib isn't given, the
+    bundling and building will fail.
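+
+    For example, a sketch of a distribution build that requires snappy and
+    bundles a custom libsnappy (the library path below is illustrative):
+
+    $ mvn package -Pdist,native -DskipTests -Dtar \
+        -Drequire.snappy -Dsnappy.lib=/opt/snappy/lib -Dbundle.snappy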
+
+
+ ZStandard build options:
+
+   ZStandard is a compression library that can be utilized by the native code.
+   It is currently an optional component, meaning that Hadoop can be built with
+   or without this dependency.
+
+  * Use -Drequire.zstd to fail the build if libzstd.so is not found.
+    If this option is not specified and the zstd library is missing,
+    we silently build a version of libhadoop.so that cannot make use of zstd.
+
+  * Use -Dzstd.prefix to specify a nonstandard location for the libzstd
+    header files and library files. You do not need this option if you have
+    installed zstandard using a package manager.
+
+  * Use -Dzstd.lib to specify a nonstandard location for the libzstd library
+    files.  Similarly to zstd.prefix, you do not need this option if you have
+    installed zstandard using a package manager.
+
+  * Use -Dbundle.zstd to copy the contents of the zstd.lib directory into
+    the final tar file. This option requires that -Dzstd.lib is also given,
+    and it ignores the -Dzstd.prefix option. If -Dzstd.lib isn't given, the
+    bundling and building will fail.
+
+ OpenSSL build options:
+
+   OpenSSL includes a crypto library that can be utilized by the native code.
+   It is currently an optional component, meaning that Hadoop can be built with
+   or without this dependency.
+
+  * Use -Drequire.openssl to fail the build if libcrypto.so is not found.
+    If this option is not specified and the openssl library is missing,
+    we silently build a version of libhadoop.so that cannot make use of
+    openssl. This option is recommended if you plan on making use of openssl
+    and want to get more repeatable builds.
+  * Use -Dopenssl.prefix to specify a nonstandard location for the libcrypto
+    header files and library files. You do not need this option if you have
+    installed openssl using a package manager.
+  * Use -Dopenssl.lib to specify a nonstandard location for the libcrypto library
+    files. Similarly to openssl.prefix, you do not need this option if you have
+    installed openssl using a package manager.
+  * Use -Dbundle.openssl to copy the contents of the openssl.lib directory into
+    the final tar file. This option requires that -Dopenssl.lib is also given,
+    and it ignores the -Dopenssl.prefix option. If -Dopenssl.lib isn't given, the
+    bundling and building will fail.
+
+ Tests options:
+
+  * Use -DskipTests to skip tests when running the following Maven goals:
+    'package',  'install', 'deploy' or 'verify'
+  * -Dtest=<TESTCLASSNAME>,<TESTCLASSNAME#METHODNAME>,....
+  * -Dtest.exclude=<TESTCLASSNAME>
+  * -Dtest.exclude.pattern=**/<TESTCLASSNAME1>.java,**/<TESTCLASSNAME2>.java
+  * To run all native unit tests, use: mvn test -Pnative -Dtest=allNative
+  * To run a specific native unit test, use: mvn test -Pnative -Dtest=<test>
+  For example, to run test_bulk_crc32, you would use:
+  mvn test -Pnative -Dtest=test_bulk_crc32
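+
+  For example, a sketch of running two specific JVM test classes in one pass
+  (the class and method names below are illustrative):
+  mvn test -Dtest=TestConfiguration,TestShell#testEnvironment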
+
+ Intel ISA-L build options:
+
+   Intel ISA-L is an erasure coding library that can be utilized by the native code.
+   It is currently an optional component, meaning that Hadoop can be built with
+   or without this dependency. Note that the library is used via a dynamic module. Please
+   refer to the official site for the library details.
+   https://01.org/intel%C2%AE-storage-acceleration-library-open-source-version
+   (OR https://github.com/01org/isa-l)
+
+  * Use -Drequire.isal to fail the build if libisal.so is not found.
+    If this option is not specified and the isal library is missing,
+    we silently build a version of libhadoop.so that cannot make use of ISA-L and
+    the native raw erasure coders.
+    This option is recommended if you plan on making use of native raw erasure
+    coders and want to get more repeatable builds.
+  * Use -Disal.prefix to specify a nonstandard location for the libisal
+    library files. You do not need this option if you have installed ISA-L to the
+    system library path.
+  * Use -Disal.lib to specify a nonstandard location for the libisal library
+    files.
+  * Use -Dbundle.isal to copy the contents of the isal.lib directory into
+    the final tar file. This option requires that -Disal.lib is also given,
+    and it ignores the -Disal.prefix option. If -Disal.lib isn't given, the
+    bundling and building will fail.
+
+ Special plugins: OWASP's dependency-check:
+
+   OWASP's dependency-check plugin will scan the third party dependencies
+   of this project for known CVEs (security vulnerabilities against them).
+   It will produce a report in target/dependency-check-report.html. To
+   invoke, run 'mvn dependency-check:aggregate'. Note that this plugin
+   requires Maven 3.1.1 or greater.
+
+ PMDK library build options:
+
+   The Persistent Memory Development Kit (PMDK), formerly known as NVML, is a growing
+   collection of libraries which have been developed for various use cases, tuned,
+   validated to production quality, and thoroughly documented. These libraries are built
+   on the Direct Access (DAX) feature available in both Linux and Windows, which allows
+   applications direct load/store access to persistent memory by memory-mapping files
+   on a persistent memory aware file system.
+
+   It is currently an optional component, meaning that Hadoop can be built without
+   this dependency. Please note that the library is used via a dynamic module. For
+   more details please refer to the official sites:
+   http://pmem.io/ and https://github.com/pmem/pmdk.
+
+  * -Drequire.pmdk is used to build the project with PMDK libraries forcibly. With this
+    option provided, the build will fail if the libpmem library is not found. If this option
+    is not given, the build will generate a version of Hadoop with libhadoop.so,
+    and storage class memory (SCM) backed HDFS cache is still supported without PMDK involved.
+    Because PMDK can bring better caching write/read performance, it is recommended to build
+    the project with this option if the user plans to use SCM backed HDFS cache.
+  * -Dpmdk.lib is used to specify a nonstandard location for PMDK libraries if they are not
+    under /usr/lib or /usr/lib64.
+  * -Dbundle.pmdk is used to copy the specified libpmem libraries into the distribution tar
+    package. This option requires that -Dpmdk.lib is specified. With -Dbundle.pmdk provided,
+    the build will fail if -Dpmdk.lib is not specified.
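+
+    For example, a sketch of a distribution build that requires PMDK and
+    bundles libpmem (the library path below is illustrative):
+
+    $ mvn package -Pdist,native -DskipTests -Dtar \
+        -Drequire.pmdk -Dpmdk.lib=/usr/lib64 -Dbundle.pmdk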
+
+----------------------------------------------------------------------------------
+Building components separately
+
+If you are building a submodule directory, all the hadoop dependencies this
+submodule has will be resolved as all other 3rd party dependencies. That is,
+from the Maven cache or from a Maven repository (if not available in the cache
+or the SNAPSHOT 'timed out').
+An alternative is to run 'mvn install -DskipTests' from the Hadoop source top
+level once, and then work from the submodule. Keep in mind that SNAPSHOTs
+time out after a while; using the Maven '-nsu' option will stop Maven from trying
+to update SNAPSHOTs from external repos.
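+
+For example, a sketch of that workflow (the submodule chosen is illustrative):
+
+  $ mvn install -DskipTests        # once, from the source tree top level
+  $ cd hadoop-hdds
+  $ mvn test -nsu                  # -nsu: do not re-fetch SNAPSHOTs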
+
+----------------------------------------------------------------------------------
+Importing projects to eclipse
+
+When you import the project to eclipse, first install hadoop-maven-plugins.
+
+  $ cd hadoop-maven-plugins
+  $ mvn install
+
+Then, generate eclipse project files.
+
+  $ mvn eclipse:eclipse -DskipTests
+
+Finally, import to eclipse by specifying the root directory of the project via
+[File] > [Import] > [Existing Projects into Workspace].
+
+----------------------------------------------------------------------------------
+Building distributions:
+
+Create binary distribution without native code and without documentation:
+
+  $ mvn package -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true
+
+Create binary distribution with native code and with documentation:
+
+  $ mvn package -Pdist,native,docs -DskipTests -Dtar
+
+Create source distribution:
+
+  $ mvn package -Psrc -DskipTests
+
+Create source and binary distributions with native code and documentation:
+
+  $ mvn package -Pdist,native,docs,src -DskipTests -Dtar
+
+Create a local staging version of the website (in /tmp/hadoop-site)
+
+  $ mvn clean site -Preleasedocs; mvn site:stage -DstagingDirectory=/tmp/hadoop-site
+
+Note that the site needs to be built in a second pass after other artifacts.
+
+----------------------------------------------------------------------------------
+Installing Hadoop
+
+Look for these HTML files after you build the documentation with the above commands.
+
+  * Single Node Setup:
+    hadoop-project-dist/hadoop-common/SingleCluster.html
+
+  * Cluster Setup:
+    hadoop-project-dist/hadoop-common/ClusterSetup.html
+
+----------------------------------------------------------------------------------
+
+Handling out of memory errors in builds
+
+----------------------------------------------------------------------------------
+
+If the build process fails with an out of memory error, you should be able to fix
+it by increasing the memory used by Maven, which can be done via the environment
+variable MAVEN_OPTS.
+
+Here is an example setting to allocate between 256 MB and 1.5 GB of heap space to
+Maven:
+
+export MAVEN_OPTS="-Xms256m -Xmx1536m"
+
+----------------------------------------------------------------------------------
+
+Building on macOS (without Docker)
+
+----------------------------------------------------------------------------------
+Installing required dependencies for clean install of macOS 10.14:
+
+* Install Xcode Command Line Tools
+  $ xcode-select --install
+* Install Homebrew
+  $ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
+* Install OpenJDK 8
+  $ brew tap AdoptOpenJDK/openjdk
+  $ brew cask install adoptopenjdk8
+* Install maven and tools
+  $ brew install maven autoconf automake cmake wget
+* Install native libraries. Only openssl is required to compile native code;
+  you may optionally install zlib, lz4, etc.
+  $ brew install openssl
+* Protocol Buffers 3.7.1 (required to compile native code)
+  $ wget https://github.com/protocolbuffers/protobuf/releases/download/v3.7.1/protobuf-java-3.7.1.tar.gz
+  $ mkdir -p protobuf-3.7 && tar zxvf protobuf-java-3.7.1.tar.gz --strip-components 1 -C protobuf-3.7
+  $ cd protobuf-3.7
+  $ ./configure
+  $ make
+  $ make check
+  $ make install
+  $ protoc --version
+
+Note that building Hadoop 3.1.1/3.1.2/3.2.0 native code from source is broken
+on macOS. For 3.1.1/3.1.2, you need to manually backport YARN-8622. For 3.2.0,
+you need to backport both YARN-8622 and YARN-9487 in order to build native code.
+
+----------------------------------------------------------------------------------
+Example build command:
+
+* Create binary distribution with native code but without documentation:
+  $ mvn package -Pdist,native -DskipTests -Dmaven.javadoc.skip \
+    -Dopenssl.prefix=/usr/local/opt/openssl
+
+Note that the command above manually specified the openssl library and include
+path. This is necessary at least for Homebrewed OpenSSL.
+
+----------------------------------------------------------------------------------
+
+Building on Windows
+
+----------------------------------------------------------------------------------
+Requirements:
+
+* Windows System
+* JDK 1.8
+* Maven 3.0 or later
+* Protocol Buffers 3.7.1
+* CMake 3.1 or newer
+* Visual Studio 2010 Professional or Higher
+* Windows SDK 8.1 (if building CPU rate control for the container executor)
+* zlib headers (if building native code bindings for zlib)
+* Internet connection for first build (to fetch all Maven and Hadoop dependencies)
+* Unix command-line tools from GnuWin32: sh, mkdir, rm, cp, tar, gzip. These
+  tools must be present on your PATH.
+* Python (for generation of docs using 'mvn site')
+
+Unix command-line tools are also included with the Windows Git package which
+can be downloaded from http://git-scm.com/downloads
+
+If using Visual Studio, it must be Professional level or higher.
+Do not use Visual Studio Express.  It does not support compiling for 64-bit,
+which is problematic if running a 64-bit system.
+
+The Windows SDK 8.1 is available to download at:
+
+http://msdn.microsoft.com/en-us/windows/bg162891.aspx
+
+Cygwin is not required.
+
+----------------------------------------------------------------------------------
+Building:
+
+Keep the source code tree in a short path to avoid running into problems related
+to the Windows maximum path length limitation (for example, C:\hdc).
+
+There is one support command file located in dev-support called win-paths-eg.cmd.
+It should be copied somewhere convenient and modified to fit your needs.
+
+win-paths-eg.cmd sets up the environment for use. You will need to modify this
+file. It will put all of the required components in the command path,
+configure the bit-ness of the build, and set several optional components.
+
+Several tests require that the user have the Create Symbolic Links
+privilege.
+
+All Maven goals are the same as described above with the exception that
+native code is built by enabling the 'native-win' Maven profile. -Pnative-win
+is enabled by default when building on Windows since the native components
+are required (not optional) on Windows.
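+
+For example, a sketch of a full Windows build producing a distribution tar
+(naming the profile explicitly, even though it is on by default):
+
+mvn package -Pdist,native-win -DskipTests -Dtar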
+
+If native code bindings for zlib are required, then the zlib headers must be
+deployed on the build machine. Set the ZLIB_HOME environment variable to the
+directory containing the headers.
+
+set ZLIB_HOME=C:\zlib-1.2.7
+
+At runtime, zlib1.dll must be accessible on the PATH. Hadoop has been tested
+with zlib 1.2.7, built using Visual Studio 2010 out of contrib\vstudio\vc10 in
+the zlib 1.2.7 source tree.
+
+http://www.zlib.net/
+
+----------------------------------------------------------------------------------
+Building distributions:
+
+ * Build distribution with native code    : mvn package [-Pdist][-Pdocs][-Psrc][-Dtar][-Dmaven.javadoc.skip=true]
+
+----------------------------------------------------------------------------------
+Running compatibility checks with checkcompatibility.py
+
+Invoke `./dev-support/bin/checkcompatibility.py` to run Java API Compliance Checker
+to compare the public Java APIs of two git objects. This can be used by release
+managers to compare the compatibility of a previous and current release.
+
+As an example, this invocation will check the compatibility of interfaces annotated as Public or LimitedPrivate:
+
+./dev-support/bin/checkcompatibility.py \
+    --annotation org.apache.hadoop.classification.InterfaceAudience.Public \
+    --annotation org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate \
+    --include "hadoop.*" branch-2.7.2 trunk
+
+----------------------------------------------------------------------------------
+Changing the Hadoop version returned by VersionInfo
+
+If for compatibility reasons the version of Hadoop has to be declared as a 2.x release
+in the information returned by org.apache.hadoop.util.VersionInfo, set the property
+declared.hadoop.version to the desired version.
+For example: mvn package -Pdist -Ddeclared.hadoop.version=2.11
+
+If unset, the project version declared in the POM file is used.
diff --git a/LICENSE.txt b/LICENSE.txt
new file mode 100644
index 0000000..d0d5746
--- /dev/null
+++ b/LICENSE.txt
@@ -0,0 +1,258 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+--------------------------------------------------------------------------------
+This product bundles various third-party components under other open source
+licenses. This section summarizes those components and their licenses.
+See licenses/ for text of these licenses.
+
+
+Apache Software Foundation License 2.0
+--------------------------------------
+
+hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/nvd3-1.8.5.* (css and js files)
+hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/AbstractFuture.java
+hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/TimeoutFuture.java
+
+
+BSD 2-Clause
+------------
+
+hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/{lz4.h,lz4.c,lz4hc.h,lz4hc.c}
+hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/util/tree.h
+hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/compat/{fstatat|openat|unlinkat}.h
+
+
+BSD 3-Clause
+------------
+
+hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/bloom/*
+hadoop-common-project/hadoop-common/src/main/native/gtest/gtest-all.cc
+hadoop-common-project/hadoop-common/src/main/native/gtest/include/gtest/gtest.h
+hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32_x86.c
+hadoop-tools/hadoop-sls/src/main/html/js/thirdparty/d3.v3.js
+hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/d3-3.5.17.min.js
+
+
+MIT License
+-----------
+
+hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/angular-1.6.4.min.js
+hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/angular-nvd3-1.0.9.min.js
+hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/angular-route-1.6.4.min.js
+hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1
+hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dataTables.bootstrap.css
+hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dataTables.bootstrap.js
+hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dust-full-2.0.0.min.js
+hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dust-helpers-1.1.1.min.js
+hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery-3.4.1.min.js
+hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery.dataTables.min.js
+hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/moment.min.js
+hadoop-tools/hadoop-sls/src/main/html/js/thirdparty/bootstrap.min.js
+hadoop-tools/hadoop-sls/src/main/html/js/thirdparty/jquery.js
+hadoop-tools/hadoop-sls/src/main/html/css/bootstrap.min.css
+hadoop-tools/hadoop-sls/src/main/html/css/bootstrap-responsive.min.css
+hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/*
+hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery
+hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jt/jquery.jstree.js
+hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/resources/TERMINAL
diff --git a/NOTICE.txt b/NOTICE.txt
new file mode 100644
index 0000000..f6715f7
--- /dev/null
+++ b/NOTICE.txt
@@ -0,0 +1,34 @@
+Apache Hadoop
+Copyright 2006 and onwards The Apache Software Foundation.
+
+This product includes software developed at
+The Apache Software Foundation (http://www.apache.org/).
+
+Export Control Notice
+---------------------
+
+This distribution includes cryptographic software.  The country in
+which you currently reside may have restrictions on the import,
+possession, use, and/or re-export to another country, of
+encryption software.  BEFORE using any encryption software, please
+check your country's laws, regulations and policies concerning the
+import, possession, or use, and re-export of encryption software, to
+see if this is permitted.  See <http://www.wassenaar.org/> for more
+information.
+
+The U.S. Government Department of Commerce, Bureau of Industry and
+Security (BIS), has classified this software as Export Commodity
+Control Number (ECCN) 5D002.C.1, which includes information security
+software using or performing cryptographic functions with asymmetric
+algorithms.  The form and manner of this Apache Software Foundation
+distribution makes it eligible for export under the License Exception
+ENC Technology Software Unrestricted (TSU) exception (see the BIS
+Export Administration Regulations, Section 740.13) for both object
+code and source code.
+
+The following provides more details on the included cryptographic software:
+
+This software uses the SSL libraries from the Jetty project written
+by mortbay.org.
+Hadoop Yarn Server Web Proxy uses the BouncyCastle Java
+cryptography APIs written by the Legion of the Bouncy Castle Inc.
diff --git a/README.txt b/README.txt
new file mode 100644
index 0000000..8d37cc9
--- /dev/null
+++ b/README.txt
@@ -0,0 +1,7 @@
+For the latest information about Hadoop, please visit our website at:
+
+   http://hadoop.apache.org/
+
+and our wiki, at:
+
+   https://cwiki.apache.org/confluence/display/HADOOP/
diff --git a/dev-support/bin/qbt b/dev-support/bin/qbt
new file mode 100755
index 0000000..fe5e6f6
--- /dev/null
+++ b/dev-support/bin/qbt
@@ -0,0 +1,18 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+BINDIR=$(cd -P -- "$(dirname -- "${BASH_SOURCE-$0}")" >/dev/null && pwd -P)
+exec "${BINDIR}/yetus-wrapper" qbt --project=hadoop --skip-dir=dev-support "$@"
diff --git a/dev-support/bin/smart-apply-patch b/dev-support/bin/smart-apply-patch
new file mode 100755
index 0000000..3fd469f
--- /dev/null
+++ b/dev-support/bin/smart-apply-patch
@@ -0,0 +1,18 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+BINDIR=$(cd -P -- "$(dirname -- "${BASH_SOURCE-$0}")" >/dev/null && pwd -P)
+exec "${BINDIR}/yetus-wrapper" smart-apply-patch --project=hadoop "$@"
diff --git a/dev-support/bin/test-patch b/dev-support/bin/test-patch
new file mode 100755
index 0000000..8ff8119
--- /dev/null
+++ b/dev-support/bin/test-patch
@@ -0,0 +1,18 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+BINDIR=$(cd -P -- "$(dirname -- "${BASH_SOURCE-$0}")" >/dev/null && pwd -P)
+exec "${BINDIR}/yetus-wrapper" test-patch --project=hadoop --skip-dir=dev-support "$@"
diff --git a/dev-support/bin/yetus-wrapper b/dev-support/bin/yetus-wrapper
new file mode 100755
index 0000000..b0f71f1
--- /dev/null
+++ b/dev-support/bin/yetus-wrapper
@@ -0,0 +1,188 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# you must be this high to ride the ride
+if [[ -z "${BASH_VERSINFO[0]}" ]] \
+   || [[ "${BASH_VERSINFO[0]}" -lt 3 ]] \
+   || [[ "${BASH_VERSINFO[0]}" -eq 3 && "${BASH_VERSINFO[1]}" -lt 2 ]]; then
+  echo "bash v3.2+ is required. Sorry."
+  exit 1
+fi
+
+set -o pipefail
+
+## @description  Print a message to stderr
+## @audience     public
+## @stability    stable
+## @replaceable  no
+## @param        string
+function yetus_error
+{
+  echo "$*" 1>&2
+}
+
+## @description  Given a filename or dir, return the absolute version of it
+## @audience     public
+## @stability    stable
+## @param        directory
+## @replaceable  no
+## @return       0 success
+## @return       1 failure
+## @return       stdout abspath
+function yetus_abs
+{
+  declare obj=$1
+  declare dir
+  declare fn
+  declare dirret
+
+  if [[ ! -e ${obj} ]]; then
+    return 1
+  elif [[ -d ${obj} ]]; then
+    dir=${obj}
+  else
+    dir=$(dirname -- "${obj}")
+    fn=$(basename -- "${obj}")
+    fn="/${fn}"
+  fi
+
+  dir=$(cd -P -- "${dir}" >/dev/null 2>/dev/null && pwd -P)
+  dirret=$?
+  if [[ ${dirret} = 0 ]]; then
+    echo "${dir}${fn}"
+    return 0
+  fi
+  return 1
+}
+
+## @description  Return success if the first version argument is >= the second
+function version_ge()
+{
+  test "$(echo "$@" | tr " " "\n" | sort -rV | head -n 1)" == "$1";
+}
+
+WANTED="$1"
+shift
+ARGV=("$@")
+
+HADOOP_YETUS_VERSION=${HADOOP_YETUS_VERSION:-0.10.0}
+BIN=$(yetus_abs "${BASH_SOURCE-$0}")
+BINDIR=$(dirname "${BIN}")
+
+## For HADOOP_YETUS_VERSION >= 0.9.0 the tarball is named with the apache-yetus prefix
+if version_ge "${HADOOP_YETUS_VERSION}" "0.9.0"; then
+  YETUS_PREFIX=apache-yetus
+else
+  YETUS_PREFIX=yetus
+fi
+
+###
+###  if YETUS_HOME is set, then try to use it
+###
+if [[ -n "${YETUS_HOME}" && -x "${YETUS_HOME}/bin/${WANTED}" ]]; then
+  exec "${YETUS_HOME}/bin/${WANTED}" "${ARGV[@]}"
+fi
+
+#
+# this directory is ignored by git and maven
+#
+HADOOP_PATCHPROCESS=${HADOOP_PATCHPROCESS:-"${BINDIR}/../../patchprocess"}
+
+if [[ ! -d "${HADOOP_PATCHPROCESS}" ]]; then
+  mkdir -p "${HADOOP_PATCHPROCESS}"
+fi
+
+mytmpdir=$(yetus_abs "${HADOOP_PATCHPROCESS}")
+ret=$?
+if [[ ${ret} != 0 ]]; then
+  yetus_error "yetus-dl: Unable to cwd to ${HADOOP_PATCHPROCESS}"
+  exit 1
+fi
+HADOOP_PATCHPROCESS=${mytmpdir}
+
+##
+## if we've already DL'd it, then short cut
+##
+if [[ -x "${HADOOP_PATCHPROCESS}/${YETUS_PREFIX}-${HADOOP_YETUS_VERSION}/bin/${WANTED}" ]]; then
+  exec "${HADOOP_PATCHPROCESS}/${YETUS_PREFIX}-${HADOOP_YETUS_VERSION}/bin/${WANTED}" "${ARGV[@]}"
+fi
+
+##
+## need to DL, etc
+##
+
+BASEURL="https://archive.apache.org/dist/yetus/${HADOOP_YETUS_VERSION}/"
+TARBALL="${YETUS_PREFIX}-${HADOOP_YETUS_VERSION}-bin.tar"
+
+GPGBIN=$(command -v gpg)
+CURLBIN=$(command -v curl)
+
+if ! pushd "${HADOOP_PATCHPROCESS}" >/dev/null; then
+  yetus_error "ERROR: yetus-dl: Cannot pushd to ${HADOOP_PATCHPROCESS}"
+  exit 1
+fi
+
+if [[ -n "${CURLBIN}" ]]; then
+  if ! "${CURLBIN}" -f -s -L -O "${BASEURL}/${TARBALL}.gz"; then
+    yetus_error "ERROR: yetus-dl: unable to download ${BASEURL}/${TARBALL}.gz"
+    exit 1
+  fi
+else
+  yetus_error "ERROR: yetus-dl requires curl."
+  exit 1
+fi
+
+if [[ -n "${GPGBIN}" ]]; then
+  if ! mkdir -p .gpg; then
+    yetus_error "ERROR: yetus-dl: Unable to create ${HADOOP_PATCHPROCESS}/.gpg"
+    exit 1
+  fi
+  if ! chmod -R 700 .gpg; then
+    yetus_error "ERROR: yetus-dl: Unable to chmod ${HADOOP_PATCHPROCESS}/.gpg"
+    exit 1
+  fi
+  if ! "${CURLBIN}" -s -L -o KEYS_YETUS https://dist.apache.org/repos/dist/release/yetus/KEYS; then
+    yetus_error "ERROR: yetus-dl: unable to fetch https://dist.apache.org/repos/dist/release/yetus/KEYS"
+    exit 1
+  fi
+  if ! "${CURLBIN}" -s -L -O "${BASEURL}/${TARBALL}.gz.asc"; then
+    yetus_error "ERROR: yetus-dl: unable to fetch ${BASEURL}/${TARBALL}.gz.asc"
+    exit 1
+  fi
+  if ! "${GPGBIN}" --homedir "${HADOOP_PATCHPROCESS}/.gpg" --import "${HADOOP_PATCHPROCESS}/KEYS_YETUS" >/dev/null 2>&1; then
+    yetus_error "ERROR: yetus-dl: gpg unable to import ${HADOOP_PATCHPROCESS}/KEYS_YETUS"
+    exit 1
+  fi
+  if ! "${GPGBIN}" --homedir "${HADOOP_PATCHPROCESS}/.gpg" --verify "${TARBALL}.gz.asc" >/dev/null 2>&1; then
+    yetus_error "ERROR: yetus-dl: gpg verify of tarball in ${HADOOP_PATCHPROCESS} failed"
+    exit 1
+  fi
+fi
+
+if ! (gunzip -c "${TARBALL}.gz" | tar xpf -); then
+  yetus_error "ERROR: ${TARBALL}.gz is corrupt. Investigate and then remove ${HADOOP_PATCHPROCESS} to try again."
+  exit 1
+fi
+
+if [[ -x "${HADOOP_PATCHPROCESS}/${YETUS_PREFIX}-${HADOOP_YETUS_VERSION}/bin/${WANTED}" ]]; then
+  popd >/dev/null
+  exec "${HADOOP_PATCHPROCESS}/${YETUS_PREFIX}-${HADOOP_YETUS_VERSION}/bin/${WANTED}" "${ARGV[@]}"
+fi
+
+##
+## give up
+##
+yetus_error "ERROR: ${WANTED} is not part of Apache Yetus ${HADOOP_YETUS_VERSION}"
+exit 1
diff --git a/dev-support/byteman/README.md b/dev-support/byteman/README.md
new file mode 100644
index 0000000..9a17fc5
--- /dev/null
+++ b/dev-support/byteman/README.md
@@ -0,0 +1,31 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+This folder contains example byteman scripts (http://byteman.jboss.org/) to help
+with Hadoop debugging.
+
+As the startup script of the hadoop-runner docker image supports byteman
+instrumentation, it's enough to set the URL of a script in a specific environment
+variable to activate it for docker runs:
+
+
+```
+BYTEMAN_SCRIPT_URL=https://raw.githubusercontent.com/apache/hadoop/trunk/dev-support/byteman/hadooprpc.btm
+```
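+
+As a minimal sketch, assuming the apache/hadoop-runner image (the exact image
+name may differ in your setup), the variable can be passed directly to docker run:
+
+```
+docker run -e BYTEMAN_SCRIPT_URL=https://raw.githubusercontent.com/apache/hadoop/trunk/dev-support/byteman/hadooprpc.btm apache/hadoop-runner
+```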
+
+For more info see HADOOP-15656 and HDDS-342.
+
diff --git a/dev-support/byteman/hadooprpc.btm b/dev-support/byteman/hadooprpc.btm
new file mode 100644
index 0000000..13894fe
--- /dev/null
+++ b/dev-support/byteman/hadooprpc.btm
@@ -0,0 +1,44 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#
+# This script instruments the Hadoop RPC layer to print out all the request/response messages to the standard output.
+#
+
+RULE Hadoop RPC request
+INTERFACE ^com.google.protobuf.BlockingService
+METHOD callBlockingMethod
+IF true
+DO traceln("--> RPC message request: " + $3.getClass().getSimpleName() + " from " + linked(Thread.currentThread(), "source"));
+   traceln($3.toString())
+ENDRULE
+
+
+RULE Hadoop RPC response
+INTERFACE ^com.google.protobuf.BlockingService
+METHOD callBlockingMethod
+AT EXIT
+IF true
+DO traceln("--> RPC message response: " + $3.getClass().getSimpleName() + " to " + unlink(Thread.currentThread(), "source"));
+   traceln($!.toString())
+ENDRULE
+
+
+RULE Hadoop RPC source IP
+CLASS org.apache.hadoop.ipc.Server$RpcCall
+METHOD run
+IF true
+DO link(Thread.currentThread(), "source", $0.connection.toString())
+ENDRULE

