[hadoop] branch trunk updated: YARN-10067. Add dry-run feature to FS-CS converter tool. Contributed by Peter Bacsko

2020-01-12 Thread snemeth
This is an automated email from the ASF dual-hosted git repository.

snemeth pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 24e6a9e  YARN-10067. Add dry-run feature to FS-CS converter tool. Contributed by Peter Bacsko
24e6a9e is described below

commit 24e6a9e43a210cdecaa8e87926eef09c869988f9
Author: Szilard Nemeth 
AuthorDate: Fri Jan 10 21:14:07 2020 +0100

YARN-10067. Add dry-run feature to FS-CS converter tool. Contributed by Peter Bacsko
---
 .../fair/converter/ConversionOptions.java  |  80 +
 .../fair/converter/DryRunResultHolder.java |  80 +
 .../FSConfigToCSConfigArgumentHandler.java |  82 +++---
 .../converter/FSConfigToCSConfigConverter.java |  26 +++--
 .../converter/FSConfigToCSConfigConverterMain.java |   6 +-
 .../converter/FSConfigToCSConfigRuleHandler.java   |  29 ++---
 .../scheduler/fair/converter/FSQueueConverter.java |  60 +-
 .../fair/converter/FSQueueConverterBuilder.java| 100 +
 .../TestFSConfigToCSConfigArgumentHandler.java |  89 +--
 .../converter/TestFSConfigToCSConfigConverter.java |  13 ++-
 .../TestFSConfigToCSConfigRuleHandler.java |  63 ++-
 .../fair/converter/TestFSQueueConverter.java   | 125 +++--
 12 files changed, 623 insertions(+), 130 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/ConversionOptions.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/ConversionOptions.java
new file mode 100644
index 000..c116232
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/ConversionOptions.java
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.converter;
+
+import org.slf4j.Logger;
+
+public class ConversionOptions {
+  private DryRunResultHolder dryRunResultHolder;
+  private boolean dryRun;
+
+  public ConversionOptions(DryRunResultHolder dryRunResultHolder,
+      boolean dryRun) {
+    this.dryRunResultHolder = dryRunResultHolder;
+    this.dryRun = dryRun;
+  }
+
+  public void setDryRun(boolean dryRun) {
+    this.dryRun = dryRun;
+  }
+
+  public void handleWarning(String msg, Logger log) {
+    if (dryRun) {
+      dryRunResultHolder.addDryRunWarning(msg);
+    } else {
+      log.warn(msg);
+    }
+  }
+
+  public void handleError(String msg) {
+    if (dryRun) {
+      dryRunResultHolder.addDryRunError(msg);
+    } else {
+      throw new UnsupportedPropertyException(msg);
+    }
+  }
+
+  public void handleConversionError(String msg) {
+    if (dryRun) {
+      dryRunResultHolder.addDryRunError(msg);
+    } else {
+      throw new ConversionException(msg);
+    }
+  }
+
+  public void handlePreconditionError(String msg) {
+    if (dryRun) {
+      dryRunResultHolder.addDryRunError(msg);
+    } else {
+      throw new PreconditionException(msg);
+    }
+  }
+
+  public void handleParsingFinished() {
+    if (dryRun) {
+      dryRunResultHolder.printDryRunResults();
+    }
+  }
+
+  public void handleGenericException(Exception e, String msg) {
+    if (dryRun) {
+      dryRunResultHolder.addDryRunError(msg);
+    } else {
+      FSConfigToCSConfigArgumentHandler.logAndStdErr(e, msg);
+    }
+  }
+}
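
For context: every handle* method above switches between "collect for the dry-run
summary" and "log or throw immediately", which is the dry-run feature in miniature.
A minimal usage sketch follows; it assumes DryRunResultHolder has a no-argument
constructor and that printDryRunResults() emits the collected summary (the archive
truncates that file below, so both are assumptions), and that the sketch lives in
the same ...scheduler.fair.converter package.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DryRunSketch {
  private static final Logger LOG = LoggerFactory.getLogger(DryRunSketch.class);

  public static void main(String[] args) {
    // dryRun = true: problems are recorded instead of logged or thrown
    ConversionOptions options =
        new ConversionOptions(new DryRunResultHolder(), true);

    options.handleWarning("queue has no maxResources setting", LOG);
    options.handleConversionError("cannot convert placement rule");

    // With dryRun = false the call above would have thrown
    // ConversionException; in dry-run mode both are only recorded.
    options.handleParsingFinished(); // prints the dry-run summary
  }
}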
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/DryRunResultHolder.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/DryRunResultHolder.java
new file mode 100644
index 000..0533e85
--- /dev/

[hadoop] branch branch-3.2 updated: YARN-10026. Pull out common code pieces from ATS v1.5 and v2. Contributed by Adam Antal

2020-01-12 Thread snemeth
This is an automated email from the ASF dual-hosted git repository.

snemeth pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 6a7dfb3  YARN-10026. Pull out common code pieces from ATS v1.5 and v2. Contributed by Adam Antal
6a7dfb3 is described below

commit 6a7dfb3bf321b897e52ec14e618c3f5b7b855780
Author: Szilard Nemeth 
AuthorDate: Sun Jan 12 13:54:08 2020 +0100

YARN-10026. Pull out common code pieces from ATS v1.5 and v2. Contributed by Adam Antal
---
 .../webapp/AHSWebServices.java | 208 ++---
 .../webapp/TestAHSWebServices.java |  25 +-
 .../hadoop/yarn/server/webapp/AppInfoProvider.java |  54 +
 .../hadoop/yarn/server/webapp/BasicAppInfo.java|  47 
 .../hadoop/yarn/server/webapp/LogServlet.java  | 260 +
 .../hadoop/yarn/server/webapp/LogWebService.java   | 247 +++-
 .../hadoop/yarn/server/webapp/WebServices.java |  33 ++-
 .../hadoop/yarn/server/webapp/package-info.java|  18 ++
 .../yarn/server/webapp/TestLogWebService.java  |  23 +-
 9 files changed, 481 insertions(+), 434 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebServices.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebServices.java
index d94605f..607b88b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebServices.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebServices.java
@@ -32,25 +32,18 @@ import javax.ws.rs.QueryParam;
 import javax.ws.rs.core.Context;
 import javax.ws.rs.core.MediaType;
 import javax.ws.rs.core.Response;
-import javax.ws.rs.core.Response.ResponseBuilder;
-import javax.ws.rs.core.Response.Status;
 
 import com.google.common.annotations.VisibleForTesting;
-import com.sun.jersey.api.client.ClientHandlerException;
-import com.sun.jersey.api.client.UniformInterfaceException;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceAudience.Public;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.http.JettyUtils;
 import org.apache.hadoop.util.StringUtils;
-import org.apache.hadoop.yarn.api.records.ApplicationId;
-import org.apache.hadoop.yarn.api.records.ContainerId;
 import org.apache.hadoop.yarn.api.records.YarnApplicationState;
 import org.apache.hadoop.yarn.api.ApplicationBaseProtocol;
 import org.apache.hadoop.yarn.api.records.timeline.TimelineAbout;
-import org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileControllerFactory;
-import org.apache.hadoop.yarn.server.webapp.LogWebServiceUtils;
+import org.apache.hadoop.yarn.server.webapp.LogServlet;
 import org.apache.hadoop.yarn.server.webapp.WebServices;
 import org.apache.hadoop.yarn.server.webapp.YarnWebServiceParams;
 import org.apache.hadoop.yarn.server.webapp.dao.AppAttemptInfo;
@@ -61,33 +54,20 @@ import org.apache.hadoop.yarn.server.webapp.dao.ContainerInfo;
 import org.apache.hadoop.yarn.server.webapp.dao.ContainersInfo;
 import org.apache.hadoop.yarn.util.timeline.TimelineUtils;
 import org.apache.hadoop.yarn.webapp.BadRequestException;
-import org.apache.hadoop.yarn.webapp.NotFoundException;
-import com.google.common.base.Joiner;
 import com.google.inject.Inject;
 import com.google.inject.Singleton;
-import org.codehaus.jettison.json.JSONException;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
 
 @Singleton
 @Path("/ws/v1/applicationhistory")
 public class AHSWebServices extends WebServices {
 
-  private static final Logger LOG = LoggerFactory
-      .getLogger(AHSWebServices.class);
-  private static final String NM_DOWNLOAD_URI_STR =
-      "/ws/v1/node/containers";
-  private static final Joiner JOINER = Joiner.on("");
-  private static final Joiner DOT_JOINER = Joiner.on(". ");
-  private final Configuration conf;
-  private final LogAggregationFileControllerFactory factory;
+  private LogServlet logServlet;
 
   @Inject
   public AHSWebServices(ApplicationBaseProtocol appBaseProt,
       Configuration conf) {
     super(appBaseProt);
-    this.conf = conf;
-    this.factory = new LogAggregationFileControllerFactory(conf);
+    this.logServlet = new LogServlet(conf, this);
   }
 
   @GET
@@ -242,89 +222,9 @@ public class AHSWebServices extends WebServices {
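
The net effect of the change above (the archive truncates the rest of this diff):
AHSWebServices drops its own log-aggregation plumbing and delegates to the shared
LogServlet, which the ATS v2 LogWebService reuses as well. A self-contained Java
sketch of that direction, with stand-in names throughout; only the
new LogServlet(conf, this) call is taken from the diff:

// SharedLogHelper stands in for LogServlet; both web services delegate to
// one helper instead of duplicating the container-log fetch logic.
class SharedLogHelper {
  String containerLogs(String containerId) {
    return "logs for " + containerId; // common logic lives here once
  }
}

class HistoryWebService {          // stands in for AHSWebServices
  private final SharedLogHelper logs = new SharedLogHelper();
  String getLogs(String id) { return logs.containerLogs(id); }
}

class TimelineWebService {         // stands in for LogWebService
  private final SharedLogHelper logs = new SharedLogHelper();
  String getLogs(String id) { return logs.containerLogs(id); }
}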

[hadoop] branch trunk updated: YARN-9866. u:user2:%primary_group is not working as expected. Contributed by Manikandan R

2020-01-12 Thread snemeth
This is an automated email from the ASF dual-hosted git repository.

snemeth pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new d842dff  YARN-9866. u:user2:%primary_group is not working as expected. Contributed by Manikandan R
d842dff is described below

commit d842dfffa53c8b565f3d65af44ccd7e1cc706733
Author: Szilard Nemeth 
AuthorDate: Sun Jan 12 14:04:12 2020 +0100

YARN-9866. u:user2:%primary_group is not working as expected. Contributed by Manikandan R
---
 .../placement/UserGroupMappingPlacementRule.java   |   6 +-
 .../TestCapacitySchedulerQueueMappingFactory.java  | 206 +++--
 2 files changed, 152 insertions(+), 60 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/UserGroupMappingPlacementRule.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/UserGroupMappingPlacementRule.java
index d69272d..0caa602 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/UserGroupMappingPlacementRule.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/UserGroupMappingPlacementRule.java
@@ -220,7 +220,11 @@ public class UserGroupMappingPlacementRule extends PlacementRule {
          }
        }
        if (user.equals(mapping.source)) {
-          return getPlacementContext(mapping);
+          if (mapping.queue.equals(PRIMARY_GROUP_MAPPING)) {
+            return getPlacementContext(mapping, groups.getGroups(user).get(0));
+          } else {
+            return getPlacementContext(mapping);
+          }
        }
      }
      if (mapping.type == MappingType.GROUP) {
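
What the hunk fixes: for a user mapping such as u:user2:%primary_group, the special
target %primary_group is now resolved to the user's first (primary) group at
placement time instead of being treated as a literal queue name. A self-contained
sketch of the substitution; the map-based group lookup below is illustrative only,
since the real code asks Hadoop's Groups service, as shown above:

import java.util.List;
import java.util.Map;

public class PrimaryGroupMappingSketch {
  static String targetQueue(String user, String mappedQueue,
      Map<String, List<String>> groupsOfUser) {
    if ("%primary_group".equals(mappedQueue)) {
      return groupsOfUser.get(user).get(0); // primary group comes first
    }
    return mappedQueue;
  }

  public static void main(String[] args) {
    Map<String, List<String>> groups =
        Map.of("user2", List.of("devs", "staff"));
    // prints "devs" rather than the literal "%primary_group"
    System.out.println(targetQueue("user2", "%primary_group", groups));
  }
}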
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerQueueMappingFactory.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerQueueMappingFactory.java
index 6ee9a7b..4cec544 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerQueueMappingFactory.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerQueueMappingFactory.java
@@ -51,8 +51,6 @@ public class TestCapacitySchedulerQueueMappingFactory {
   public static final String USER = "user_";
   public static final String PARENT_QUEUE = "c";
 
-  private MockRM mockRM = null;
-
   public static CapacitySchedulerConfiguration setupQueueMappingsForRules(
   CapacitySchedulerConfiguration conf, String parentQueue,
   boolean overrideWithQueueMappings, int[] sourceIds) {
@@ -114,23 +112,30 @@ public class TestCapacitySchedulerQueueMappingFactory {
     // init queue mapping for UserGroupMappingRule and AppNameMappingRule
     setupQueueMappingsForRules(conf, PARENT_QUEUE, true, new int[] {1, 2, 3});
 
-    mockRM = new MockRM(conf);
-    CapacityScheduler cs = (CapacityScheduler) mockRM.getResourceScheduler();
-    cs.updatePlacementRules();
-    mockRM.start();
-    cs.start();
-
-    List<PlacementRule> rules = cs.getRMContext()
-        .getQueuePlacementManager().getPlacementRules();
-
-    List<String> placementRuleNames = new ArrayList<>();
-    for (PlacementRule pr : rules) {
-      placementRuleNames.add(pr.getName());
+    MockRM mockRM = null;
+    try {
+      mockRM = new MockRM(conf);
+      CapacityScheduler cs = (CapacityScheduler) mockRM.getResourceScheduler();
+      cs.updatePlacementRules();
+      mockRM.start();
+      cs.start();
+
+      List<PlacementRule> rules = cs.getRMContext()
+          .getQueuePlacementManager().getPlacementRules();
+
+      List<String> placementRuleNames = new ArrayList<>();
+      for (PlacementRule pr : rules) {
+        placementRuleNames.add(pr.getName());
+      }
+
+      // verify both placement rules were added successfully
+      assertThat(placementRuleNames, hasItems(QUEUE_MAPPING_RULE_USER_GROUP));
+      assertThat(placementRuleNames, hasItems(QUEUE_MAPPING_RULE_APP_NAME));
+    } finally {
+      if(mockRM != null) {
+        mockRM.close();
+      }
     }
-
-    // verify both placement rules were added successfully
-    assertThat(placementRuleNames, hasItems(QUEUE_MAPPING_RULE_USER_GROUP));
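
The test fix above is a cleanup idiom: the MockRM is always closed, even when an
assertion fails mid-test, so later tests do not inherit a running ResourceManager.
Since the diff shows MockRM has a close() method, try-with-resources is the
equivalent compact form; a self-contained sketch with a hypothetical stand-in
class (the real MockRM needs a full RM test setup):

public class CleanupSketch {
  // FakeRM stands in for MockRM: any AutoCloseable behaves the same way.
  static class FakeRM implements AutoCloseable {
    void start() { System.out.println("RM started"); }
    @Override public void close() { System.out.println("RM closed"); }
  }

  public static void main(String[] args) {
    try (FakeRM rm = new FakeRM()) {
      rm.start();
      // assertions would go here; close() still runs if one throws
    }
  }
}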

[hadoop] branch trunk updated: HADOOP-16797. Add Dockerfile for ARM builds. Contributed by Vinayakumar B. (#1801)

2020-01-12 Thread vinayakumarb
This is an automated email from the ASF dual-hosted git repository.

vinayakumarb pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 52b360a  HADOOP-16797. Add Dockerfile for ARM builds. Contributed by Vinayakumar B. (#1801)
52b360a is described below

commit 52b360a92865d2c7cbd113a82b45c6b5a191ce24
Author: Vinayakumar B 
AuthorDate: Mon Jan 13 10:40:29 2020 +0530

HADOOP-16797. Add Dockerfile for ARM builds. Contributed by Vinayakumar B. (#1801)
---
 dev-support/bin/create-release  |  16 ++-
 dev-support/docker/Dockerfile_aarch64   | 235 
 hadoop-hdfs-project/hadoop-hdfs/pom.xml |   6 +
 start-build-env.sh  |  11 +-
 4 files changed, 265 insertions(+), 3 deletions(-)

diff --git a/dev-support/bin/create-release b/dev-support/bin/create-release
index d14c007..f4851d1 100755
--- a/dev-support/bin/create-release
+++ b/dev-support/bin/create-release
@@ -204,6 +204,11 @@ function set_defaults
   DOCKERFILE="${BASEDIR}/dev-support/docker/Dockerfile"
   DOCKERRAN=false
 
+  CPU_ARCH=$(echo "$MACHTYPE" | cut -d- -f1)
+  if [ "$CPU_ARCH" = "aarch64" ]; then
+    DOCKERFILE="${BASEDIR}/dev-support/docker/Dockerfile_aarch64"
+  fi
+
   # Extract Java version from ${BASEDIR}/pom.xml
   # doing this outside of maven means we can do this before
   # the docker container comes up...
@@ -249,7 +254,9 @@ function startgpgagent
     eval $("${GPGAGENT}" --daemon \
       --options "${LOGDIR}/gpgagent.conf" \
       --log-file="${LOGDIR}/create-release-gpgagent.log")
-    GPGAGENTPID=$(pgrep "${GPGAGENT}")
+    GPGAGENTPID=$(pgrep "${GPGAGENT}")
+    GPG_AGENT_INFO="$HOME/.gnupg/S.gpg-agent:$GPGAGENTPID:1"
+    export GPG_AGENT_INFO
   fi
 
 if [[ -n "${GPG_AGENT_INFO}" ]]; then
@@ -499,7 +506,12 @@ function dockermode
 
 # we always force build with the OpenJDK JDK
 # but with the correct version
-echo "ENV JAVA_HOME /usr/lib/jvm/java-${JVM_VERSION}-openjdk-amd64"
+if [ "$CPU_ARCH" = "aarch64" ]; then
+  echo "ENV JAVA_HOME /usr/lib/jvm/java-${JVM_VERSION}-openjdk-arm64"
+else
+  echo "ENV JAVA_HOME /usr/lib/jvm/java-${JVM_VERSION}-openjdk-amd64"
+fi
+
 echo "USER ${user_name}"
 printf "\n\n"
   ) | docker build -t "${imgname}" -
diff --git a/dev-support/docker/Dockerfile_aarch64 b/dev-support/docker/Dockerfile_aarch64
new file mode 100644
index 000..8d3c3ad
--- /dev/null
+++ b/dev-support/docker/Dockerfile_aarch64
@@ -0,0 +1,235 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Dockerfile for installing the necessary dependencies for building Hadoop.
+# See BUILDING.txt.
+
+FROM ubuntu:xenial
+
+WORKDIR /root
+
+SHELL ["/bin/bash", "-o", "pipefail", "-c"]
+
+#
+# Disable suggests/recommends
+#
+RUN echo APT::Install-Recommends "0"\; > /etc/apt/apt.conf.d/10disableextras
+RUN echo APT::Install-Suggests "0"\; >>  /etc/apt/apt.conf.d/10disableextras
+
+ENV DEBIAN_FRONTEND noninteractive
+ENV DEBCONF_TERSE true
+
+##
+# Install common dependencies from packages. Versions here are either
+# sufficient or irrelevant.
+#
+# WARNING: DO NOT PUT JAVA APPS HERE! Otherwise they will install default
+# Ubuntu Java.  See Java section below!
+##
+# hadolint ignore=DL3008
+RUN apt-get -q update \
+    && apt-get -q install -y --no-install-recommends \
+        apt-utils \
+        build-essential \
+        bzip2 \
+        clang \
+        curl \
+        doxygen \
+        fuse \
+        g++ \
+        gcc \
+        git \
+        gnupg-agent \
+        libbz2-dev \
+        libcurl4-openssl-dev \
+        libfuse-dev \
+        libprotobuf-dev \
+        libprotoc-dev \
+        libsasl2-dev \
+        libsnappy-dev \
+        libssl-dev \
+        libtool \
+        libzstd1-dev \
+        locales \
+        make \
+        pinentry-curses \
+        pkg-config \
+        python \
+        python2.7 \
+        python-pip \
+        python-pkg-resources \
+        python-setuptools \
+        python-wheel \
+        rsync \
+        software-properties-common \
+        snappy \
+        sudo \
+        va

[hadoop-thirdparty] branch trunk updated: HADOOP-16595. [pb-upgrade] Create hadoop-thirdparty artifact to have shaded protobuf. Contributed by Vinayakumar B. (#1)

2020-01-12 Thread vinayakumarb
This is an automated email from the ASF dual-hosted git repository.

vinayakumarb pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop-thirdparty.git


The following commit(s) were added to refs/heads/trunk by this push:
 new fd78dcf  HADOOP-16595. [pb-upgrade] Create hadoop-thirdparty artifact to have shaded protobuf. Contributed by Vinayakumar B. (#1)
fd78dcf is described below

commit fd78dcf990adde4d09dc1c9dfbf46a83f710027b
Author: Vinayakumar B 
AuthorDate: Mon Jan 13 10:56:24 2020 +0530

HADOOP-16595. [pb-upgrade] Create hadoop-thirdparty artifact to have shaded protobuf. Contributed by Vinayakumar B. (#1)
---
 .github/pull_request_template.md   |   6 +
 .gitignore |   9 +
 LICENSE-binary | 241 +++
 LICENSE.txt| 224 ++
 NOTICE-binary  | 780 +
 NOTICE.txt |  34 +
 dev-support/bin/create-release | 641 +
 dev-support/bin/releasedocmaker|  18 +
 dev-support/bin/yetus-wrapper  | 188 +
 dev-support/docker/Dockerfile  | 219 ++
 dev-support/docker/hadoop_env_checks.sh| 117 
 hadoop-shaded-protobuf_3_7/pom.xml | 115 +++
 licenses-binary/LICENSE-protobuf.txt   |  32 +
 pom.xml| 438 
 .../resources/assemblies/hadoop-thirdparty-src.xml |  62 ++
 src/site/markdown/index.md.vm  |  45 ++
 src/site/resources/css/site.css|  30 +
 src/site/site.xml  |  59 ++
 18 files changed, 3258 insertions(+)

diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md
new file mode 100644
index 000..2b5014b
--- /dev/null
+++ b/.github/pull_request_template.md
@@ -0,0 +1,6 @@
+## NOTICE
+
+Please create an issue in ASF JIRA before opening a pull request,
+and you need to set the title of the pull request which starts with
+the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 000..ed49e7c
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,9 @@
+.idea
+**/target/*
+*.patch
+*.iml
+.project
+.classpath
+.settings
+patchprocess
+**/dependency-reduced-pom.xml
diff --git a/LICENSE-binary b/LICENSE-binary
new file mode 100644
index 000..6c668ef
--- /dev/null
+++ b/LICENSE-binary
@@ -0,0 +1,241 @@
+
+ Apache License
+   Version 2.0, January 2004
+http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+  "License" shall mean the terms and conditions for use, reproduction,
+  and distribution as defined by Sections 1 through 9 of this document.
+
+  "Licensor" shall mean the copyright owner or entity authorized by
+  the copyright owner that is granting the License.
+
+  "Legal Entity" shall mean the union of the acting entity and all
+  other entities that control, are controlled by, or are under common
+  control with that entity. For the purposes of this definition,
+  "control" means (i) the power, direct or indirect, to cause the
+  direction or management of such entity, whether by contract or
+  otherwise, or (ii) ownership of fifty percent (50%) or more of the
+  outstanding shares, or (iii) beneficial ownership of such entity.
+
+  "You" (or "Your") shall mean an individual or Legal Entity
+  exercising permissions granted by this License.
+
+  "Source" form shall mean the preferred form for making modifications,
+  including but not limited to software source code, documentation
+  source, and configuration files.
+
+  "Object" form shall mean any form resulting from mechanical
+  transformation or translation of a Source form, including but
+  not limited to compiled object code, generated documentation,
+  and conversions to other media types.
+
+  "Work" shall mean the work of authorship, whether in Source or
+  Object form, made available under the License, as indicated by a
+  copyright notice that is included in or attached to the work
+  (an example is provided in the Appendix below).
+
+  "Derivative Works" shall mean any work, whether in Source or Object
+  form, that is based on (or derived from) the Work and for which the
+  editorial revisions, annotations, elaborations, or other modifications
+  represent, as a whole, an original work of authorship. For the purposes
+  of this License, Derivative Works sh