[jira] [Reopened] (HADOOP-11219) Upgrade to netty 4

2019-01-22 Thread Akira Ajisaka (JIRA)


 [ https://issues.apache.org/jira/browse/HADOOP-11219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka reopened HADOOP-11219:

  Assignee: Haohui Mai

Sorry, some modules are still using Netty 3. Reopening.

> Upgrade to netty 4
> --
>
> Key: HADOOP-11219
> URL: https://issues.apache.org/jira/browse/HADOOP-11219
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Major
>
> This is an umbrella jira to track the effort of upgrading to Netty 4.






[jira] [Resolved] (HADOOP-11219) Upgrade to netty 4

2019-01-22 Thread Akira Ajisaka (JIRA)


 [ https://issues.apache.org/jira/browse/HADOOP-11219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka resolved HADOOP-11219.

Resolution: Duplicate
  Assignee: (was: Haohui Mai)

Closing this as duplicate.

> Upgrade to netty 4
> --
>
> Key: HADOOP-11219
> URL: https://issues.apache.org/jira/browse/HADOOP-11219
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Priority: Major
>
> This is an umbrella jira to track the effort of upgrading to Netty 4.






[jira] [Created] (HADOOP-16065) -Ddynamodb should be -Ddynamo in AWS SDK testing document

2019-01-22 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-16065:
--

 Summary: -Ddynamodb should be -Ddynamo in AWS SDK testing document
 Key: HADOOP-16065
 URL: https://issues.apache.org/jira/browse/HADOOP-16065
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Akira Ajisaka


{{-Ddynamodb}} should be {{-Ddynamo}}.






Re: Cannot kill Pre-Commit jenkins builds

2019-01-22 Thread Arun Suresh
Hmmm.. as per this (https://wiki.apache.org/general/Jenkins), it looks like my
id needs to be added to the hudson-jobadmin group to effect any changes on
Jenkins.
But I'm wondering why it was revoked in the first place.

On Tue, Jan 22, 2019 at 4:21 PM Vinod Kumar Vavilapalli 
wrote:

> Minus private.
>
> Which specific job are you looking at? I looked at
> https://builds.apache.org/job/PreCommit-YARN-Build/ but can't seem to
> find any user-specific auth.
>
> +Vinod
>
> On Jan 22, 2019, at 10:00 AM, Arun Suresh  wrote:
>
> Hey Vinod.. Ping!
>
> Cheers
> -Arun
>
> On Fri, Jan 18, 2019 at 9:46 AM Arun Suresh  wrote:
>
> Hi Vinod
>
> Can you please help with this:
> https://issues.apache.org/jira/browse/INFRA-17673 ?
>
> Cheers
> -Arun
>
> On Wed, Jan 16, 2019, 12:53 PM Arun Suresh 
> Hi
>
> We are currently trying to get the branch-2 pre-commit builds working.
> I used to be able to kill Pre-Commit jenkins jobs, but looks like I am
> not allowed to anymore. Has anything changed recently w.r.t permissions etc
> ?
>
> Cheers
> -Arun
>
>
>
>


Re: Cannot kill Pre-Commit jenkins builds

2019-01-22 Thread Arun Suresh
Hey Vinod.. Ping!

Cheers
-Arun

On Fri, Jan 18, 2019 at 9:46 AM Arun Suresh  wrote:

> Hi Vinod
>
> Can you please help with this:
> https://issues.apache.org/jira/browse/INFRA-17673 ?
>
> Cheers
> -Arun
>
> On Wed, Jan 16, 2019, 12:53 PM Arun Suresh 
>> Hi
>>
>> We are currently trying to get the branch-2 pre-commit builds working.
>> I used to be able to kill Pre-Commit jenkins jobs, but looks like I am
>> not allowed to anymore. Has anything changed recently w.r.t permissions etc
>> ?
>>
>> Cheers
>> -Arun
>>
>


Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-22 Thread Brian Demers
Anyone else getting timeout errors with the MIT keypool? The
ubuntu keypool seems ok

On Tue, Jan 22, 2019 at 1:28 AM Wangda Tan  wrote:

> It seems there's no useful information in the log :(. Maybe I should
> change my key and try again. In the meantime, Sunil will help me to create
> the release and get 3.1.2 out.
>
> Thanks everybody for helping with this, really appreciate it!
>
> Best,
> Wangda
>
> On Mon, Jan 21, 2019 at 9:55 PM Chris Lambertus  wrote:
>
>> 2019-01-22 05:40:41 INFO  [99598137-805273] -
>> com.sonatype.nexus.staging.internal.DefaultStagingManager - Dropping
>> staging repositories [orgapachehadoop-1201]
>> 2019-01-22 05:40:42 INFO  [ool-1-thread-14] -
>> com.sonatype.nexus.staging.internal.task.StagingBackgroundTask - STARTED
>> Dropping staging repositories: [orgapachehadoop-1201]
>> 2019-01-22 05:40:42 INFO  [ool-1-thread-14] -
>> org.sonatype.nexus.configuration.ModelUtils - Saving model
>> /x1/nexus-work/conf/staging.xml
>> 2019-01-22 05:40:42 INFO  [ool-1-thread-14] -
>> com.sonatype.nexus.staging.internal.task.RepositoryDropTask - Dropping:
>> DropItem{id=orgapachehadoop-1201, state=open, group=false}
>> 2019-01-22 05:40:42 INFO  [ool-1-thread-14] -
>> org.sonatype.nexus.proxy.registry.DefaultRepositoryRegistry - Removed
>> repository "orgapachehadoop-1201 (staging: open)"
>> [id=orgapachehadoop-1201][contentClass=Maven2][mainFacet=org.sonatype.nexus.proxy.maven.MavenHostedRepository]
>> 2019-01-22 05:40:42 INFO  [ool-1-thread-14] -
>> org.sonatype.nexus.configuration.application.DefaultNexusConfiguration -
>> Applying Nexus Configuration due to changes in [Repository Grouping
>> Configuration] made by *TASK...
>> 2019-01-22 05:40:42 INFO  [ool-1-thread-14] -
>> org.sonatype.nexus.configuration.ModelUtils - Saving model
>> /x1/nexus-work/conf/staging.xml
>> 2019-01-22 05:40:42 INFO  [ool-1-thread-14] -
>> org.sonatype.nexus.configuration.ModelUtils - Saving model
>> /x1/nexus-work/conf/staging.xml
>> 2019-01-22 05:40:42 INFO  [ool-1-thread-14] -
>> org.sonatype.nexus.configuration.ModelUtils - Saving model
>> /x1/nexus-work/conf/staging.xml
>> 2019-01-22 05:40:42 INFO  [ool-1-thread-14] -
>> com.sonatype.nexus.staging.internal.task.StagingBackgroundTask - FINISHED
>> Dropping staging repositories: [orgapachehadoop-1201]
>> 2019-01-22 05:40:42 INFO  [ool-1-thread-14] -
>> org.sonatype.nexus.configuration.application.DefaultNexusConfiguration -
>> Applying Nexus Configuration due to changes in [Scheduled Tasks] made by
>> *TASK...
>> 2019-01-22 05:40:42 INFO  [pool-1-thread-3] -
>> org.sonatype.nexus.tasks.DeleteRepositoryFoldersTask - Scheduled task
>> (DeleteRepositoryFoldersTask) started :: Deleting folders with repository
>> ID: orgapachehadoop-1201
>> 2019-01-22 05:40:42 INFO  [pool-1-thread-3] -
>> org.sonatype.nexus.tasks.DeleteRepositoryFoldersTask - Scheduled task
>> (DeleteRepositoryFoldersTask) finished :: Deleting folders with repository
>> ID: orgapachehadoop-1201 (started 2019-01-22T05:40:42+00:00, runtime
>> 0:00:00.023)
>> 2019-01-22 05:40:42 INFO  [pool-1-thread-3] -
>> org.sonatype.nexus.configuration.application.DefaultNexusConfiguration -
>> Applying Nexus Configuration due to changes in [Scheduled Tasks] made by
>> *TASK...
>> 2019-01-22 05:40:50 INFO  [99598137-805254] -
>> com.sonatype.nexus.staging.internal.DefaultStagingManager - Creating
>> staging repository under profile id = '6a441994c87797' for deploy
>> RestDeployRequest
>> [path=/org/apache/hadoop/hadoop-main/3.1.2/hadoop-main-3.1.2.pom,
>> repositoryType=maven2, action=create, acceptMode=DEPLOY] (explicit=false)
>> 2019-01-22 05:40:50 INFO  [99598137-805254] -
>> org.sonatype.nexus.configuration.ModelUtils - Saving model
>> /x1/nexus-work/conf/staging.xml
>> 2019-01-22 05:40:50 INFO  [99598137-805254] -
>> org.sonatype.nexus.configuration.ModelUtils - Saving model
>> /x1/nexus-work/conf/staging.xml
>> 2019-01-22 05:40:50 INFO  [99598137-805254] -
>> org.sonatype.nexus.proxy.maven.routing.internal.ManagerImpl - Initializing
>> non-existing prefix file of newly added "orgapachehadoop-1202 (staging:
>> open)" [id=orgapachehadoop-1202]
>> 2019-01-22 05:40:50 INFO  [ar-7-thread-5  ] -
>> org.sonatype.nexus.proxy.maven.routing.internal.ManagerImpl - Updated and
>> published prefix file of "orgapachehadoop-1202 (staging: open)"
>> [id=orgapachehadoop-1202]
>> 2019-01-22 05:40:50 INFO  [99598137-805254] -
>> org.sonatype.nexus.proxy.registry.DefaultRepositoryRegistry - Added
>> repository "orgapachehadoop-1202 (staging: open)"
>> [id=orgapachehadoop-1202][contentClass=Maven2][mainFacet=org.sonatype.nexus.proxy.maven.MavenHostedRepository]
>> 2019-01-22 05:40:50 INFO  [99598137-805254] -
>> org.sonatype.nexus.configuration.application.DefaultNexusConfiguration -
>> Applying Nexus Configuration made by wangda...
>> 2019-01-22 05:40:50 INFO  [99598137-805254] -
>> org.sonatype.nexus.configuration.application.DefaultNexusConfiguration -
>> Applying Nexus Configuration due to changes in [orgapachehadoop-1202
>> 

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-01-22 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/

[Jan 21, 2019 1:54:58 AM] (tasanuma) HADOOP-16046. [JDK 11] Correct the 
compiler exclusion of
[Jan 21, 2019 8:54:14 AM] (wwei) YARN-9204. RM fails to start if absolute 
resource is specified for
[Jan 21, 2019 3:54:51 PM] (sunilg) Make 3.2.0 aware to other branches
[Jan 21, 2019 5:11:26 PM] (sunilg) Make 3.2.0 aware to other branches - jdiff
[Jan 22, 2019 1:19:05 AM] (aajisaka) HADOOP-15787. [JDK11] 
TestIPC.testRTEDuringConnectionSetup fails.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics 
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.sls.appmaster.TestAMSimulator 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/diff-compile-javac-root.txt
  [336K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/diff-patch-hadolint.txt
  [8.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/whitespace-eol.txt
  [9.3M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/whitespace-tabs.txt
  [1.1M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/branch-findbugs-hadoop-ozone_s3gateway.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/branch-findbugs-hadoop-ozone_tools.txt
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/diff-javadoc-javadoc-root.txt
  [752K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [328K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [84K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1024/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
  [8.0K]
   

[jira] [Created] (HADOOP-16064) Load configuration values from external sources

2019-01-22 Thread Elek, Marton (JIRA)
Elek, Marton created HADOOP-16064:
-

 Summary: Load configuration values from external sources
 Key: HADOOP-16064
 URL: https://issues.apache.org/jira/browse/HADOOP-16064
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Elek, Marton


This is a proposal to improve Configuration.java to load configuration from 
external sources (a Kubernetes config map, an external HTTP request, any cluster 
manager like Ambari, etc.).

I will attach a patch to illustrate the proposed solution, but please comment on 
the concept first; the patch is just a PoC and not fully implemented.

*Goals:*
 * Load the configuration files (core-site.xml/hdfs-site.xml/...) from 
external locations instead of the classpath (the classpath remains the default)
 * Make the configuration loading extensible
 * Do it in a backward-compatible way, with minimal change to the existing 
Configuration.java

*Use-cases:*

 1.) Load configuration from the namenode (http://namenode:9878/conf). With 
this approach only the namenode needs to be configured; other components require 
only the URL of the namenode.

 2.) Read configuration directly from a Kubernetes config map (or Mesos).

 3.) Read configuration from any external cluster management tool (such as Apache 
Ambari or any equivalent).

 4.) As of now, in the Hadoop Docker images we transform environment variables 
(such as HDFS-SITE.XML_fs.defaultFs) to configuration XML files with the help 
of a Python script. With the proposed implementation it would be possible to 
read the configuration directly from the system environment variables.

*Problem:*

The existing Configuration.java can read configuration from multiple sources, 
but most of the time it's used to load predefined config names ("core-site.xml" 
and "hdfs-site.xml") without a configuration location. In this case the files 
are loaded from the classpath.

I propose to add an additional option to define the default location of 
core-site.xml and hdfs-site.xml (any configuration which is referenced by string 
name), so that they can be loaded from external sources instead of the classpath.

The configuration loading requires an implementation plus configuration (where 
the external configs are located). We can't use the regular configuration to 
configure the config loader (chicken/egg problem).

I propose to use a new environment variable, HADOOP_CONF_SOURCE.

The environment variable could contain a URL, where the scheme of the URL defines 
the config source and all the other parts configure access to the resource.

Examples:

HADOOP_CONF_SOURCE=hadoop-http://namenode:9878/conf

HADOOP_CONF_SOURCE=env://prefix

HADOOP_CONF_SOURCE=k8s://config-map-name

The ConfigurationSource interface can be as easy as:
{code:java}
import java.io.IOException;
import java.net.URI;
import java.util.List;

/**
 * Interface to load Hadoop configuration from a custom location.
 */
public interface ConfigurationSource {

  /**
   * Method will be called once with the defined configuration URL.
   *
   * @param uri the configuration source URL (taken from HADOOP_CONF_SOURCE)
   */
  void initialize(URI uri) throws IOException;

  /**
   * Method will be called to load a specific configuration resource.
   *
   * @param name of the configuration resource (e.g. hdfs-site.xml)
   * @return List of the loaded configuration keys and values.
   */
  List readConfiguration(String name);

}{code}
We can choose the right implementation based on the scheme of the URI, using the 
Java Service Provider Interface mechanism 
(META-INF/services/org.apache.hadoop.conf.ConfigurationSource).
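
As an illustration only (this is a sketch, not part of the attached patch; the 
supportsScheme() helper is a hypothetical addition to the interface above), the 
scheme-based resolution could look roughly like this:
{code:java}
// Hedged sketch: pick a ConfigurationSource via the Java SPI, based on the
// scheme of the HADOOP_CONF_SOURCE URL. supportsScheme() is an assumed helper,
// not part of the interface shown above.
import java.io.IOException;
import java.net.URI;
import java.util.ServiceLoader;

public final class ConfigurationSourceResolver {

  public static ConfigurationSource resolve(URI source) throws IOException {
    // META-INF/services/org.apache.hadoop.conf.ConfigurationSource lists the
    // available implementations; ServiceLoader instantiates them on demand.
    for (ConfigurationSource candidate
        : ServiceLoader.load(ConfigurationSource.class)) {
      if (candidate.supportsScheme(source.getScheme())) {
        candidate.initialize(source);
        return candidate;
      }
    }
    throw new IOException("No ConfigurationSource for scheme: "
        + source.getScheme());
  }

  private ConfigurationSourceResolver() {
  }
}
{code}
With such a resolver, Configuration.java would only need to parse 
HADOOP_CONF_SOURCE once and then ask the selected source for the named resources.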

It could be done with minimal modification to Configuration.java (see the 
attached patch as an example).

 The patch contains two example implementations:

*hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/location/Env.java*

This can load configuration from environment variables based on a naming 
convention (e.g. HDFS-SITE.XML_hdfs.dfs.key=value).
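
A minimal sketch of what such a source could look like, assuming the returned 
list elements are plain key/value pairs (the attached Env.java may differ in its 
details):
{code:java}
// Hedged sketch only, not the attached Env.java: read key/value pairs from
// environment variables that follow the naming convention
// <CONFIG-FILE-NAME>_<config.key>=<value> (e.g. HDFS-SITE.XML_fs.defaultFs=...).
import java.net.URI;
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class Env implements ConfigurationSource {

  @Override
  public void initialize(URI uri) {
    // Nothing to initialize: the process environment is always available.
  }

  @Override
  public List readConfiguration(String name) {
    // name is e.g. "hdfs-site.xml", so matching variables start with "HDFS-SITE.XML_".
    String prefix = name.toUpperCase() + "_";
    List<Map.Entry<String, String>> entries = new ArrayList<>();
    for (Map.Entry<String, String> env : System.getenv().entrySet()) {
      if (env.getKey().startsWith(prefix)) {
        entries.add(new AbstractMap.SimpleEntry<>(
            env.getKey().substring(prefix.length()), env.getValue()));
      }
    }
    return entries;
  }
}
{code}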

*hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/location/HadoopWeb.java*

 This implementation can load the configuration from the /conf servlet of any 
Hadoop component.
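
Again as a sketch only (the attached HadoopWeb.java may differ): assuming the URI 
passed to initialize() is a plain http(s) URL and the servlet returns the usual 
<property><name>/<value> XML produced by Configuration.writeXml, the source could 
be implemented roughly like this:
{code:java}
// Hedged sketch, not the attached HadoopWeb.java: fetch the XML emitted by the
// /conf servlet of a running Hadoop daemon and turn it into key/value pairs.
// Error handling is reduced to the bare minimum.
import java.io.InputStream;
import java.net.URI;
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class HadoopWeb implements ConfigurationSource {

  private URI confServlet;

  @Override
  public void initialize(URI uri) {
    // Assumed simplification: uri is already a resolvable http(s) URL.
    this.confServlet = uri;
  }

  @Override
  public List readConfiguration(String name) {
    // The /conf servlet returns the full configuration, so name is not used here.
    List<Map.Entry<String, String>> entries = new ArrayList<>();
    try (InputStream in = confServlet.toURL().openStream()) {
      Document doc = DocumentBuilderFactory.newInstance()
          .newDocumentBuilder().parse(in);
      NodeList properties = doc.getElementsByTagName("property");
      for (int i = 0; i < properties.getLength(); i++) {
        Element property = (Element) properties.item(i);
        String key = property.getElementsByTagName("name").item(0).getTextContent();
        String value = property.getElementsByTagName("value").item(0).getTextContent();
        entries.add(new AbstractMap.SimpleEntry<>(key, value));
      }
    } catch (Exception e) {
      throw new RuntimeException("Cannot load configuration from " + confServlet, e);
    }
    return entries;
  }
}
{code}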

 






[jira] [Created] (HADOOP-16063) Docker based pseudo-cluster definitions and test scripts for Hdfs/Yarn

2019-01-22 Thread Elek, Marton (JIRA)
Elek, Marton created HADOOP-16063:
-

 Summary: Docker based pseudo-cluster definitions and test scripts 
for Hdfs/Yarn
 Key: HADOOP-16063
 URL: https://issues.apache.org/jira/browse/HADOOP-16063
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Elek, Marton


During the recent releases of Apache Hadoop Ozone we ran multiple experiments 
using Docker/docker-compose to support the development of Ozone.

As of now the hadoop-ozone distribution contains two directories in addition to 
the regular Hadoop directories (bin, share/lib, etc.):
h3. compose

The ./compose directory of the distribution contains different types of 
pseudo-cluster definitions. Starting an Ozone cluster is as easy as "cd 
compose/ozone && docker-compose up -d".

The clusters can also be scaled up and down (docker-compose scale datanode=3).

There are multiple cluster definitions for different use cases (for example 
ozone+s3 or hdfs+ozone).

The docker-compose files are based on the apache/hadoop-runner image, which is an 
"empty" image: it doesn't contain any Hadoop distribution. Instead the current 
Hadoop build is used (../.. is mapped as a volume at /opt/hadoop).

With this approach it's very easy to 1) start a cluster from the distribution and 
2) test any patch from the dev tree, as after any build a new cluster can be 
started easily (with multiple nodes and datanodes).
h3. smoketest

We also started to use a simple Robot Framework based test suite (see the 
./smoketest directory). It's a high-level test definition, very similar to the 
smoke tests which are executed manually by contributors during a release vote.

But it's a formal definition that starts clusters from different docker-compose 
definitions and executes simple shell scripts (and compares the output).

 

I believe that both approaches helped a lot during the development of Ozone, and 
I propose to make the same improvements to the main Hadoop distribution.

I propose to provide docker-compose based example cluster definitions for 
YARN/HDFS and for different use cases (simple HDFS, router-based federation, 
etc.).

This can help users understand the different configurations and try out new 
features with a predefined config set.

Long term we can also add robot tests to help with the release votes (basic 
wordcount/MR tests could be scripted).






[ANNOUNCE] Apache Hadoop 3.2.0 release

2019-01-22 Thread Sunil G
Greetings all,

It gives me great pleasure to announce that the Apache Hadoop community has
voted to
release Apache Hadoop 3.2.0.

Apache Hadoop 3.2.0 is the first release of the Apache Hadoop 3.2 line for the
year 2019, and includes 1092 fixes since the previous Hadoop 3.1.0 release.
Of these fixes:
   - 230 in Hadoop Common
   - 344 in HDFS
   - 484 in YARN
   - 34 in MapReduce

Apache Hadoop 3.2.0 contains a number of significant features and enhancements.
A few of them are noted below.

- ABFS Filesystem connector: supports the latest Azure Data Lake Storage Gen2.
- Enhanced S3A connector: includes better resilience to throttled AWS S3 and
  DynamoDB IO.
- Node Attributes Support in YARN: helps to tag nodes with multiple labels based
  on their attributes, and supports placing containers based on expressions over
  these labels.
- Storage Policy Satisfier: allows HDFS (Hadoop Distributed File System)
  applications to move blocks between storage types as storage policies are set
  on files/directories.
- Hadoop Submarine: enables data engineers to easily develop, train and deploy
  deep learning models (in TensorFlow) on the very same Hadoop YARN cluster.
- C++ HDFS client: provides async IO to HDFS, which helps downstream projects
  such as Apache ORC.
- Upgrades for long-running services: supports seamless in-place upgrades of
  long-running containers via the YARN Native Service API and CLI.

* For major changes included in the Hadoop 3.2 line, please refer to the Hadoop
3.2.0 main page [1].
* For more details about fixes in the 3.2.0 release, please read the
CHANGELOG [2] and RELEASENOTES [3].

The release news is posted on the Hadoop website too; you can go to
the downloads section directly [4].

Many thanks to everyone who contributed to the release, and everyone in the
Apache Hadoop community! This release is a direct result of your great
contributions.
Many thanks to Wangda Tan, Vinod Kumar Vavilapalli and Marton Elek who
helped in
this release process.

[1] https://hadoop.apache.org/docs/r3.2.0/
[2]
https://hadoop.apache.org/docs/r3.2.0/hadoop-project-dist/hadoop-common/release/3.2.0/CHANGELOG.3.2.0.html
[3]
https://hadoop.apache.org/docs/r3.2.0/hadoop-project-dist/hadoop-common/release/3.2.0/RELEASENOTES.3.2.0.html
[4] https://hadoop.apache.org/releases.html

Many Thanks,
Sunil Govindan