[jira] [Resolved] (HADOOP-16999) ABFS: Reuse DSAS fetched in ABFS Input and Output stream

2020-07-12 Thread Sneha Vijayarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Vijayarajan resolved HADOOP-16999.

Release Note: Caching implemented as part of HADOOP-16916.
  Resolution: Duplicate

> ABFS: Reuse DSAS fetched in ABFS Input and Output stream
> 
>
> Key: HADOOP-16999
> URL: https://issues.apache.org/jira/browse/HADOOP-16999
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.1
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>
> This Jira tracks the update where ABFS input and output streams re-use the 
> D-SAS token already fetched. If the SAS is within 1 minute of expiry, ABFS 
> will request a new SAS. When the stream is closed, the SAS will be released. 
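
For context, a minimal sketch of this caching-with-refresh behaviour is shown
below. The class, field, and method names are hypothetical and do not reflect
the actual HADOOP-16916 implementation; it only illustrates the one-minute
refresh window described above.

    import java.time.Duration;
    import java.time.Instant;

    /** Illustrative sketch only: cache a delegation SAS token and refresh it
     *  when it is within one minute of expiry. */
    public class CachedSasTokenProvider {
      private static final Duration REFRESH_WINDOW = Duration.ofMinutes(1);

      private String cachedSasToken;
      private Instant expiry = Instant.EPOCH;

      /** Returns the cached SAS token, fetching a new one if it is about to expire. */
      public synchronized String getSasToken() {
        if (cachedSasToken == null
            || Instant.now().isAfter(expiry.minus(REFRESH_WINDOW))) {
          cachedSasToken = fetchNewSasToken();                 // call to the SAS provider
          expiry = Instant.now().plus(Duration.ofHours(1));    // expiry reported by the service
        }
        return cachedSasToken;
      }

      /** Called when the stream is closed so the cached token can be released. */
      public synchronized void close() {
        cachedSasToken = null;
      }

      private String fetchNewSasToken() {
        // Placeholder for the real token acquisition call.
        return "sv=...&sig=...";
      }
    }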






[jira] [Resolved] (HADOOP-17093) ABFS: GetAccessToken unrecoverable failures are being retried

2020-07-12 Thread Sneha Vijayarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Vijayarajan resolved HADOOP-17093.

Release Note: To be fixed by HADOOP-17092
  Resolution: Duplicate

> ABFS: GetAccessToken unrecoverable failures are being retried
> -
>
> Key: HADOOP-17093
> URL: https://issues.apache.org/jira/browse/HADOOP-17093
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
> Fix For: 3.4.0
>
>
> When an invalid config is set, the call to fetch the token fails with this exception:
> throw new UnexpectedResponseException(httpResponseCode,
>     requestId,
>     operation
>         + " Unexpected response."
>         + " Check configuration, URLs and proxy settings."
>         + " proxies=" + proxies,
>     authEndpoint,
>     responseContentType,
>     responseBody);
> The issue here is that UnexpectedResponseException is not recognized as an 
> unrecoverable state and ends up being retried. This needs to be fixed.
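
A rough sketch of the kind of classification described (treating such a failure
as non-retryable rather than retrying it) follows. The helper and the exception
stand-in below are hypothetical illustrations, not the actual HADOOP-17092 fix.

    import java.io.IOException;

    /** Illustrative sketch only: classify token-fetch failures before retrying. */
    public class TokenFetchRetryHelper {

      /** Hypothetical stand-in for the real UnexpectedResponseException type. */
      public static class UnexpectedResponseException extends IOException {
        private final int httpResponseCode;
        public UnexpectedResponseException(int code, String msg) {
          super(msg);
          this.httpResponseCode = code;
        }
        public int getHttpResponseCode() {
          return httpResponseCode;
        }
      }

      /**
       * Returns true only for failures worth retrying. A 4xx response to a token
       * request (bad client id, bad tenant, bad URL, ...) will never succeed on
       * retry, so it is reported immediately instead.
       */
      static boolean shouldRetry(IOException e) {
        if (e instanceof UnexpectedResponseException) {
          int code = ((UnexpectedResponseException) e).getHttpResponseCode();
          return code >= 500;   // retry only server-side errors
        }
        return true;            // other IO failures (timeouts etc.) remain retryable
      }
    }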






[jira] [Resolved] (HADOOP-16704) ABFS: Add additional tests for CustomTokenProvider

2020-07-12 Thread Sneha Vijayarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Vijayarajan resolved HADOOP-16704.

Release Note: Already addressed as part of other commits that improved 
handling of exponential retry and custom token provider.
  Resolution: Abandoned

> ABFS: Add additional tests for CustomTokenProvider
> --
>
> Key: HADOOP-16704
> URL: https://issues.apache.org/jira/browse/HADOOP-16704
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>
> ExponentialRetryPolicy retries failed HTTP requests 30 times, which is the 
> default retry count in the code. 
> If the client also has retry handling, the process appears hung to the client 
> app due to the high number of retries. This Jira aims to provide a config 
> control for the retry count.
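
A minimal sketch of such a config control appears below. The configuration key
and default shown are assumptions for illustration and may not match the actual
ABFS configuration key.

    import org.apache.hadoop.conf.Configuration;

    /** Illustrative sketch only: make the retry count configurable. */
    public class ConfigurableRetryPolicy {
      // Hypothetical key and default for illustration.
      private static final String MAX_RETRIES_KEY = "fs.azure.io.retry.max.retries";
      private static final int DEFAULT_MAX_RETRIES = 30;

      private final int maxRetries;

      public ConfigurableRetryPolicy(Configuration conf) {
        this.maxRetries = conf.getInt(MAX_RETRIES_KEY, DEFAULT_MAX_RETRIES);
      }

      /** A request is retried while the retry count is below the configured limit. */
      public boolean shouldRetry(int retryCount) {
        return retryCount < maxRetries;
      }
    }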






[jira] [Resolved] (HADOOP-16752) ABFS: test failure testLastModifiedTime()

2020-07-12 Thread Sneha Vijayarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Vijayarajan resolved HADOOP-16752.

Release Note: The failure was caused by a transient error on the backend.
  Resolution: Cannot Reproduce

> ABFS: test failure testLastModifiedTime()
> -
>
> Key: HADOOP-16752
> URL: https://issues.apache.org/jira/browse/HADOOP-16752
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Da Zhou
>Assignee: Sneha Vijayarajan
>Priority: Major
>
> java.lang.AssertionError: lastModifiedTime should be after minCreateStartTime
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemFileStatus.testLastModifiedTime(ITestAzureBlobFileSystemFileStatus.java:138)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)






[jira] [Resolved] (HADOOP-16892) ABFS: Backport HADOOP-16730

2020-07-12 Thread Sneha Vijayarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Vijayarajan resolved HADOOP-16892.

Release Note: Branch-2 has been deprecated, and there is currently no plan to 
backport this to the 2.x branch.
  Resolution: Won't Fix

> ABFS: Backport HADOOP-16730
> ---
>
> Key: HADOOP-16892
> URL: https://issues.apache.org/jira/browse/HADOOP-16892
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>
> Backport the commit that adds SAS-based Azure Storage access through the ABFS 
> driver (tracked under HADOOP-16730) to branch-2.






[jira] [Resolved] (HADOOP-17075) Improvement to the AccessControlException thrown by Azure abfs driver

2020-07-12 Thread Sneha Vijayarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Vijayarajan resolved HADOOP-17075.

Release Note: The error message came from the custom SAS token provider 
implementation. No change is needed in the ABFS driver.
  Resolution: Not A Bug

> Improvement to the AccessControlException thrown by Azure abfs driver
> -
>
> Key: HADOOP-17075
> URL: https://issues.apache.org/jira/browse/HADOOP-17075
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Ramesh Mani
>Assignee: Sneha Vijayarajan
>Priority: Major
>
> Currently, when an AccessControlException occurs in an ABFS driver call, the 
> entire stack trace is printed. To be consistent with how HDFS reports 
> "Permission denied", could we modify this in ABFSClient?
>  
> e.g:
> >$ hdfs dfs -ls /hbase/mobdir
> ls: Permission denied: user=user1, access=READ_EXECUTE, 
> inode="/hbase/mobdir":hbase:hbase:drwx--
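
As a rough illustration of the requested behaviour, a sketch of collapsing the
exception into a one-line HDFS-style message is shown below. The helper name
and message layout are assumptions, not the actual ABFS change.

    import org.apache.hadoop.security.AccessControlException;

    /** Illustrative sketch only: report a concise HDFS-style permission error. */
    public class PermissionErrorFormatter {

      /**
       * Collapses an AccessControlException into a one-line message instead of
       * printing the full stack trace.
       */
      static String toShortMessage(AccessControlException e) {
        // e.getMessage() is assumed to carry the user/access/path details.
        return "ls: Permission denied: " + e.getMessage();
      }

      public static void main(String[] args) {
        AccessControlException e = new AccessControlException(
            "user=user1, access=READ_EXECUTE, inode=\"/hbase/mobdir\"");
        System.err.println(toShortMessage(e));
      }
    }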






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2020-07-12 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/201/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Uncallable method 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
 defined in anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:[line 87] 
   Dead store to entities in 
org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At 
TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At TestTimelineReaderHBaseDown.java:[line 190] 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
   Uncallable method 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
 defined in anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:[line 87] 
   Dead store to entities in 
org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At 
TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At TestTimelineReaderHBaseDown.java:[line 190] 

findbugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 
   Uncallable method 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
 defined in anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:[line 87] 
   Dead store to entities in 
org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At 
TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At TestTimelineReaderHBaseDown.java:[line 190] 

findbugs :

   module:hadoop-yarn-project 
   Uncallable method 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
 defined in anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:[line 87] 
   Dead store to entities in 
org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At 
TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At TestTimelineReaderHBaseDown.java:[line 190] 

findbugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At 
CosNInputStream.java:[line 87] 

findbugs :

   module:hadoop-cloud-storage-project 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At 
CosNInputStream.java:[line 87] 
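
For readers unfamiliar with this findbugs pattern, the usual remedy is a
defensive copy of the internal array. The sketch below is illustrative only
and is not the actual hadoop-cos code.

    import java.util.Arrays;

    /** Illustrative sketch only: avoid exposing an internal buffer directly. */
    class ReadBuffer {
      private final byte[] buffer;

      ReadBuffer(byte[] buffer) {
        this.buffer = buffer;
      }

      /** Returning a copy prevents callers from mutating the internal array. */
      byte[] getBuffer() {
        return Arrays.copyOf(buffer, buffer.length);
      }
    }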

findbugs :

   

RE: [VOTE] Release Apache Hadoop 3.3.0 - RC0

2020-07-12 Thread Bilwa S T
+1(non-binding)

1. Deployed 3 node cluster
2. Browsed through Web UI (RM, NM)
3. Executed Jobs (pi, wordcount, TeraGen, TeraSort)
4. Verified basic yarn commands

Thanks,
Bilwa

-Original Message-
From: Surendra Singh Lilhore [mailto:surendralilh...@gmail.com] 
Sent: 12 July 2020 18:32
To: hemanth boyina 
Cc: Iñigo Goiri ; Vinayakumar B ; 
Brahma Reddy Battula ; mapreduce-dev 
; Hdfs-dev ; 
Hadoop Common ; yarn-dev 

Subject: Re: [VOTE] Release Apache Hadoop 3.3.0 - RC0

+1(binding)

Deployed HDFS and Yarn Cluster
> Verified basic shell commands
> Ran some jobs
> Verified UI

-Surendra

On Sat, Jul 11, 2020 at 9:41 PM hemanth boyina 
wrote:

> +1(non-binding)
> Deployed Cluster with Namenodes and Router
> *)verified shell commands
> *)Executed various jobs
> *)Browsed UI's
>
>
> Thanks,
> HemanthBoyina
>
>
> On Sat, 11 Jul 2020, 00:05 Iñigo Goiri,  wrote:
>
> > +1 (Binding)
> >
> > Deployed a cluster on Azure VMs with:
> > * 3 VMs with HDFS Namenodes and Routers
> > * 2 VMs with YARN Resource Managers
> > * 5 VMs with HDFS Datanodes and Node Managers
> >
> > Tests:
> > * Executed Teragen+Terasort+Teravalidate.
> > * Executed wordcount.
> > * Browsed through the Web UI.
> >
> >
> >
> > On Fri, Jul 10, 2020 at 1:06 AM Vinayakumar B 
> > 
> > wrote:
> >
> > > +1 (Binding)
> > >
> > > -Verified all checksums and Signatures.
> > > -Verified site, Release notes and Change logs
> > >   + May be changelog and release notes could be grouped based on 
> > > the project at second level for better look (this needs to be 
> > > supported
> from
> > > yetus)
> > > -Tested in x86 local 3-node docker cluster.
> > >   + Built from source with OpenJdk 8 and Ubuntu 18.04
> > >   + Deployed 3 node docker cluster
> > >   + Ran various Jobs (wordcount, Terasort, Pi, etc)
> > >
> > > No Issues reported.
> > >
> > > -Vinay
> > >
> > > On Fri, Jul 10, 2020 at 1:19 PM Sheng Liu 
> > wrote:
> > >
> > > > +1 (non-binding)
> > > >
> > > > - checkout the "3.3.0-aarch64-RC0" binaries packages
> > > >
> > > > - started a clusters with 3 nodes VMs of Ubuntu 18.04 
> > > > ARM/aarch64, openjdk-11-jdk
> > > >
> > > > - checked some web UIs (NN, DN, RM, NM)
> > > >
> > > > - Executed a wordcount, TeraGen, TeraSort and TeraValidate
> > > >
> > > > - Executed a TestDFSIO job
> > > >
> > > > - Executed a Pi job
> > > >
> > > > BR,
> > > > Liusheng
> > > >
Zhenyu Zheng  wrote on Fri, Jul 10, 2020 at 3:45 PM:
> > > >
> > > > > +1 (non-binding)
> > > > >
> > > > > - Verified all hashes and checksums
> > > > > - Tested on ARM platform for the following actions:
> > > > >   + Built from source on Ubuntu 18.04, OpenJDK 8
> > > > >   + Deployed a pseudo cluster
> > > > >   + Ran some example jobs(grep, wordcount, pi)
> > > > >   + Ran teragen/terasort/teravalidate
> > > > >   + Ran TestDFSIO job
> > > > >
> > > > > BR,
> > > > >
> > > > > Zhenyu
> > > > >
> > > > > On Fri, Jul 10, 2020 at 2:40 PM Akira Ajisaka 
> > > > >  >
> > > > wrote:
> > > > >
> > > > > > +1 (binding)
> > > > > >
> > > > > > - Verified checksums and signatures.
> > > > > > - Built from the source with CentOS 7 and OpenJDK 8.
> > > > > > - Successfully upgraded HDFS to 3.3.0-RC0 in our development
> > cluster
> > > > > (with
> > > > > > RBF, security, and OpenJDK 11) for end-users. No issues reported.
> > > > > > - The document looks good.
> > > > > > - Deployed pseudo cluster and ran some MapReduce jobs.
> > > > > >
> > > > > > Thanks,
> > > > > > Akira
> > > > > >
> > > > > >
> > > > > > On Tue, Jul 7, 2020 at 7:27 AM Brahma Reddy Battula <
> > > bra...@apache.org
> > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Hi folks,
> > > > > > >
> > > > > > > This is the first release candidate for the first release 
> > > > > > > of
> > Apache
> > > > > > > Hadoop 3.3.0
> > > > > > > line.
> > > > > > >
> > > > > > > It contains *1644[1]* fixed jira issues since 3.2.1 which
> > include a
> > > > lot
> > > > > > of
> > > > > > > features and improvements(read the full set of release notes).
> > > > > > >
> > > > > > > Below feature additions are the highlights of the release.
> > > > > > >
> > > > > > > - ARM Support
> > > > > > > - Enhancements and new features on S3a,S3Guard,ABFS
> > > > > > > - Java 11 Runtime support and TLS 1.3.
> > > > > > > - Support Tencent Cloud COS File System.
> > > > > > > - Added security to HDFS Router.
> > > > > > > - Support non-volatile storage class memory(SCM) in HDFS 
> > > > > > > cache
> > > > > directives
> > > > > > > - Support Interactive Docker Shell for running Containers.
> > > > > > > - Scheduling of opportunistic containers
> > > > > > > - A pluggable device plugin framework to ease vendor 
> > > > > > > plugin
> > > > development
> > > > > > >
> > > > > > > *The RC0 artifacts are at*:
> > > > > > > http://home.apache.org/~brahma/Hadoop-3.3.0-RC0/
> > > > > > >
> > > > > > > *First release to include ARM binary, Have a check.* *RC 
> > > > > > > tag is *release-3.3.0-RC0.
> > > > > > >
> > > > > > >
> > > > > > > *The maven artifacts 

Re: [VOTE] Release Apache Hadoop 3.3.0 - RC0

2020-07-12 Thread Surendra Singh Lilhore
+1(binding)

Deployed HDFS and Yarn Cluster
> Verified basic shell commands
> Ran some jobs
> Verified UI

-Surendra

On Sat, Jul 11, 2020 at 9:41 PM hemanth boyina 
wrote:

> +1(non-binding)
> Deployed Cluster with Namenodes and Router
> *)verified shell commands
> *)Executed various jobs
> *)Browsed UI's
>
>
> Thanks,
> HemanthBoyina
>
>
> On Sat, 11 Jul 2020, 00:05 Iñigo Goiri,  wrote:
>
> > +1 (Binding)
> >
> > Deployed a cluster on Azure VMs with:
> > * 3 VMs with HDFS Namenodes and Routers
> > * 2 VMs with YARN Resource Managers
> > * 5 VMs with HDFS Datanodes and Node Managers
> >
> > Tests:
> > * Executed Teragen+Terasort+Teravalidate.
> > * Executed wordcount.
> > * Browsed through the Web UI.
> >
> >
> >
> > On Fri, Jul 10, 2020 at 1:06 AM Vinayakumar B 
> > wrote:
> >
> > > +1 (Binding)
> > >
> > > -Verified all checksums and Signatures.
> > > -Verified site, Release notes and Change logs
> > >   + May be changelog and release notes could be grouped based on the
> > > project at second level for better look (this needs to be supported
> from
> > > yetus)
> > > -Tested in x86 local 3-node docker cluster.
> > >   + Built from source with OpenJdk 8 and Ubuntu 18.04
> > >   + Deployed 3 node docker cluster
> > >   + Ran various Jobs (wordcount, Terasort, Pi, etc)
> > >
> > > No Issues reported.
> > >
> > > -Vinay
> > >
> > > On Fri, Jul 10, 2020 at 1:19 PM Sheng Liu 
> > wrote:
> > >
> > > > +1 (non-binding)
> > > >
> > > > - checkout the "3.3.0-aarch64-RC0" binaries packages
> > > >
> > > > - started a clusters with 3 nodes VMs of Ubuntu 18.04 ARM/aarch64,
> > > > openjdk-11-jdk
> > > >
> > > > - checked some web UIs (NN, DN, RM, NM)
> > > >
> > > > - Executed a wordcount, TeraGen, TeraSort and TeraValidate
> > > >
> > > > - Executed a TestDFSIO job
> > > >
> > > > - Executed a Pi job
> > > >
> > > > BR,
> > > > Liusheng
> > > >
Zhenyu Zheng  wrote on Fri, Jul 10, 2020 at 3:45 PM:
> > > >
> > > > > +1 (non-binding)
> > > > >
> > > > > - Verified all hashes and checksums
> > > > > - Tested on ARM platform for the following actions:
> > > > >   + Built from source on Ubuntu 18.04, OpenJDK 8
> > > > >   + Deployed a pseudo cluster
> > > > >   + Ran some example jobs(grep, wordcount, pi)
> > > > >   + Ran teragen/terasort/teravalidate
> > > > >   + Ran TestDFSIO job
> > > > >
> > > > > BR,
> > > > >
> > > > > Zhenyu
> > > > >
> > > > > On Fri, Jul 10, 2020 at 2:40 PM Akira Ajisaka  >
> > > > wrote:
> > > > >
> > > > > > +1 (binding)
> > > > > >
> > > > > > - Verified checksums and signatures.
> > > > > > - Built from the source with CentOS 7 and OpenJDK 8.
> > > > > > - Successfully upgraded HDFS to 3.3.0-RC0 in our development
> > cluster
> > > > > (with
> > > > > > RBF, security, and OpenJDK 11) for end-users. No issues reported.
> > > > > > - The document looks good.
> > > > > > - Deployed pseudo cluster and ran some MapReduce jobs.
> > > > > >
> > > > > > Thanks,
> > > > > > Akira
> > > > > >
> > > > > >
> > > > > > On Tue, Jul 7, 2020 at 7:27 AM Brahma Reddy Battula <
> > > bra...@apache.org
> > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Hi folks,
> > > > > > >
> > > > > > > This is the first release candidate for the first release of
> > Apache
> > > > > > > Hadoop 3.3.0
> > > > > > > line.
> > > > > > >
> > > > > > > It contains *1644[1]* fixed jira issues since 3.2.1 which
> > include a
> > > > lot
> > > > > > of
> > > > > > > features and improvements(read the full set of release notes).
> > > > > > >
> > > > > > > Below feature additions are the highlights of the release.
> > > > > > >
> > > > > > > - ARM Support
> > > > > > > - Enhancements and new features on S3a,S3Guard,ABFS
> > > > > > > - Java 11 Runtime support and TLS 1.3.
> > > > > > > - Support Tencent Cloud COS File System.
> > > > > > > - Added security to HDFS Router.
> > > > > > > - Support non-volatile storage class memory(SCM) in HDFS cache
> > > > > directives
> > > > > > > - Support Interactive Docker Shell for running Containers.
> > > > > > > - Scheduling of opportunistic containers
> > > > > > > - A pluggable device plugin framework to ease vendor plugin
> > > > development
> > > > > > >
> > > > > > > *The RC0 artifacts are at*:
> > > > > > > http://home.apache.org/~brahma/Hadoop-3.3.0-RC0/
> > > > > > >
> > > > > > > *First release to include ARM binary, Have a check.*
> > > > > > > *RC tag is *release-3.3.0-RC0.
> > > > > > >
> > > > > > >
> > > > > > > *The maven artifacts are hosted here:*
> > > > > > >
> > > > >
> > >
> https://repository.apache.org/content/repositories/orgapachehadoop-1271/
> > > > > > >
> > > > > > > *My public key is available here:*
> > > > > > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > > > > > >
> > > > > > > The vote will run for 5 weekdays, until Tuesday, July 13 at
> 3:50
> > AM
> > > > > IST.
> > > > > > >
> > > > > > >
> > > > > > > I have done a few testing with my pseudo cluster. My +1 to
> start.
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > Regards,

Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-07-12 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/745/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint jshint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Useless object stored in variable removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:[line 664] 
   
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:[line 741] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:[line 359] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
 is a mutable collection which should be package protected At 
ContainerMetrics.java:which should be package protected At 
ContainerMetrics.java:[line 134] 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 
   
org.apache.hadoop.yarn.state.StateMachineFactory.generateStateGraph(String) 
makes inefficient use of keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:[line 505] 
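
The "keySet iterator instead of entrySet iterator" warnings repeated above all
point at the same pattern; a small illustrative sketch (not the flagged Hadoop
code itself) of the flagged form and the preferred form:

    import java.util.HashMap;
    import java.util.Map;

    /** Illustrative sketch only: iterate entrySet() instead of keySet()+get(). */
    public class EntrySetIterationExample {
      public static void main(String[] args) {
        Map<String, Integer> counts = new HashMap<>();
        counts.put("a", 1);
        counts.put("b", 2);

        // Flagged pattern: each get() is an extra map lookup.
        for (String key : counts.keySet()) {
          System.out.println(key + "=" + counts.get(key));
        }

        // Preferred pattern: one traversal yields both key and value.
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
          System.out.println(e.getKey() + "=" + e.getValue());
        }
      }
    }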

findbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
   
org.apache.hadoop.yarn.state.StateMachineFactory.generateStateGraph(String) 
makes inefficient use of keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:[line 505] 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
   Useless object stored in variable removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:[line 664] 
   
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:[line 741] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:[line 359] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
 is a mutable collection which should be package protected At 
ContainerMetrics.java:which should be package protected At 
ContainerMetrics.java:[line 134] 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in