[VOTE] Apache Hadoop Ozone 0.5.0-beta RC2

2020-03-15 Thread Dinesh Chitlangia
Hi Folks,

We have put together RC2 for Apache Hadoop Ozone 0.5.0-beta.

The RC artifacts are at:
https://home.apache.org/~dineshc/ozone-0.5.0-rc2/

The public key used for signing the artifacts can be found at:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

The maven artifacts are staged at:
https://repository.apache.org/content/repositories/orgapachehadoop-1262

The RC tag in git is at:
https://github.com/apache/hadoop-ozone/tree/ozone-0.5.0-beta-RC2
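
For anyone verifying the RC, a typical set of checks might look like the
following (the tarball name below is illustrative; use the actual file names
published in the RC directory, and compare the sha512sum output against the
published .sha512 file):

  wget https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
  gpg --import KEYS
  gpg --verify hadoop-ozone-0.5.0-beta.tar.gz.asc   # detached signature placed next to the tarball
  sha512sum hadoop-ozone-0.5.0-beta.tar.gz          # compare with the published checksum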

This release contains 800+ fixes/improvements [1].
Thanks to everyone who put in the effort to make this happen.

*The vote will run for 7 days, ending on March 22nd 2020 at 11:59 pm PST.*

Note: This release is beta quality; it is not recommended for production
use, but we believe it is stable enough to try out the feature set and
collect feedback.
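
To try out the feature set with the binary tarball, a quick smoke run could
look roughly like this (assuming an unpacked tarball with a cluster already
started, for example via the docker compose files bundled under compose/ if
your build includes them; paths and service layout may differ):

  bin/ozone sh volume create /vol1
  bin/ozone sh bucket create /vol1/bucket1
  bin/ozone sh key put /vol1/bucket1/key1 LICENSE.txt
  bin/ozone sh key list /vol1/bucket1
  bin/ozone fs -ls o3fs://bucket1.vol1/   # o3fs URI may also need the OM host, depending on config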


[1] https://s.apache.org/ozone-0.5.0-fixed-issues

Thanks,
Dinesh Chitlangia


Re: [VOTE] Apache Hadoop Ozone 0.5.0-beta RC1

2020-03-15 Thread Dinesh Chitlangia
Thanks Shashikant for flagging the blocker issue.

RC1 now stands cancelled.

I will initiate RC2 shortly.



On Fri, Mar 13, 2020 at 12:08 PM Arpit Agarwal wrote:

> HDDS-3116 is now fixed in the ozone-0.5.0 branch.
>
> Folks - any more potential blockers before Dinesh spins RC2? I don’t see
> anything in Jira at the moment:
>
>
> https://issues.apache.org/jira/issues/?jql=project%20in%20(%22HDDS%22)%20and%20%22Target%20Version%2Fs%22%20in%20(0.5.0)%20and%20resolution%20in%20(Unresolved)%20and%20priority%20in%20(Blocker)
>
> Thanks,
> Arpit
>
>
> > On Mar 9, 2020, at 6:15 PM, Shashikant Banerjee wrote:
> >
> > I think https://issues.apache.org/jira/browse/HDDS-3116 is a blocker for
> > the release. Because of this, datanodes fail to communicate with SCM, are
> > marked dead, and don't seem to recover.
> > This has been observed in multiple test setups.
> >
> > Thanks
> > Shashi
> >
> > On Mon, Mar 9, 2020 at 9:20 PM Attila Doroszlai wrote:
> >
> >> +1
> >>
> >> * Verified GPG signature and SHA512 checksum
> >> * Compiled sources
> >> * Ran ozone smoke test against both binary and locally compiled versions
> >>
> >> Thanks Dinesh for RC1.
> >>
> >> -Attila
> >>
> >> On Sun, Mar 8, 2020 at 2:34 AM Arpit Agarwal wrote:
> >>>
> >>> +1 (binding)
> >>> Verified mds, sha512
> >>> Verified signatures
> >>> Built from source
> >>> Deployed to 3 node cluster
> >>> Tried a few ozone shell and filesystem commands
> >>> Ran freon load generator
> >>> Thanks Dinesh for putting the RC1 together.
> >>>
> >>>
> >>>
>  On Mar 6, 2020, at 4:46 PM, Dinesh Chitlangia wrote:
> 
>  Hi Folks,
> 
>  We have put together RC1 for Apache Hadoop Ozone 0.5.0-beta.
> 
>  The RC artifacts are at:
>  https://home.apache.org/~dineshc/ozone-0.5.0-rc1/
> 
>  The public key used for signing the artifacts can be found at:
>  https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> 
>  The maven artifacts are staged at:
>  https://repository.apache.org/content/repositories/orgapachehadoop-1260
> 
>  The RC tag in git is at:
>  https://github.com/apache/hadoop-ozone/tree/ozone-0.5.0-beta-RC1
> 
>  This release contains 800+ fixes/improvements [1].
>  Thanks to everyone who put in the effort to make this happen.
> 
>  *The vote will run for 7 days, ending on March 13th 2020 at 11:59 pm PST.*
> 
>  Note: This release is beta quality; it is not recommended for production
>  use, but we believe it is stable enough to try out the feature set and
>  collect feedback.
> 
> 
>  [1] https://s.apache.org/ozone-0.5.0-fixed-issues
> 
>  Thanks,
>  Dinesh Chitlangia
> >>>
> >>
> >> -
> >> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org  hdfs-dev-unsubscr...@hadoop.apache.org>
> >> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> 
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-03-15 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1439/

[Mar 14, 2020 1:55:06 AM] (weichiu) HDFS-15113. Missing IBR when NameNode restart if open processCommand
[Mar 14, 2020 2:01:23 AM] (weichiu) HDFS-14820. The default 8KB buffer of




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   Redundant nullcheck of dir, which is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String); redundant null check at 
BufferPool.java:[line 66] 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:[line 87] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, 
byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long): new String(byte[]) At 
CosNativeFileSystemStore.java:[line 178] 
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, 
String, String, int) may fail to clean up java.io.InputStream; the obligation to 
clean up the resource created at CosNativeFileSystemStore.java:[line 252] is not 
discharged 

Failed junit tests :

   hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate 
   hadoop.hdfs.server.namenode.TestNamenodeCapacityReport 
   hadoop.hdfs.server.datanode.TestBPOfferService 
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
   hadoop.hdfs.server.namenode.ha.TestEditLogTailer 
   hadoop.hdfs.server.federation.router.TestRouterFaultTolerant 
   hadoop.hdfs.server.federation.router.TestRouterRpc 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.yarn.sls.TestSLSStreamAMSynth 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1439/artifact/out/diff-compile-cc-root.txt
  [8.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1439/artifact/out/diff-compile-javac-root.txt
  [428K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1439/artifact/out/diff-checkstyle-root.txt
  [16M]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1439/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1439/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1439/artifact/out/diff-patch-shellcheck.txt
  [16K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1439/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1439/artifact/out/whitespace-eol.txt
  [9.9M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1439/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   

Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-03-15 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/625/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.web.TestWebHDFS 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.yarn.client.api.impl.TestTimelineClientV2Impl 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.sls.TestSLSRunner 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/625/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/625/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/625/artifact/out/diff-compile-cc-root-jdk1.8.0_242.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/625/artifact/out/diff-compile-javac-root-jdk1.8.0_242.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/625/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/625/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/625/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/625/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/625/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/625/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/625/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/625/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/625/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/625/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/625/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/625/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_242.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/625/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [236K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/625/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/625/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
  [36K]
   

Re: [VOTE] Release Apache Hadoop Thirdparty 1.0.0 - RC1

2020-03-15 Thread Surendra Singh Lilhore
+1

- Built trunk with -Dhadoop-thirdparty-protobuf.version=1.0.0
- Verified artifacts available in repo.
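
For reference, the kind of invocation used for such a check might look like
this (the staging-repo flag is the one quoted later in this thread; the exact
Maven goals are an assumption):

  mvn clean install -DskipTests \
    -DdistMgmtSnapshotsUrl=https://repository.apache.org/content/repositories/orgapachehadoop-1261/ \
    -Dhadoop-thirdparty-protobuf.version=1.0.0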


Thanks
Surendra


On Fri, Mar 13, 2020 at 9:51 AM Akira Ajisaka  wrote:

> +1
>
> - Verified signatures and checksums
> - Built jars and docs from source
> - Built hadoop trunk with hadoop-thirdparty 1.0.0
> - Checked rat files and documents
> - Checked LICENSE and NOTICE files
>
> Thanks,
> Akira
>
> On Thu, Mar 12, 2020 at 5:26 AM Vinayakumar B wrote:
>
> > Hi folks,
> >
> > Thanks to everyone's help on this release.
> >
> > I have re-created a release candidate (RC1) for Apache Hadoop Thirdparty
> > 1.0.0.
> >
> > RC Release artifacts are available at:
> >
> > http://home.apache.org/~vinayakumarb/release/hadoop-thirdparty-1.0.0-RC1/
> >
> > Maven artifacts are available in staging repo:
> >
> > https://repository.apache.org/content/repositories/orgapachehadoop-1261/
> >
> > The RC tag in git is here:
> > https://github.com/apache/hadoop-thirdparty/tree/release-1.0.0-RC1
> >
> > And my public key is at:
> > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >
> > *This vote will run for 5 days, ending on March 18th 2020 at 11:59 pm IST.*
> >
> > For the testing, I have verified Hadoop trunk compilation with
> > "-DdistMgmtSnapshotsUrl=https://repository.apache.org/content/repositories/orgapachehadoop-1261/ -Dhadoop-thirdparty-protobuf.version=1.0.0"
> >
> > My +1 to start.
> >
> > -Vinay
> >
>