Re: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0
Thanks Andrew!

+1 (binding)

* Verified signatures and checksums
* Built from source on CentOS 7.2 with OpenJDK 1.8.0_131 with -Pnative
* Deployed a pseudo-distributed cluster and ran some example jobs
* The change log and the release notes look good.

Regards,
Akira

On 2017/06/30 11:40, Andrew Wang wrote:
Hi all,

As always, thanks to the many, many contributors who helped with this release! I've prepared an RC0 for 3.0.0-alpha4:

http://home.apache.org/~wang/3.0.0-alpha4-RC0/

The standard 5-day vote would run until midnight on Tuesday, July 4th. Given that July 4th is a holiday in the US, I expect this vote might have to be extended, but I'd like to close the vote relatively soon after.

I've done my traditional testing of a pseudo-distributed cluster with a single-task pi job, which was successful.

Normally my testing would end there, but I'm slightly more confident this time. At Cloudera, we've successfully packaged and deployed a snapshot from a few days ago, and run basic smoke tests. Some bugs found from this include HDFS-11956, which fixes backwards compatibility with Hadoop 2 clients, and the revert of HDFS-11696, which broke the NN QJM HA setup.

Vijay is working on a test run with a fuller test suite (the results of which we can hopefully post soon).

My +1 to start,

Best,
Andrew

---------------------------------------------------------------------
To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
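The signature and checksum verification reported in these votes can be sketched in shell. The tarball name below is a stand-in for the real RC artifact (which lives under the RC0 URL above), so this example only exercises the md5sum compare mechanics on a locally created file; the gpg step is left as a comment because it needs the real artifact and the release manager's public key:

```shell
# Stand-in for the RC artifact; substitute the real tarball downloaded
# from the RC0 staging directory.
tarball=hadoop-3.0.0-alpha4.tar.gz
printf 'release-candidate-bytes' > "$tarball"

# Checksum check: recompute the digest and compare it with the published one.
md5sum "$tarball" > "$tarball.md5"   # stands in for the published digest file
md5sum -c "$tarball.md5"             # prints "hadoop-3.0.0-alpha4.tar.gz: OK"

# Signature check (needs the real .asc file and the signer's key imported):
#   gpg --verify "$tarball.asc" "$tarball"
```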
RE: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0
+1 (non-binding)

- Verified checksums and signatures
- Built from source and installed an HA cluster
- Ran basic shell operations
- Ran sample jobs
- Verified the HttpFSServerWebServer

-Brahma Reddy Battula

-----Original Message-----
From: Andrew Wang [mailto:andrew.w...@cloudera.com]
Sent: 30 June 2017 10:41
To: common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org; mapreduce-dev@hadoop.apache.org; yarn-...@hadoop.apache.org
Subject: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0
Re: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0
Thanks Andrew!

- Deployed binary artifacts in a pseudo-distributed cluster (MacOS Sierra, Java 1.8.0_91)
- Ran pi job
- Clicked around the web UIs
- Tried log aggregation
- Played a bit with HDFS
- Tried yarn top

+1 (binding)

- Robert
Re: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0
Thanks for the hard work Andrew!

- Built from source on Mac OS X 10.11.6 with Java 1.8.0_91
- Built from source on CentOS Linux 7.3.161, with Java 1.8.0_92, with and without native
- Deployed a 10-node cluster on docker containers
- Tested basic dfs operations
- Tested basic erasure coding (adding files, recovering corrupted files)
- Tested some dfsadmin operations: report, triggerblockreport

+1 (non-binding)

Thanks,
Hanisha
Re: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0
+1 (binding)

Ran the following tests:
* Deploy a pseudo cluster using tar ball, run pi.
* Verified MD5 of tar balls for both src and dist.
* Build src tarball with -Pdist,tar

Thanks Andrew for the efforts!

On Thu, Jul 6, 2017 at 3:44 PM, Andrew Wang wrote:
> Thanks all for the votes so far!
>
> I think we're still at a single binding +1 from myself, so I'll leave this
> vote open until we reach the minimum threshold of 3. I'm still hoping we
> can push the release out before the weekend.
>
> On Thu, Jul 6, 2017 at 2:58 PM, Vijaya Krishna Kalluru Subbarao <
> vij...@cloudera.com> wrote:
>
>> Ran Smokes and BVTs covering basic sanity testing (10+ tests ran) for all
>> these components:
>>
>>    - Mapreduce (compression, archives, pipes, JHS),
>>    - Avro (AvroMapreduce, HadoopAvro, HiveAvro, SqoopAvro),
>>    - HBase (Balancer, compression, ImportExport, Snapshots, Schema change),
>>    - Oozie (Hive, Pig, Spark),
>>    - Pig (PigAvro, PigParquet, PigCompression),
>>    - Search (SolrCtlBasic, SolrRequestForwading, SolrSSLConfiguration).
>>
>> +1 non-binding.
>>
>> Regards,
>> Vijay
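Several votes above follow the same recipe: build the dist tarball with -Pdist,tar, deploy a pseudo-distributed cluster, and run the single-task pi example. A sketch of that sequence follows; the build and start-up steps are comments because they need a real source checkout and a live cluster, and only the example-jar invocation (whose path assumes the 3.0.0-alpha4 dist layout) is constructed here:

```shell
# Build and deploy (sketch; run from a Hadoop source checkout):
#   mvn package -Pdist,tar -DskipTests
#   tar xzf hadoop-dist/target/hadoop-3.0.0-alpha4.tar.gz && cd hadoop-3.0.0-alpha4
#   sbin/start-dfs.sh && sbin/start-yarn.sh

# The single-task pi smoke test: 1 map task, 10 samples.
version=3.0.0-alpha4
examples_jar="share/hadoop/mapreduce/hadoop-mapreduce-examples-${version}.jar"
echo "bin/hadoop jar ${examples_jar} pi 1 10"
```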
Re: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0
- Verified all checksums and signatures
- Built from src on macOS 10.12.5 with Java 1.8.0u65
- Deployed single node pseudo cluster
- Successfully ran sleep and pi jobs
- Navigated the various UIs

+1 (non-binding)

Thanks,

Eric

On Thursday, July 6, 2017 3:31 PM, Aaron Fabbri wrote:

Thanks for the hard work on this! +1 (non-binding)

- Built from source tarball on OS X w/ Java 1.8.0_45.
- Deployed mini/pseudo cluster.
- Ran grep and wordcount examples.
- Poked around ResourceManager and JobHistory UIs.
- Ran all s3a integration tests in US West 2.
Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/

[Jul 5, 2017 6:10:57 PM] (liuml07) HDFS-12089. Fix ambiguous NN retry log message in WebHDFS. Contributed
[Jul 5, 2017 6:16:56 PM] (jzhuge) HADOOP-14608. KMS JMX servlet path not backwards compatible. Contributed

-1 overall

The following subsystems voted -1:
    compile mvninstall unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc javac

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

Failed junit tests:
    hadoop.ha.TestZKFailoverControllerStress
    hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
    hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
    hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
    hadoop.hdfs.web.TestWebHdfsTimeouts
    hadoop.hdfs.server.namenode.ha.TestHASafeMode
    hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService
    hadoop.yarn.server.nodemanager.TestNodeManagerShutdown
    hadoop.yarn.server.timeline.TestRollingLevelDB
    hadoop.yarn.server.timeline.TestTimelineDataManager
    hadoop.yarn.server.timeline.TestLeveldbTimelineStore
    hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore
    hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore
    hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
    hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore
    hadoop.yarn.server.resourcemanager.TestRMRestart
    hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
    hadoop.yarn.server.TestContainerManagerSecurity
    hadoop.yarn.client.api.impl.TestNMClient
    hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore
    hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient
    hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore
    hadoop.yarn.applications.distributedshell.TestDistributedShell
    hadoop.mapred.TestShuffleHandler
    hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService
    hadoop.yarn.sls.nodemanager.TestNMSimulator

Timed out junit tests:
    org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache
    org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands
    org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA
    org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA

mvninstall:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-mvninstall-root.txt [616K]

compile:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-compile-root.txt [20K]

cc:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-compile-root.txt [20K]

javac:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-compile-root.txt [20K]

unit:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-unit-hadoop-assemblies.txt [4.0K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [152K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [792K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [56K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt [68K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [76K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [324K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt [16K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage.txt [28K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/367/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
Re: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0
Thanks Andrew!

+1 (non-binding)

- Verified md5's, checked tarball sizes are reasonable
- Built source tarball and deployed a pseudo-distributed cluster with hdfs/kms
- Tested basic hdfs/kms operations
- Sanity checked webuis/logs

-Xiao

On Wed, Jul 5, 2017 at 10:33 PM, John Zhuge wrote:

> +1 (non-binding)
>
>    - Verified checksums and signatures of the tarballs
>    - Built source with native, Java 1.8.0_131 on Mac OS X 10.12.5
>    - Cloud connectors:
>       - A few S3A integration tests
>       - A few ADL live unit tests
>    - Deployed both binary and built source to a pseudo cluster, passed the
>      following sanity tests in insecure, SSL, and SSL+Kerberos mode:
>       - HDFS basic and ACL
>       - DistCp basic
>       - WordCount (skipped in Kerberos mode)
>       - KMS and HttpFS basic
>
> Thanks Andrew for the great effort!
>
> On Wed, Jul 5, 2017 at 1:33 PM, Eric Payne <erichadoo...@yahoo.com.invalid> wrote:
>
>> Thanks Andrew.
>> I downloaded the source, built it, and installed it onto a pseudo
>> distributed 4-node cluster.
>>
>> I ran mapred and streaming test cases, including sleep and wordcount.
>> +1 (non-binding)
>> -Eric
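The "basic HDFS operations" several voters mention usually amount to a short dfs shell sequence. The sketch below assumes a running pseudo-distributed cluster with `hdfs` on the PATH; since the commands cannot run without a live NameNode, they are collected in a variable and only printed here:

```shell
# Typical HDFS sanity sequence; each command would be run against a live
# pseudo-distributed cluster (paths are illustrative).
smoke='hdfs dfs -mkdir -p /tmp/smoke
hdfs dfs -put etc/hadoop/core-site.xml /tmp/smoke/
hdfs dfs -ls /tmp/smoke
hdfs dfs -cat /tmp/smoke/core-site.xml
hdfs dfs -rm -r /tmp/smoke'
printf '%s\n' "$smoke"
```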
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/456/

[Jul 5, 2017 10:35:18 AM] (vinayakumarb) HADOOP-13414. Hide Jetty Server version header in HTTP responses.
[Jul 5, 2017 6:10:57 PM] (liuml07) HDFS-12089. Fix ambiguous NN retry log message in WebHDFS. Contributed
[Jul 5, 2017 6:16:56 PM] (jzhuge) HADOOP-14608. KMS JMX servlet path not backwards compatible. Contributed

-1 overall

The following subsystems voted -1:
    findbugs unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

FindBugs : module:hadoop-hdfs-project/hadoop-hdfs-client
    Possible exposure of partially initialized object in org.apache.hadoop.hdfs.DFSClient.initThreadsNumForStripedReads(int) At DFSClient.java:[line 2888]
    org.apache.hadoop.hdfs.server.protocol.SlowDiskReports.equals(Object) makes inefficient use of keySet iterator instead of entrySet iterator At SlowDiskReports.java:[line 105]

FindBugs : module:hadoop-hdfs-project/hadoop-hdfs
    Possible null pointer dereference in org.apache.hadoop.hdfs.qjournal.server.JournalNode.getJournalsStatus() due to return value of called method Dereferenced at JournalNode.java:[line 302]
    org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setClusterId(String) unconditionally sets the field clusterId At HdfsServerConstants.java:[line 193]
    org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForce(int) unconditionally sets the field force At HdfsServerConstants.java:[line 217]
    org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForceFormat(boolean) unconditionally sets the field isForceFormat At HdfsServerConstants.java:[line 229]
    org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setInteractiveFormat(boolean) unconditionally sets the field isInteractiveFormat At HdfsServerConstants.java:[line 237]
    Possible null pointer dereference in org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocksHelper(File, File, int, HardLink, boolean, File, List) due to return value of called method Dereferenced at DataStorage.java:[line 1339]
    Possible null pointer dereference in org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager.purgeOldLegacyOIVImages(String, long) due to return value of called method Dereferenced at NNStorageRetentionManager.java:[line 258]
    Possible null pointer dereference in org.apache.hadoop.hdfs.server.namenode.NNUpgradeUtil$1.visitFile(Path, BasicFileAttributes) due to return value of called method Dereferenced at NNUpgradeUtil.java:[line 133]
    Useless condition: argv.length >= 1 at this point At DFSAdmin.java:[line 2085]
    Useless condition: numBlocks == -1 at this point At ImageLoaderCurrent.java:[line 727]

FindBugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
    Useless object stored in variable removedNullContainers of method org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List) At NodeStatusUpdaterImpl.java:[line 642]
    org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache() makes inefficient use of keySet iterator instead of entrySet iterator At NodeStatusUpdaterImpl.java:[line 719]
    Hard coded reference to an absolute pathname in