Re: This week's Hadoop storage community call
+cc: yarn-dev/mapreduce-dev since it contains information about the Hadoop 3.3.0 release.

Thank you all for the discussion, and thanks Wei-Chiu for the summary.

> I created a dashboard to track the Hadoop 3.3.0 release:
> https://issues.apache.org/jira/secure/Dashboard.jspa?selectPageId=12335311
> It is public to anyone, and editable for committers.

This dashboard looks very nice. Thanks.

> Nanda raised a question about how future release managers would access ARM
> machines in order to make ARM release bits.

Making an ARM release seems difficult as an ASF official release, because both
physical and administrative access to the build hardware is required:
http://www.apache.org/legal/release-policy.html#owned-controlled-hardware

Regards,
Akira

On Fri, Mar 6, 2020 at 3:38 AM Wei-Chiu Chuang wrote:

> Summary:
>
> I created a dashboard to track the Hadoop 3.3.0 release:
> https://issues.apache.org/jira/secure/Dashboard.jspa?selectPageId=12335311
> It is public to anyone, and editable for committers.
>
> We discussed the current progress. 3.3.0 is looking good. Brahma is
> planning to cut a branch by the end of next week (correct me if I got it
> wrong).
> Additionally, the ARM build has been running successfully these days, and
> Brahma wants to put up an ARM version of the release bits for Hadoop 3.3.0.
> Nanda raised a question about how future release managers would access ARM
> machines in order to make ARM release bits.
>
> We also discussed the branch-2.9 EOL proposal. Brahma suggested inviting
> the user@ mailing list to join the discussion.
>
> On Mon, Mar 2, 2020 at 1:59 PM Wei-Chiu Chuang wrote:
>
> > Hi!
> >
> > I'd like to use this week's community call as an opportunity to drive the
> > releases forward. RMs: Gabor and Brahma, would you be able to join the
> > call?
> >
> > March 4th (Wednesday): US Pacific 10am, GMT 6pm, India 11:30pm
> >
> > Please join via Zoom:
> > https://cloudera.zoom.us/j/880548968
> >
> > Past meeting minutes:
> > https://docs.google.com/document/d/1jXM5Ujvf-zhcyw_5kiQVx6g-HeKe-YGnFS_1-qFXomI/edit
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1429/

[Mar 4, 2020 2:07:31 AM] (github) HADOOP-16899. Update HdfsDesign.md to reduce ambiguity. (#1871)
[Mar 4, 2020 6:02:54 PM] (inigoiri) HDFS-15204. TestRetryCacheWithHA testRemoveCacheDescriptor fails
[Mar 4, 2020 6:13:23 PM] (inigoiri) HDFS-14977. Quota Usage and Content summary are not same in Truncate
[Mar 4, 2020 11:31:57 PM] (ebadger) YARN-10173. Make pid file generation timeout configurable in case of

-1 overall

The following subsystems voted -1:
    asflicense findbugs pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running
(runtime bigger than 1h 0m 0s):
    unit

Specific tests:

XML parsing error(s):

    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

FindBugs, module:hadoop-cloud-storage-project/hadoop-cos:

    Redundant nullcheck of dir, which is known to be non-null, in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) at BufferPool.java:[line 66]
    org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer at CosNInputStream.java:[line 87]
    Found reliance on default encoding (new String(byte[])) in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]) at CosNativeFileSystemStore.java:[line 199]
    Found reliance on default encoding (new String(byte[])) in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long) at CosNativeFileSystemStore.java:[line 178]
    org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream; the obligation to clean up the resource created at CosNativeFileSystemStore.java:[line 252] is not discharged

Failed junit tests:

    hadoop.hdfs.server.namenode.snapshot.TestRandomOpsWithSnapshots
    hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
    hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate
    hadoop.hdfs.server.namenode.TestFileContextXAttr
    hadoop.hdfs.server.namenode.TestQuotaWithStripedBlocksWithRandomECPolicy
    hadoop.hdfs.server.namenode.TestDecommissioningStatus
    hadoop.hdfs.server.namenode.snapshot.TestSnapshot
    hadoop.hdfs.server.namenode.TestFSNamesystemLock
    hadoop.hdfs.server.namenode.TestFsck
    hadoop.hdfs.server.namenode.TestReencryptionWithKMS
    hadoop.hdfs.server.namenode.snapshot.TestSnapshotNameWithInvalidCharacters
    hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion
    hadoop.hdfs.server.namenode.TestReencryption
    hadoop.hdfs.server.namenode.TestRefreshNamenodeReplicationConfig
    hadoop.hdfs.server.namenode.sps.TestStoragePolicySatisfierWithStripedFile
    hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots
    hadoop.hdfs.server.namenode.TestFSDirectory
    hadoop.hdfs.server.namenode.TestFileTruncate
    hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot
    hadoop.hdfs.server.namenode.snapshot.TestCheckpointsWithSnapshots
    hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics
    hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier
    hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
    hadoop.hdfs.server.namenode.snapshot.TestFileContextSnapshot
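[Editor's note: the hadoop-cos FindBugs warnings above name two common Java pitfalls: decoding bytes with `new String(byte[])` (which silently uses the platform-default charset) and opening an `InputStream` that may not be closed on every path. The sketch below is only an illustration of those patterns and their usual fixes; it is not the actual hadoop-cos code, and `FindBugsFixes`, `decode`, and `countBytes` are hypothetical names.]

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class FindBugsFixes {

    // "Found reliance on default encoding ... new String(byte[])":
    // `new String(bytes)` uses the JVM's platform-default charset, so the
    // result can differ between hosts. The usual fix is an explicit charset.
    static String decode(byte[] bytes) {
        return new String(bytes, StandardCharsets.UTF_8); // explicit, portable
    }

    // "may fail to clean up java.io.InputStream": if an exception is thrown
    // before close(), the stream leaks. try-with-resources closes it on
    // every path, which discharges the cleanup obligation FindBugs tracks.
    static int countBytes(InputStream in) throws IOException {
        try (InputStream stream = in) { // closed even if read() throws
            int n = 0;
            while (stream.read() != -1) {
                n++;
            }
            return n;
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "hello".getBytes(StandardCharsets.UTF_8);
        System.out.println(decode(data));                               // hello
        System.out.println(countBytes(new ByteArrayInputStream(data))); // 5
    }
}
```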
Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/615/

[Mar 4, 2020 6:07:08 PM] (inigoiri) HDFS-15204. TestRetryCacheWithHA testRemoveCacheDescriptor fails

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running
(runtime bigger than 1h 0m 0s):
    unit

Specific tests:

XML parsing error(s):

    hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
    hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

FindBugs, module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client:

    Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) at ColumnRWHelper.java:[line 335]

Failed junit tests:

    hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
    hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
    hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
    hadoop.registry.secure.TestSecureLogins
    hadoop.yarn.client.api.impl.TestAMRMProxy
    hadoop.yarn.client.api.impl.TestAMRMClient

cc:
    https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/615/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt [4.0K]
javac:
    https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/615/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt [328K]
cc:
    https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/615/artifact/out/diff-compile-cc-root-jdk1.8.0_242.txt [4.0K]
javac:
    https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/615/artifact/out/diff-compile-javac-root-jdk1.8.0_242.txt [308K]
checkstyle:
    https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/615/artifact/out/diff-checkstyle-root.txt [16M]
hadolint:
    https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/615/artifact/out/diff-patch-hadolint.txt [4.0K]
pathlen:
    https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/615/artifact/out/pathlen.txt [12K]
pylint:
    https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/615/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
    https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/615/artifact/out/diff-patch-shellcheck.txt [56K]
shelldocs:
    https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/615/artifact/out/diff-patch-shelldocs.txt [8.0K]
whitespace:
    https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/615/artifact/out/whitespace-eol.txt [12M]
    https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/615/artifact/out/whitespace-tabs.txt [1.3M]
xml:
    https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/615/artifact/out/xml.txt [12K]
findbugs:
    https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/615/artifact/out/branch-findbugs-hadoop-common-project_hadoop-annotations.txt [0]
    https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/615/artifact/out/branch-findbugs-hadoop-maven-plugins.txt [0]
    https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/615/artifact/out/branch-findbugs-hadoop-common-project_hadoop-minikdc.txt [0]
    https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/615/artifact/out/branch-findbugs-hadoop-common-project_hadoop-auth.txt [4.0K]
    https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/615/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html [8.0K]
javadoc:
    https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/615/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt [16K]
    https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/615/artifact/out/diff-javadoc-jav
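[Editor's note: the timelineservice-hbase warning above ("Boxed value is unboxed and then immediately reboxed") refers to a pattern like the sketch below. This is an illustration only, not the actual ColumnRWHelper code; `ReboxDemo`, `reboxed`, and `direct` are hypothetical names.]

```java
import java.util.NavigableMap;
import java.util.TreeMap;

public class ReboxDemo {

    // The pattern FindBugs flags: get() returns a boxed Long, longValue()
    // unboxes it to a primitive, and assigning back to a Long immediately
    // reboxes it into a brand-new object, for no benefit.
    static Long reboxed(NavigableMap<Long, Long> results, long key) {
        Long value = results.get(key).longValue(); // unbox + immediate rebox
        return value;
    }

    // The usual fix: keep the boxed reference as-is (or, if arithmetic is
    // needed, stay primitive throughout and box once at the end).
    static Long direct(NavigableMap<Long, Long> results, long key) {
        return results.get(key);
    }

    public static void main(String[] args) {
        NavigableMap<Long, Long> m = new TreeMap<>();
        m.put(1L, 42L);
        System.out.println(direct(m, 1L)); // 42
    }
}
```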
Re: [ANNOUNCE] New Apache Hadoop Committer - Stephen O'Donnell
Congratulations Stephen!

- Hexiaoqiao

On Thu, Mar 5, 2020 at 11:08 AM Masatake Iwasaki <iwasak...@oss.nttdata.co.jp> wrote:

> Congratulations!
>
> Masatake Iwasaki
>
> On 2020/03/04 5:11, Wei-Chiu Chuang wrote:
> > In bcc: general@
> >
> > It's my pleasure to announce that Stephen O'Donnell has been elected as
> > a committer on the Apache Hadoop project, recognizing his continued
> > contributions to the project.
> >
> > Please join me in congratulating him.
> >
> > Hearty congratulations & welcome aboard, Stephen!
> >
> > Wei-Chiu Chuang
> > (On behalf of the Hadoop PMC)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org