[jira] [Created] (HADOOP-16888) Support JDK11 in the precommit job
Akira Ajisaka created HADOOP-16888:
--------------------------------------

             Summary: Support JDK11 in the precommit job
                 Key: HADOOP-16888
                 URL: https://issues.apache.org/jira/browse/HADOOP-16888
             Project: Hadoop Common
          Issue Type: Sub-task
          Components: build
            Reporter: Akira Ajisaka
            Assignee: Akira Ajisaka

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
This week's Hadoop storage community online sync (APAC Mandarin)
Hi! It's that time again. I'd like to lead this week's APAC Mandarin community sync discussion. There are a few things to discuss/announce:

(1) user-zh mailing list.
(2) Fate of Hadoop 2.x / Hadoop 3.x adoption.
(3) Apache JIRAs pending reviews.

Zoom link: https://cloudera.zoom.us/j/880548968
Time/Date: Feb 26 10 PM (US West Coast) / Feb 27 2 PM (Beijing)
Past meeting minutes: https://docs.google.com/document/d/1jXM5Ujvf-zhcyw_5kiQVx6g-HeKe-YGnFS_1-qFXomI/edit?usp=sharing
Re: Hadoop 3.3 Release Plan Proposal
Hi All,

In line with the original 3.3.0 communication proposal dated 8th Jan 2020, I would like to provide more updates [1]. We are approaching the previously proposed code freeze date (March 10, 2020), so I would like to cut the 3.3 branch on *10th March* and point the existing *trunk to 3.4* if there are no issues.

*Current Release Plan:*

*Feature Freeze Date*: all features to merge by Feb 28, 2020 (almost done).
*Code Freeze Date*: March 10, 2020; blockers/critical only, no improvements, only blocker/critical bug fixes [2] (as of now only 12 such issues remain, all of them very old, and I am tracking them).
*Release Date*: March 15, 2020.

*Please let me know if I missed anything.*

1. https://cwiki.apache.org/confluence/display/HADOOP/Roadmap#Roadmap-3.3.0
2. project in (YARN, HADOOP, MAPREDUCE, HDFS) AND priority in (Blocker, Critical) AND resolution = Unresolved AND "Target Version/s" = 3.3.0 ORDER BY priority DESC

On Wed, Jan 22, 2020 at 11:22 PM Brahma Reddy Battula wrote:
>
> Wiki was updated for 3.3:
> https://cwiki.apache.org/confluence/display/HADOOP/Roadmap#Roadmap-3.3.0
>
> > I'll move out anything that isn't needed.
>
> Thanks, Steve.
>
> > We need to fix the shaded protobuf in Token issue to even get Spark to
> > compile.
>
> Looks like this is done: https://issues.apache.org/jira/browse/HADOOP-16621
>
> On Wed, Jan 8, 2020 at 7:41 PM Steve Loughran wrote:
>>
>> > 2. Features close to finish:
>> >
>> > * HADOOP-15620: Über-JIRA: S3A phase VI: Hadoop 3.3 features. (owner: Steve Loughran)
>> > * HADOOP-15763: Über-JIRA: ABFS phase II: Hadoop 3.3 features & fixes. (owner: Steve Loughran)
>> > * HADOOP-15619: Über-JIRA: S3Guard Phase IV: Hadoop 3.3 features. (owner: Steve Loughran)
>> >
>> > I'll move out anything that isn't needed.
>>
>> FWIW, most of these are in CDP 1.x, so there's been reasonable testing and
>> I've got some provisional tuning to do. That is, if things didn't work in
>> the test/production deployments, I'd know about the regressions (e.g.
>> HADOOP-16751).
>>
>> This is S3A and ABFS code; no idea about the rest, and inevitably the big
>> JAR changes will have surprises. We need to fix the shaded protobuf in
>> Token issue to even get Spark to compile.
>>
>> -Steve

--
--Brahma Reddy Battula
[jira] [Created] (HADOOP-16887) [OpenTracing] Add doc
Wei-Chiu Chuang created HADOOP-16887:
-------------------------------------

             Summary: [OpenTracing] Add doc
                 Key: HADOOP-16887
                 URL: https://issues.apache.org/jira/browse/HADOOP-16887
             Project: Hadoop Common
          Issue Type: Sub-task
            Reporter: Wei-Chiu Chuang

We should remove this doc https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Tracing.html and replace it with the OpenTracing usage in Hadoop.
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1421/

[Feb 24, 2020 2:47:01 PM] (ayushsaxena) HDFS-15166. Remove redundant field fStream in ByteStringLog. Contributed
[Feb 24, 2020 3:08:04 PM] (ayushsaxena) HDFS-15187. CORRUPT replica mismatch between namenodes after failover.
[Feb 24, 2020 4:28:00 PM] (github) HADOOP-16859: ABFS: Add unbuffer support to ABFS connector.
[Feb 24, 2020 6:45:34 PM] (github) HADOOP-16853. ITestS3GuardOutOfBandOperations failing on versioned S3
[Feb 24, 2020 8:45:49 PM] (snemeth) YARN-10157. FS-CS converter: initPropertyActions() is not called without
[Feb 24, 2020 8:54:07 PM] (snemeth) YARN-10135. FS-CS converter tool: issue warning on dynamic auto-create
[Feb 24, 2020 9:39:16 PM] (weichiu) HDFS-15174. Optimize ReplicaCachingGetSpaceUsed by reducing unnecessary
[Feb 25, 2020 2:08:13 AM] (tasanuma) HADOOP-16841. The description of
[Feb 25, 2020 4:47:52 AM] (github) YARN-10074. Update netty to 4.1.42Final in yarn-csi. Contributed by
[Feb 25, 2020 8:30:04 PM] (snemeth) YARN-10130. FS-CS converter: Do not allow output dir to be the same as
[Feb 25, 2020 8:48:16 PM] (snemeth) YARN-8767. TestStreamingStatus fails. Contributed by Andras Bokor
[Feb 25, 2020 9:28:50 PM] (weichiu) HDFS-14861. Reset LowRedundancyBlocks Iterator periodically. Contributed

-1 overall

The following subsystems voted -1:
    asflicense findbugs pathlen shadedclient unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

XML : Parsing Error(s):
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

FindBugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
    Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use the clone method. At TaskStatus.java:[lines 39-346]
    Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId. At WorkerId.java:[line 114]
    org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument. At WorkerId.java:[lines 114-115]

FindBugs : module:hadoop-cloud-storage-project/hadoop-cos
    Redundant null check of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String). At BufferPool.java:[line 66]
    org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer. At CosNInputStream.java:[line 87]
    Found reliance on default encoding (new String(byte[])) in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]). At CosNativeFileSystemStore.java:[line 199]
    Found reliance on default encoding (new String(byte[])) in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long). At CosNativeFileSystemStore.java:[line 178]
    org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream; the obligation to clean up the resource created at CosNativeFileSystemStore.java:[line 252] is not discharged.
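The FindBugs findings above are instances of well-known Java pitfalls: an equals() that assumes its argument's type, reliance on the platform default charset, and a stream that may escape cleanup. As a hedged illustration (the class and method names below are hypothetical stand-ins, not the actual mawo or hadoop-cos code), the patterns that typically clear these warnings look like:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class FindBugsPatterns {

    // Hypothetical stand-in for WorkerId: equals() must handle null and
    // foreign types before casting, or FindBugs flags it.
    static final class Id {
        private final String value;

        Id(String value) {
            this.value = value;
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) {
                return true;
            }
            if (!(o instanceof Id)) {  // false for null and for other types
                return false;
            }
            return value.equals(((Id) o).value);
        }

        @Override
        public int hashCode() {
            return value.hashCode();
        }
    }

    // "Reliance on default encoding": new String(byte[]) depends on the
    // platform charset; naming the charset makes the result deterministic.
    static String decode(byte[] raw) {
        return new String(raw, StandardCharsets.UTF_8);
    }

    // "May fail to clean up java.io.InputStream": try-with-resources closes
    // the stream on every path, including when an exception is thrown.
    static long countBytes(InputStream in) throws IOException {
        long n = 0;
        try (InputStream s = in) {
            while (s.read() != -1) {
                n++;
            }
        }
        return n;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(new Id("a").equals(null));                        // false
        System.out.println(decode("abc".getBytes(StandardCharsets.UTF_8)));  // abc
        System.out.println(countBytes(new ByteArrayInputStream(new byte[4])));
    }
}
```

These are generic sketches of the flagged bug categories, not patches for the modules named in the report.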
Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/607/

[Feb 24, 2020 2:48:10 PM] (ayushsaxena) HDFS-15166. Remove redundant field fStream in ByteStringLog. Contributed
[Feb 25, 2020 4:36:50 PM] (ericp) YARN-10140: TestTimelineAuthFilterForV2 fails due to login failures in
[Feb 25, 2020 5:33:20 PM] (kihwal) Revert "HDFS-6874. Add GETFILEBLOCKLOCATIONS operation to HttpFS.
[Feb 25, 2020 6:28:56 PM] (kihwal) HDFS-13404. Addendum: RBF:

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

XML : Parsing Error(s):
    hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
    hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

FindBugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
    Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean). At ColumnRWHelper.java:[line 335]

Failed junit tests:
    hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
    hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
    hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
    hadoop.yarn.client.api.impl.TestAMRMClient
    hadoop.registry.secure.TestSecureLogins

cc: https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/607/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt [4.0K]
javac: https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/607/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt [328K]
cc: https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/607/artifact/out/diff-compile-cc-root-jdk1.8.0_242.txt [4.0K]
javac: https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/607/artifact/out/diff-compile-javac-root-jdk1.8.0_242.txt [308K]
checkstyle: https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/607/artifact/out/diff-checkstyle-root.txt [16M]
hadolint: https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/607/artifact/out/diff-patch-hadolint.txt [4.0K]
pathlen: https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/607/artifact/out/pathlen.txt [12K]
pylint: https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/607/artifact/out/diff-patch-pylint.txt [24K]
shellcheck: https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/607/artifact/out/diff-patch-shellcheck.txt [56K]
shelldocs: https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/607/artifact/out/diff-patch-shelldocs.txt [8.0K]
whitespace: https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/607/artifact/out/whitespace-eol.txt [12M]
    https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/607/artifact/out/whitespace-tabs.txt [1.3M]
xml: https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/607/artifact/out/xml.txt [12K]
findbugs: https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/607/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html [8.0K]
javadoc: https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/607/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt [16K]
    https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/607/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_242.txt [1.1M]
unit: https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/607/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [232K]
    https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/607/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt [12K]
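For context on the one FindBugs warning in this report: "boxed value is unboxed and then immediately reboxed" flags code that converts a wrapper object to a primitive and straight back to a wrapper. A minimal sketch of the flagged pattern and its fix (the class and method names are hypothetical, not the actual ColumnRWHelper code):

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

public class ReboxSketch {

    // The pattern FindBugs flags: the Long key is unboxed via longValue()
    // and immediately reboxed by Long.valueOf() when stored back.
    static NavigableMap<Long, String> copyFlagged(NavigableMap<Long, String> cells) {
        NavigableMap<Long, String> out = new TreeMap<Long, String>();
        for (Map.Entry<Long, String> e : cells.entrySet()) {
            out.put(Long.valueOf(e.getKey().longValue()), e.getValue());  // unbox + rebox
        }
        return out;
    }

    // The fix: reuse the already-boxed key; no conversion round trip.
    static NavigableMap<Long, String> copyClean(NavigableMap<Long, String> cells) {
        NavigableMap<Long, String> out = new TreeMap<Long, String>();
        for (Map.Entry<Long, String> e : cells.entrySet()) {
            out.put(e.getKey(), e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        NavigableMap<Long, String> cells = new TreeMap<Long, String>();
        cells.put(3L, "v1");
        cells.put(1L, "v2");
        // Both copies hold the same mappings; only the boxing traffic differs.
        System.out.println(copyClean(cells).equals(copyFlagged(cells)));  // true
    }
}
```

The fix is behavior-preserving; it only removes the needless object churn that the checker objects to.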