Re: [VOTE] Release Apache Hadoop Thirdparty 1.2.0 (RC1)
cc @PJ Fanning @Ayush Saxena @Steve Loughran @Takanobu Asanuma @Shuyan Zhang @inigo...@apache.org

On Mon, Feb 5, 2024 at 11:30 AM Xiaoqiao He wrote:
> +1(binding).
>
> I checked the following items:
> - [X] Download links are valid.
> - [X] Checksums and PGP signatures are valid.
> - [X] LICENSE and NOTICE files are correct for the repository.
> - [X] Source code artifacts have correct names matching the current release.
> - [X] All files have license headers if necessary.
> - [X] Building is OK using `mvn clean install` on JDK_1.8.0_202.
> - [X] Built Hadoop trunk successfully with updated thirdparty (including the updated protobuf shaded path).
> - [X] No difference between tag and release src tar.
>
> Good Luck!
>
> Best Regards,
> - He Xiaoqiao
>
> On Sun, Feb 4, 2024 at 10:29 PM slfan1989 wrote:
>> Hi folks,
>>
>> Xiaoqiao He and I have put together a release candidate (RC1) for Hadoop Thirdparty 1.2.0.
>>
>> The RC is available at:
>> https://dist.apache.org/repos/dist/dev/hadoop/hadoop-thirdparty-1.2.0-RC1
>>
>> The RC tag is
>> https://github.com/apache/hadoop-thirdparty/releases/tag/release-1.2.0-RC1
>>
>> The maven artifacts are staged at
>> https://repository.apache.org/content/repositories/orgapachehadoop-1401
>>
>> Compared to 1.1.1, there are three additional fixes:
>>
>> HADOOP-18197. Upgrade Protobuf-Java to 3.21.12
>> https://github.com/apache/hadoop-thirdparty/pull/26
>>
>> HADOOP-18921. Upgrade to avro 1.11.3
>> https://github.com/apache/hadoop-thirdparty/pull/24
>>
>> HADOOP-18843. Guava version 32.0.1 bump to fix CVE-2023-2976 (#23)
>> https://github.com/apache/hadoop-thirdparty/pull/23
>>
>> You can find my public key at:
>> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>>
>> Best Regards,
>> Shilun Fan.
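A checklist item like "Checksums and PGP signatures are valid" above boils down to hashing the downloaded artifact and comparing the digest against the published `.sha512` file. A minimal sketch in Python (a hypothetical helper, not part of any Hadoop release tooling):

```python
import hashlib

def sha512_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a release artifact through SHA-512 so large tarballs do not
    need to fit in memory; equivalent to `shasum -a 512 <path>`."""
    digest = hashlib.sha512()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()
```

Compare the result against the hex digest in the artifact's `.sha512` file. PGP signatures still need `gpg --verify <artifact>.asc <artifact>` after importing the KEYS file linked above.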
Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/625/

[Feb 3, 2024, 11:20:04 AM] (github) HDFS-17369. Add uuid into datanode info for NameNodeMXBean (#6521) Contributed by Haiyang Hu.
[Feb 3, 2024, 11:26:30 AM] (github) HDFS-17353. Fix failing RBF module tests. (#6491) Contributed by Alexander Bogdanov
[Feb 3, 2024, 11:34:42 AM] (github) YARN-11362: Fix several typos in YARN codebase of misspelled resource (#6474) Contributed by EremenkoValentin.
[Feb 3, 2024, 2:48:52 PM] (github) HADOOP-19049. Fix StatisticsDataReferenceCleaner classloader leak (#6488)

-1 overall

The following subsystems voted -1:
    blanks hadolint mvnsite pathlen spotbugs unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

XML : Parsing Error(s):
    hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

spotbugs : module:hadoop-hdfs-project/hadoop-hdfs
    Redundant nullcheck of oldLock, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory) At DataStorage.java:[line 695]
    Redundant nullcheck of metaChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long, FileInputStream, FileChannel, String) At MappableBlockLoader.java:[line 138]
    Redundant nullcheck of blockChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) At MemoryMappableBlockLoader.java:[line 75]
    Redundant nullcheck of blockChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) At NativePmemMappableBlockLoader.java:[line 85]
    Redundant nullcheck of metaChannel, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$PmemMappedRegion, long, FileInputStream, FileChannel, String) At NativePmemMappableBlockLoader.java:[line 130]
    org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager$UserCounts doesn't override java.util.ArrayList.equals(Object) At RollingWindowManager.java:[line 1]

spotbugs : module:hadoop-yarn-project/hadoop-yarn
    Redundant nullcheck of it, which is known to be non-null in org.apache.hadoop.yarn.server.nodemanager.containermanager
[jira] [Created] (HADOOP-19067) Allow tag passing to AWS Credential Provider
Jason Martin created HADOOP-19067:
-
Summary: Allow tag passing to AWS Credential Provider
Key: HADOOP-19067
URL: https://issues.apache.org/jira/browse/HADOOP-19067
Project: Hadoop Common
Issue Type: Improvement
Components: fs/s3
Affects Versions: 3.3.6
Reporter: Jason Martin

https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/AssumedRoleCredentialProvider.java#L131-L133 passes a session name and role ARN to AssumeRoleRequest. The AWS AssumeRole API also supports passing a list of tags: https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/sts/model/AssumeRoleRequest.html#tags()

These tags could be used by platforms to enrich the data encoded into CloudTrail entries, providing better information about the client. For example, a notebook-based platform could encode the notebook / job name / invoker-id in these tags, enabling more granular access controls and leaving a richer breadcrumb trail of what operations are being performed. This is particularly useful in larger environments where jobs do not get individual roles to assume and there is a desire to track which jobs/notebooks are reading a given set of files in S3.

--
This message was sent by Atlassian Jira (v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
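The issue above concerns the Java AWS SDK v2 call inside hadoop-aws. Purely as an illustration of the underlying STS AssumeRole API, here is a sketch of building such a request with session tags attached (parameter naming follows boto3's `sts.assume_role`; the role ARN and tag values are hypothetical):

```python
def assume_role_request(role_arn: str, session_name: str, tags: dict) -> dict:
    """Build the parameter map for an STS AssumeRole call, attaching
    session tags (the AssumeRole API's Tags field) so CloudTrail entries
    record who or what is behind the assumed session."""
    return {
        "RoleArn": role_arn,
        "RoleSessionName": session_name,
        "Tags": [{"Key": k, "Value": v} for k, v in sorted(tags.items())],
    }

# With boto3, the map could be passed straight through, e.g.:
#   boto3.client("sts").assume_role(**assume_role_request(
#       "arn:aws:iam::123456789012:role/notebook-reader",  # hypothetical ARN
#       "notebook-session",
#       {"notebook": "churn-analysis", "invoker-id": "jmartin"}))
```

In the Java SDK v2 the equivalent would be `AssumeRoleRequest.builder().tags(...)`, which is the hook the issue proposes exposing through the credential provider.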
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1493/

No changes

-1 overall

The following subsystems voted -1:
    blanks hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

XML : Parsing Error(s):
    hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

Failed junit tests:
    hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem

cc: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1493/artifact/out/results-compile-cc-root.txt [96K]
javac: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1493/artifact/out/results-compile-javac-root.txt [12K]
blanks: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1493/artifact/out/blanks-eol.txt [15M]
    https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1493/artifact/out/blanks-tabs.txt [2.0M]
checkstyle: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1493/artifact/out/results-checkstyle-root.txt [13M]
hadolint: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1493/artifact/out/results-hadolint.txt [24K]
pathlen: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1493/artifact/out/results-pathlen.txt [16K]
pylint: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1493/artifact/out/results-pylint.txt [20K]
shellcheck: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1493/artifact/out/results-shellcheck.txt [24K]
xml: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1493/artifact/out/xml.txt [24K]
javadoc: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1493/artifact/out/results-javadoc-javadoc-root.txt [244K]
unit: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1493/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt [440K]

Powered by Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org
[jira] [Resolved] (HADOOP-18993) Allow to not isolate S3AFileSystem classloader when needed
[ https://issues.apache.org/jira/browse/HADOOP-18993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-18993.
-
Fix Version/s: 3.5.0
Resolution: Fixed

> Allow to not isolate S3AFileSystem classloader when needed
> -
> Key: HADOOP-18993
> URL: https://issues.apache.org/jira/browse/HADOOP-18993
> Project: Hadoop Common
> Issue Type: Improvement
> Components: hadoop-thirdparty
> Affects Versions: 3.3.6
> Reporter: Antonio Murgia
> Assignee: Antonio Murgia
> Priority: Minor
> Labels: pull-request-available
> Fix For: 3.5.0
>
> In HADOOP-17372 the S3AFileSystem forces the configuration classloader to be the same as the one that loaded S3AFileSystem. This makes it impossible for Spark applications to load third-party credential providers from user jars.
> I propose adding a configuration key {{fs.s3a.extensions.isolated.classloader}} with a default value of {{true}} that, if set to {{false}}, will skip setting the classloader.

--
This message was sent by Atlassian Jira (v8.20.10#820010)
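The toggle described in the issue above can be sketched roughly as follows. The actual change is Java inside S3AFileSystem; this Python rendering and the function name are illustrative only, though the config key and default come from the issue text:

```python
ISOLATED_CLASSLOADER_KEY = "fs.s3a.extensions.isolated.classloader"

def classloader_for_extensions(conf: dict, fs_loader, context_loader):
    """With isolation on (the default), extension classes resolve against
    the loader that loaded S3AFileSystem; with it off, the caller's context
    classloader stays in effect, so user-supplied jars (e.g. Spark --jars
    carrying a custom credential provider) remain visible."""
    isolated = str(conf.get(ISOLATED_CLASSLOADER_KEY, "true")).lower() == "true"
    return fs_loader if isolated else context_loader
```

The design point is that isolation (HADOOP-17372) protects against classpath clashes in most deployments, so it stays on by default and only applications that need user-jar visibility opt out.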
Re: [VOTE] Release Apache Hadoop Thirdparty 1.2.0 RC0
I agree with your idea. We can apply hadoop-thirdparty-1.2.0 to the 3.3.x line after its release to mitigate security issues. hadoop-thirdparty-1.2.0 will be used in hadoop-3.4.0; once it is released, we will incorporate it into hadoop-3.4.0-RC2.

Apache Hadoop Thirdparty 1.2.0 RC1 is now being prepared for voting, and I look forward to your review and vote.

Best Regards,
Shilun Fan.

Original From: "Steve Loughran" <ste...@cloudera.com.INVALID>
Date: 2024/2/5 22:28
To: "PJ Fanning" <fannin...@apache.org>
CC: "common-dev" <common-dev@hadoop.apache.org>
Subject: Re: [VOTE] Release Apache Hadoop Thirdparty 1.2.0 RC0

I'd like to get a 3.3.x out with the release too, so as to end the emails we get to security@ listing everything someone's security scanner has found and demanding a timeline for a fix. Actually I should get back to the last such reporter and ask them to test the new RC and 3.4.x, on the basis that they will be expected to upgrade, and now is the chance to identify any problems.

On Wed, 31 Jan 2024 at 20:13, PJ Fanning wrote:
> +1 (non-binding)
>
> * I validated the checksum and signature on the src tgz
> * LICENSE/NOTICE present
> * ASF headers
> * no unexpected binaries
> * can build using mvn
> * tested the thirdparty protobuf jar in the hadoop main build
>
> Is the idea that there will be a Hadoop 3.4.0 RC2 that uses the thirdparty jars after they are released?
>
> On 2024/01/31 02:16:47 slfan1989 wrote:
>> Thank you for the review and vote! Looking forward to other folks helping with voting and verification.
>>
>> Best Regards,
>> Shilun Fan.
>>
>> On Tue, Jan 30, 2024 at 6:20 PM Xiaoqiao He wrote:
>>> Thanks Shilun for driving it and making it happen.
>>>
>>> +1(binding).
>>>
>>> [x] Checksums and PGP signatures are valid.
>>> [x] LICENSE files exist.
>>> [x] NOTICE is included.
>>> [x] Rat check is ok: `mvn clean apache-rat:check`
>>> [x] Built from source works well: `mvn clean install`
>>> [x] Built Hadoop trunk with updated thirdparty successfully (including the updated protobuf shaded path).
>>>
>>> BTW, hadoop-thirdparty-1.2.0 will be included in release-3.4.0; hope we could finish this vote before 2024/02/06 (UTC) if there are no concerns. Thanks all.
>>>
>>> Best Regards,
>>> - He Xiaoqiao
>>>
>>> On Mon, Jan 29, 2024 at 10:42 PM slfan1989 wrote:
>>>> Hi folks,
>>>>
>>>> Xiaoqiao He and I have put together a release candidate (RC0) for Hadoop Thirdparty 1.2.0.
>>>>
>>>> The RC is available at:
>>>> https://dist.apache.org/repos/dist/dev/hadoop/hadoop-thirdparty-1.2.0-RC0
>>>>
>>>> The RC tag is
>>>> https://github.com/apache/hadoop-thirdparty/releases/tag/release-1.2.0-RC0
>>>>
>>>> The maven artifacts are staged at
>>>> https://repository.apache.org/content/repositories/orgapachehadoop-1398
>>>>
>>>> Compared to 1.1.1, there are three additional fixes:
>>>>
>>>> HADOOP-18197. Upgrade Protobuf-Java to 3.21.12
>>>> https://github.com/apache/hadoop-thirdparty/pull/26
>>>>
>>>> HADOOP-18921. Upgrade to avro 1.11.3
>>>> https://github.com/apache/hadoop-thirdparty/pull/24
>>>>
>>>> HADOOP-18843. Guava version 32.0.1 bump to fix CVE-2023-2976
>>>> https://github.com/apache/hadoop-thirdparty/pull/23
>>>>
>>>> You can find my public key at:
>>>> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>>>>
>>>> Best Regards,
>>>> Shilun Fan.
Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1293/

No changes

-1 overall

The following subsystems voted -1:
    asflicense hadolint mvnsite pathlen unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

Failed junit tests:
    hadoop.fs.TestFileUtil
    hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
    hadoop.hdfs.TestLeaseRecovery2
    hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain
    hadoop.hdfs.TestFileLengthOnClusterRestart
    hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap
    hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
    hadoop.fs.viewfs.TestViewFileSystemHdfs
    hadoop.hdfs.server.federation.router.TestRouterQuota
    hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat
    hadoop.hdfs.server.federation.resolver.order.TestLocalResolver
    hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver
    hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
    hadoop.mapreduce.lib.input.TestLineRecordReader
    hadoop.mapred.TestLineRecordReader
    hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter
    hadoop.resourceestimator.service.TestResourceEstimatorService
    hadoop.resourceestimator.solver.impl.TestLpSolver
    hadoop.yarn.sls.TestSLSRunner
    hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceAllocator
    hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceHandlerImpl
    hadoop.yarn.server.resourcemanager.TestClientRMService
    hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore
    hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker

cc: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1293/artifact/out/diff-compile-cc-root.txt [4.0K]
javac: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1293/artifact/out/diff-compile-javac-root.txt [488K]
checkstyle: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1293/artifact/out/diff-checkstyle-root.txt [14M]
hadolint: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1293/artifact/out/diff-patch-hadolint.txt [4.0K]
mvnsite: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1293/artifact/out/patch-mvnsite-root.txt [572K]
pathlen: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1293/artifact/out/pathlen.txt [12K]
pylint: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1293/artifact/out/diff-patch-pylint.txt [20K]
shellcheck: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1293/artifact/out/diff-patch-shellcheck.txt [72K]
whitespace: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1293/artifact/out/whitespace-eol.txt [12M]
    https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1293/artifact/out/whitespace-tabs.txt [1.3M]
javadoc: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1293/artifact/out/patch-javadoc-root.txt [36K]
unit: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1293/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [220K]
    https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1293/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [1.8M]
    https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1293/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [36K]
    https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1293/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt [16K]
    https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1293/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt [104K]
    https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1293/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt [20K]
    https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1293/artifact/out/patch-unit-hadoop-tools_hadoop-resourceestimator.txt [16K]
    https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1293/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt [28K]
    https://ci-ha