Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/196/

[Jul 5, 2021 10:19:13 AM] (noreply) HADOOP-17250 Lot of short reads can be merged with readahead. (#3110)
[Jul 5, 2021 8:07:12 PM] (noreply) HADOOP-17402. Add GCS config to the core-site (#2638)
[Jul 6, 2021 1:11:03 AM] (noreply) HADOOP-17749. Remove lock contention in SelectorPool of SocketIOWithTimeout (#3080)
[Jul 6, 2021 2:31:22 AM] (noreply) HDFS-16110. Remove unused method reportChecksumFailure in DFSClient (#3174)
[Jul 6, 2021 5:56:52 PM] (Ayush Saxena) HDFS-16101. Remove unuse variable and IoException in ProvidedStorageMap. Contributed by lei w.
[Jul 7, 2021 2:07:10 AM] (noreply) HADOOP-17775. Remove JavaScript package from Docker environment. (#3137)
[Jul 7, 2021 3:38:15 AM] (noreply) MAPREDUCE-7351 - CleanupJob during handle of SIGTERM signal (#3176)

-1 overall

The following subsystems voted -1:
    blanks mvnsite pathlen spotbugs unit xml

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

   XML :

      Parsing Error(s):
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

   spotbugs :

      module:hadoop-hdfs-project/hadoop-hdfs
        Redundant nullcheck of oldLock, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory): Redundant null check at DataStorage.java:[line 695]
        org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager$UserCounts doesn't override java.util.ArrayList.equals(Object) At RollingWindowManager.java:[line 1]

      module:hadoop-yarn-project/hadoop-yarn
        Redundant nullcheck of it, which is known to be non-null in org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker, NMStateStoreService$LocalResourceTrackerState): Redundant null check at ResourceLocalizationService.java:[line 343]
        Redundant nullcheck of it, which is known to be non-null in org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker, NMStateStoreService$LocalResourceTrackerState): Redundant null check at ResourceLocalizationService.java:[line 356]
        Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 333]

      module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server
        Redundant nullcheck of it, which is known to be non-null in org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker, NMStateStoreService$LocalResourceTrackerState): Redundant null check at ResourceLocalizationService.java:
[jira] [Created] (YARN-10851) Tez session close does not interrupt yarn's async thread
Qihong Wu created YARN-10851:
--------------------------------

Summary: Tez session close does not interrupt yarn's async thread
Key: YARN-10851
URL: https://issues.apache.org/jira/browse/YARN-10851
Project: Hadoop YARN
Issue Type: Bug
Affects Versions: 2.10.1, 2.8.5
Environment: An HA cluster running YARN 2.8.5 configured with Tez, where RM1 is not the active RM
Reporter: Qihong Wu
Attachments: hive.log

Hi, I would like to ask for expert knowledge on YARN's behavior when handling `InterruptedIOException`. The issue occurs on an HA cluster where RM1 is NOT the active RM, so if a YARN request made to RM1 fails, an RM failover should happen. However, if an interrupted exception is thrown while connecting to RM1, the thread should [bail out|https://dzone.com/articles/how-to-handle-the-interruptedexception] as soon as possible to [respect the interrupt request|https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ExecutorService.html#shutdownNow--], rather than moving on to another RM. Instead, I found that my application (Hive), after throwing `InterruptedIOException` when the connection to RM1 failed, continued on to RM2. How does YARN handle `InterruptedIOException`? Shouldn't the async thread get interrupted and shut down when tez close() triggers the interrupt request?

*Reproduction steps:*
1. Use an HA cluster that runs YARN 2.8.5 and is configured with Tez.
2. Make sure RM1 is not the active RM by checking `yarn rmadmin -getAllServiceState`. If it is, manually [transition RM2 to be the active RM|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html#Admin_commands].
3. Apply failover-retry properties to yarn-site.xml:
{quote}
yarn.client.failover-retries 4
yarn.client.failover-retries-on-socket-timeouts 4
yarn.client.failover-max-attempts 4
{quote}
4. Run a simple application against the yarn-client (for example, a simple Hive DDL command):
{quote}hive --hiveconf hive.root.logger=TRACE,console -e "create table tez_test (id int, name string);"{quote}
5. In the application's log (for example, hive.log), you can find that `RetryInvocationHandler` captured the `InterruptedIOException` while the request was talking to rm1, but the thread did not bail out immediately; it continued moving on to rm2.

*More information:*
The interrupted exception is triggered via [TezSessionState#close|https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionState.java#L689] and [Future#cancel|https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/Future.html#cancel-boolean-].

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
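The expected bail-out behavior can be sketched in isolation. The following is a minimal, hypothetical model of a failover retry loop (it is not YARN's actual RetryInvocationHandler or failover-proxy code; the `Rm` interface and class names are invented for illustration): on an `InterruptedIOException` it restores the thread's interrupt status and stops, instead of failing over to the next RM.

```java
import java.io.InterruptedIOException;
import java.util.List;

public class FailoverSketch {
    // Hypothetical stand-in for a connection attempt to one ResourceManager.
    interface Rm {
        String connect() throws Exception;
    }

    // Try each RM in turn, but bail out immediately on interruption instead
    // of failing over to the next RM (the behavior the reporter expected).
    static String connectWithFailover(List<Rm> rms) throws Exception {
        Exception last = null;
        for (Rm rm : rms) {
            try {
                return rm.connect();
            } catch (InterruptedIOException e) {
                // Respect the interrupt: restore the flag and stop retrying.
                Thread.currentThread().interrupt();
                throw e;
            } catch (Exception e) {
                last = e; // ordinary failure: fail over to the next RM
            }
        }
        throw last;
    }
}
```

Under this model, an interrupted connection attempt against rm1 would never reach rm2, which is the behavior the reporter expected from the async thread after tez close().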
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/561/

[Jul 6, 2021 2:31:22 AM] (noreply) HDFS-16110. Remove unused method reportChecksumFailure in DFSClient (#3174)
[Jul 6, 2021 5:56:52 PM] (Ayush Saxena) HDFS-16101. Remove unuse variable and IoException in ProvidedStorageMap. Contributed by lei w.

-1 overall

The following subsystems voted -1:
    blanks pathlen unit xml

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

   XML :

      Parsing Error(s):
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

   Failed junit tests :
        hadoop.hdfs.TestDecommissionWithBackoffMonitor
        hadoop.hdfs.server.datanode.TestBlockScanner
        hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
        hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics
        hadoop.hdfs.server.datanode.TestDirectoryScanner
        hadoop.hdfs.server.datanode.TestLargeBlockReport
        hadoop.hdfs.server.datanode.TestDataNodeECN
        hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy
        hadoop.yarn.csi.client.TestCsiClient
        hadoop.tools.dynamometer.TestDynamometerInfra
        hadoop.tools.dynamometer.TestDynamometerInfra

   cc: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/561/artifact/out/results-compile-cc-root.txt [96K]

   javac: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/561/artifact/out/results-compile-javac-root.txt [376K]

   blanks: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/561/artifact/out/blanks-eol.txt [13M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/561/artifact/out/blanks-tabs.txt [2.0M]

   checkstyle: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/561/artifact/out/results-checkstyle-root.txt [16M]

   pathlen: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/561/artifact/out/results-pathlen.txt [16K]

   pylint: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/561/artifact/out/results-pylint.txt [20K]

   shellcheck: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/561/artifact/out/results-shellcheck.txt [28K]

   xml: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/561/artifact/out/xml.txt [24K]

   javadoc: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/561/artifact/out/results-javadoc-javadoc-root.txt [408K]

   unit: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/561/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [1.1M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/561/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-csi.txt [20K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/561/artifact/out/patch-unit-hadoop-tools_hadoop-dynamometer_hadoop-dynamometer-infra.txt [8.0K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/561/artifact/out/patch-unit-hadoop-tools_hadoop-dynamometer.txt [24K]

Powered by Apache Yetus 0.14.0-SNAPSHOT
https://yetus.apache.org
[jira] [Created] (YARN-10850) TimelineService v2 lists containers for all attempts when filtering for one
Benjamin Teke created YARN-10850:
------------------------------------

Summary: TimelineService v2 lists containers for all attempts when filtering for one
Key: YARN-10850
URL: https://issues.apache.org/jira/browse/YARN-10850
Project: Hadoop YARN
Issue Type: Bug
Components: timelinereader
Reporter: Benjamin Teke

When using the command
{code:java}
yarn container -list
{code}
with an application attempt ID, then based on the help text only the containers for that attempt should be listed:
{code:java}
 -list   List containers for application attempt when application attempt ID is provided. When application name is provided, then it finds the instances of the application based on app's own implementation, and -appTypes option must be specified unless it is the default yarn-service type. With app name, it supports optional use of -version to filter instances based on app version, -components to filter instances based on component names, -states to filter instances based on instance state.
{code}
When TimelineService v2 is enabled, however, all of the containers for the application are returned.
{code:java}
hrt_qa@ctr-e172-1620330694487-146061-01-02:/hwqe/hadoopqe$ yarn applicationattempt -list application_1625124233002_0007
21/07/01 09:32:23 INFO impl.TimelineReaderClientImpl: Initialized TimelineReader URI=http://ctr-e172-1620330694487-146061-01-04.hwx.site:8198/ws/v2/timeline/, clusterId=yarn-cluster
21/07/01 09:32:24 INFO client.AHSProxy: Connecting to Application History server at ctr-e172-1620330694487-146061-01-04.hwx.site/172.27.113.4:10200
21/07/01 09:32:24 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
Total number of application attempts :2
ApplicationAttempt-Id    State    AM-Container-Id    Tracking-URL
appattempt_1625124233002_0007_01    FAILED    container_e43_1625124233002_0007_01_01    http://ctr-e172-1620330694487-146061-01-03.hwx.site:8088/proxy/application_1625124233002_0007/
appattempt_1625124233002_0007_02    KILLED    container_e43_1625124233002_0007_02_01    http://ctr-e172-1620330694487-146061-01-03.hwx.site:8088/proxy/application_1625124233002_0007/
{code}
Querying the 2 app attempts produces the same output:
{code:java}
hrt_qa@ctr-e172-1620330694487-146061-01-02:/hwqe/hadoopqe$ yarn container -list appattempt_1625124233002_0007_01
21/07/01 09:32:35 INFO impl.TimelineReaderClientImpl: Initialized TimelineReader URI=http://ctr-e172-1620330694487-146061-01-04.hwx.site:8198/ws/v2/timeline/, clusterId=yarn-cluster
21/07/01 09:32:35 INFO client.AHSProxy: Connecting to Application History server at ctr-e172-1620330694487-146061-01-04.hwx.site/172.27.113.4:10200
21/07/01 09:32:35 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
21/07/01 09:32:36 INFO conf.Configuration: found resource resource-types.xml at file:/etc/hadoop/7.1.7.0-504/0/resource-types.xml
Total number of containers :12
Container-Id    Start Time    Finish Time    State    Host    Node Http Address    LOG-URL
container_e43_1625124233002_0007_02_04    N/A    N/A    COMPLETE    ctr-e172-1620330694487-146061-01-02.hwx.site:25454    ctr-e172-1620330694487-146061-01-02.hwx.site:8042    http://ctr-e172-1620330694487-146061-01-04.hwx.site:19888/jobhistory/logs/logs/ctr-e172-1620330694487-146061-01-02.hwx.site:25454/container_e43_1625124233002_0007_02_04/container_e43_1625124233002_0007_02_04/hrt_qa
container_e43_1625124233002_0007_02_05    N/A    N/A    COMPLETE    ctr-e172-1620330694487-146061-01-07.hwx.site:25454    ctr-e172-1620330694487-146061-01-07.hwx.site:8042    http://ctr-e172-1620330694487-146061-01-04.hwx.site:19888/jobhistory/logs/logs/ctr-e172-1620330694487-146061-01-07.hwx.site:25454/container_e43_1625124233002_0007_02_05/container_e43_1625124233002_0007_02_05/hrt_qa
container_e43_1625124233002_0007_02_03
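For illustration only, the expected filtering behavior can be modeled client-side by matching a container ID against the attempt it belongs to. The helper below is hypothetical (it is not what the TimelineReader actually does); it simply parses the ID formats shown in the output above.

```java
public class AttemptFilterSketch {
    // Returns true if containerId (container_[e<epoch>_]<ts>_<appId>_<attempt>_<n>)
    // belongs to attemptId (appattempt_<ts>_<appId>_<attempt>).
    static boolean belongsToAttempt(String containerId, String attemptId) {
        String[] a = attemptId.split("_");
        String ts = a[1];
        String app = a[2];
        int attemptNo = Integer.parseInt(a[3]);

        String[] c = containerId.split("_");
        // Skip the optional epoch component ("e43" in the output above).
        int i = c[1].startsWith("e") ? 2 : 1;
        return c[i].equals(ts)
            && c[i + 1].equals(app)
            && Integer.parseInt(c[i + 2]) == attemptNo;
    }
}
```

With the IDs from the log above, containers of attempt _02 would be excluded when filtering for appattempt_1625124233002_0007_01, which is what the help text promises.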
[jira] [Created] (YARN-10849) Clarify testcase documentation for TestServiceAM#testContainersReleasedWhenPreLaunchFails
Szilard Nemeth created YARN-10849:
-------------------------------------

Summary: Clarify testcase documentation for TestServiceAM#testContainersReleasedWhenPreLaunchFails
Key: YARN-10849
URL: https://issues.apache.org/jira/browse/YARN-10849
Project: Hadoop YARN
Issue Type: Improvement
Reporter: Szilard Nemeth
Assignee: Szilard Nemeth

There's a small comment added to the testcase org.apache.hadoop.yarn.service.TestServiceAM#testContainersReleasedWhenPreLaunchFails:
{code}
// Test to verify that the containers are released and the
// component instance is added to the pending queue when building the launch
// context fails.
{code}
However, it was not clear to me why building the "launch context" would fail. While the test passes, it throws an exception that tells the story:
{code}
2021-07-06 18:31:04,438 ERROR [pool-275-thread-1] containerlaunch.ContainerLaunchService (ContainerLaunchService.java:run(122)) - [COMPINSTANCE compa-0 : container_1625589063422_0001_01_01]: Failed to launch container.
java.lang.IllegalArgumentException: Can not create a Path from a null string
	at org.apache.hadoop.fs.Path.checkPathArg(Path.java:164)
	at org.apache.hadoop.fs.Path.<init>(Path.java:180)
	at org.apache.hadoop.yarn.service.provider.tarball.TarballProviderService.processArtifact(TarballProviderService.java:39)
	at org.apache.hadoop.yarn.service.provider.AbstractProviderService.buildContainerLaunchContext(AbstractProviderService.java:144)
	at org.apache.hadoop.yarn.service.containerlaunch.ContainerLaunchService$ContainerLauncher.run(ContainerLaunchService.java:107)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
{code}
This exception is thrown because the id of the Artifact object is unset (null); TarballProviderService.processArtifact verifies it and does not allow such artifacts. The aim of this jira is to add a clarifying comment or javadoc to this method.
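The failure mode described above can be sketched in a few lines. This is a hypothetical stand-in modeled on the Path.checkPathArg frame in the stack trace, not the actual Hadoop source: a null path string is rejected up front, which is why an Artifact with an unset id makes buildContainerLaunchContext fail before any container can launch.

```java
public class PathArgSketch {
    // Models the guard seen in the stack trace above: constructing a path
    // from a null string fails immediately with IllegalArgumentException.
    static String checkPathArg(String path) {
        if (path == null) {
            throw new IllegalArgumentException("Can not create a Path from a null string");
        }
        return path;
    }
}
```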
Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/352/

No changes

-1 overall

The following subsystems voted -1:
    asflicense hadolint mvnsite pathlen unit

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

   Failed junit tests :
        hadoop.util.TestDiskCheckerWithDiskIo
        hadoop.fs.TestFileUtil
        hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
        hadoop.hdfs.server.balancer.TestBalancer
        hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain
        hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
        hadoop.hdfs.server.datanode.TestDirectoryScanner
        hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
        hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat
        hadoop.hdfs.server.federation.resolver.order.TestLocalResolver
        hadoop.hdfs.server.federation.router.TestRouterQuota
        hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver
        hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
        hadoop.yarn.server.resourcemanager.TestClientRMService
        hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter
        hadoop.tools.TestDistCpSystem
        hadoop.yarn.sls.TestSLSRunner
        hadoop.resourceestimator.service.TestResourceEstimatorService
        hadoop.resourceestimator.solver.impl.TestLpSolver

   cc: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/352/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/352/artifact/out/diff-compile-javac-root.txt [496K]

   checkstyle: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/352/artifact/out/diff-checkstyle-root.txt [16M]

   hadolint: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/352/artifact/out/diff-patch-hadolint.txt [4.0K]

   mvnsite: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/352/artifact/out/patch-mvnsite-root.txt [584K]

   pathlen: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/352/artifact/out/pathlen.txt [12K]

   pylint: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/352/artifact/out/diff-patch-pylint.txt [48K]

   shellcheck: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/352/artifact/out/diff-patch-shellcheck.txt [56K]

   shelldocs: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/352/artifact/out/diff-patch-shelldocs.txt [48K]

   whitespace: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/352/artifact/out/whitespace-eol.txt [12M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/352/artifact/out/whitespace-tabs.txt [1.3M]

   javadoc: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/352/artifact/out/patch-javadoc-root.txt [32K]

   unit: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/352/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [240K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/352/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [432K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/352/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt [12K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/352/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [40K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/352/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [124K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/352/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt [96K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/352/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [104K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/352/artifact/out/patch-unit-hadoop-tools_hadoop-distcp.txt [32K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/352/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt [20K]
      htt