Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64

2024-01-01 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/607/

No changes




-1 overall


The following subsystems voted -1:
blanks hadolint mvnsite pathlen spotbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
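
Several of the files listed above appear to be intentionally malformed or DTD-referencing test fixtures, so the xml check is expected to flag them. As a rough illustration of why a hardened parser rejects a file such as external-dtd.xml, a minimal Java sketch (assuming plain JAXP with DOCTYPE declarations disallowed; the CI's xml check may be configured differently):

    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;

    public class XmlParseCheck {
        public static void main(String[] args) throws Exception {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            // Refuse DOCTYPE declarations so external DTDs are never fetched.
            dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
            try {
                dbf.newDocumentBuilder().parse(new File(args[0]));
                System.out.println(args[0] + ": parsed OK");
            } catch (Exception e) {
                // Fixtures like external-dtd.xml are expected to land here.
                System.out.println(args[0] + ": parsing error: " + e.getMessage());
            }
        }
    }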
 

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs
   Redundant nullcheck of oldLock, which is known to be non-null, in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory) at DataStorage.java:[line 695]
   Redundant nullcheck of metaChannel, which is known to be non-null, in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long, FileInputStream, FileChannel, String) at MappableBlockLoader.java:[line 138]
   Redundant nullcheck of blockChannel, which is known to be non-null, in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) at MemoryMappableBlockLoader.java:[line 75]
   Redundant nullcheck of blockChannel, which is known to be non-null, in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long, FileInputStream, FileInputStream, String, ExtendedBlockId) at NativePmemMappableBlockLoader.java:[line 85]
   Redundant nullcheck of metaChannel, which is known to be non-null, in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$PmemMappedRegion, long, FileInputStream, FileChannel, String) at NativePmemMappableBlockLoader.java:[line 130]
   Dead store to sharedDirs in org.apache.hadoop.hdfs.server.namenode.NameNode.format(Configuration, boolean, boolean) at NameNode.java:[line 1383]
   org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager$UserCounts doesn't override java.util.ArrayList.equals(Object) at RollingWindowManager.java:[line 1]
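
For readers unfamiliar with the redundant-nullcheck pattern reported above, a minimal, purely illustrative Java sketch (not the Hadoop source) of the kind of code SpotBugs flags, plus the usual fix:

    import java.io.FileInputStream;
    import java.io.IOException;

    public class RedundantNullcheckExample {
        static long sizeOf(String path) throws IOException {
            // The constructor either succeeds or throws; it never returns null.
            FileInputStream in = new FileInputStream(path);
            try {
                return in.getChannel().size();
            } finally {
                if (in != null) {   // flagged: redundant null check of a known non-null value
                    in.close();
                }
            }
        }
        // Rewriting the method with try-with-resources removes both the
        // explicit check and the warning.
    }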

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn
   org.apache.hadoop.yarn.client.api.impl.TimelineConnector.DEFAULT_SOCKET_TIMEOUT isn't final but should be, at TimelineConnector.java:[line 82]
   Redundant nullcheck of it, which is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerRe

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2024-01-01 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1458/

No changes




-1 overall


The following subsystems voted -1:
blanks hadolint pathlen spotbugs xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs
   Dead store to sharedDirs in org.apache.hadoop.hdfs.server.namenode.NameNode.format(Configuration, boolean, boolean) at NameNode.java:[line 1383]

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn
   org.apache.hadoop.yarn.client.api.impl.TimelineConnector.DEFAULT_SOCKET_TIMEOUT isn't final but should be, at TimelineConnector.java:[line 82]

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common
   org.apache.hadoop.yarn.client.api.impl.TimelineConnector.DEFAULT_SOCKET_TIMEOUT isn't final but should be, at TimelineConnector.java:[line 82]

spotbugs :

   module:hadoop-hdfs-project
   Dead store to sharedDirs in org.apache.hadoop.hdfs.server.namenode.NameNode.format(Configuration, boolean, boolean) at NameNode.java:[line 1383]

spotbugs :

   module:hadoop-yarn-project
   org.apache.hadoop.yarn.client.api.impl.TimelineConnector.DEFAULT_SOCKET_TIMEOUT isn't final but should be, at TimelineConnector.java:[line 82]

spotbugs :

   module:root
   Dead store to sharedDirs in org.apache.hadoop.hdfs.server.namenode.NameNode.format(Configuration, boolean, boolean) at NameNode.java:[line 1383]
   org.apache.hadoop.yarn.client.api.impl.TimelineConnector.DEFAULT_SOCKET_TIMEOUT isn't final but should be, at TimelineConnector.java:[line 82]
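
The two warnings repeated across the aggregated modules above boil down to two common SpotBugs patterns: a dead local store and a mutable static "constant". A small illustrative Java sketch (not the Hadoop source):

    import java.util.ArrayList;
    import java.util.List;

    public class SpotBugsPatterns {
        // Flagged as "isn't final but should be": a static, publicly visible
        // "constant" that is actually mutable. The fix is to declare it final.
        public static int DEFAULT_SOCKET_TIMEOUT = 60_000;

        static void format(List<String> dirs) {
            // Flagged as a dead store: the assigned value is never read again,
            // so the assignment (and here the whole local) can be removed.
            List<String> sharedDirs = new ArrayList<>(dirs);
        }
    }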
  

   cc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1458/artifact/out/results-compile-cc-root.txt
 [96K]

   javac:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1458/artifact/out/results-compile-javac-root.txt
 [12K]

   blanks:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1458/artifact/out/blanks-eol.txt
 [15M]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1458/artifact/out/blanks-tabs.txt
 [2.0M]

   checkstyle:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1458/artifact/out/results-checkstyle-root.txt
 [13M]

   hadolint:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1458/artifact/out/results-hadolint.txt
 [24K]

   pathlen:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1458/artifact/out/results-pathlen.txt
 [16K]

   pylint:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1458/artifact/out/results-pylint.txt
 [20K]

   shellcheck:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1458/artifact/out/results-shellcheck.txt
 [24K]

   xml:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1458/artifact/out/xml.txt
 [24K]

   javadoc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1458/artifact/out/results-javadoc-javadoc-root.txt
 [244K]

   spotbugs:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1458/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 [8.0K]
  
h

[jira] [Resolved] (HADOOP-17912) ABFS: Support for Encryption Context

2024-01-01 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17912.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

> ABFS: Support for Encryption Context
> 
>
> Key: HADOOP-17912
> URL: https://issues.apache.org/jira/browse/HADOOP-17912
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.1
>Reporter: Sumangala Patki
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Support for customer-provided encryption keys at the file level,
> superseding the global (account-level) key used in HADOOP-17536.
> The ABFS driver will support an "EncryptionContext" plugin for retrieving
> encryption information, the implementation of which should be provided by
> the client. The keys/context retrieved will be sent via request headers to
> the server, which will store the encryption context. Subsequent REST calls
> to the server that access data or user metadata of the file will require
> fetching the encryption context through a GetFileProperties call and
> retrieving the key from the custom provider before sending the request.
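
Below is one hypothetical sketch of what such an encryption-context plugin could look like; the interface name and method signatures are assumptions drawn only from the description above, not the actual ABFS API.

    // Hypothetical sketch only: names and signatures are assumptions based on
    // the JIRA description, not the real org.apache.hadoop.fs.azurebfs API.
    public interface EncryptionContextProvider {

        // Invoked when a file is created: returns the encryption context that
        // the ABFS driver sends to the server in a request header for storage.
        byte[] createEncryptionContext(String path);

        // Invoked on later data/metadata requests: given the context fetched
        // back from the server (e.g. via GetFileProperties), returns the
        // per-file key to attach to the request before it is sent.
        byte[] getEncryptionKey(String path, byte[] encryptionContext);
    }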



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18540) Upgrade Bouncy Castle to 1.70

2024-01-01 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18540.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

> Upgrade Bouncy Castle to 1.70
> -
>
> Key: HADOOP-18540
> URL: https://issues.apache.org/jira/browse/HADOOP-18540
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: D M Murali Krishna Reddy
>Assignee: D M Murali Krishna Reddy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> Upgrade Bouncycastle to 1.70 to resolve
>  
> |[[sonatype-2021-4916] CWE-327: Use of a Broken or Risky Cryptographic 
> Algorithm|https://ossindex.sonatype.org/vulnerability/sonatype-2021-4916?component-type=maven&component-name=org.bouncycastle/bcprov-jdk15on]|
> |[[sonatype-2019-0673] CWE-400: Uncontrolled Resource Consumption ('Resource 
> Exhaustion')|https://ossindex.sonatype.org/vulnerability/sonatype-2019-0673?component-type=maven&component-name=org.bouncycastle/bcprov-jdk15on]|



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [DISCUSS][HDFS] Add rust binding for libhdfs

2024-01-01 Thread Steve Loughran
I hear Owen O'Malley has been learning Rust, and as he left Cloudera a year
ago, he'll be missing GitHub and JIRA.

On Thu, 21 Dec 2023 at 15:00, Ayush Saxena  wrote:

> It looks pretty challenging to me. Most of the committers aren't
> technically equipped to review this code, so getting the initial code
> reviewed & merged would itself be a challenge.
>
> Looking at the repo, it has only 1 or 2 major contributors, which is
> itself a red flag: the bus factor is pretty low. If we don't find
> volunteers in the future, we would be stuck with dead code that most
> of us don't know how to fix or maintain. If any CVE is reported
> against this code post-release, that would be a challenge for us to
> fix.
>
> Quoting:
> > the Rust
> community has developed around 10 different HDFS client projects.
> However, almost all of them
> are no longer maintained.
>
> If they couldn't do it, how will we be able to? And this isn't a very
> good statistic to quote :-)
>
>
> Well, I don't have objections to having this as a separate repo in
> Hadoop. If others are fine with it, I can try to help within my
> capacity, but I still have doubts about how easy it would be to push
> code or get votes on releases of a project most people don't have
> knowledge of; developing a community and so on seems like an
> incubator thing to me.
>
> -Ayush
>
> On Thu, 21 Dec 2023 at 19:01, Xuanwo  wrote:
> >
> > Thanks Xiaoqiao He!
> >
> > Let me provide more context about this project.
> >
> > libhdfs-rust aims to provide native HDFS client support for Rust, a
> > rapidly growing systems programming language commonly used in modern
> > infrastructure such as databases. With libhdfs-rust, Rust developers
> > can more easily integrate with HDFS. libhdfs-rust is analogous to both
> > libhdfs (C API) and libhdfspp
> > <https://github.com/apache/hadoop/tree/trunk/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp>
> > (C++ API). Its current codebase builds upon libhdfs, but there are
> > plans to rewrite it entirely in pure Rust. Consequently, libhdfs-rust
> > will interface directly with the HDFS Java client via JNI, making it
> > fully parallel to both libhdfs and libhdfspp.
> >
> > We have three options to consider:
> >
> > A: Integrate libhdfs-rust into the Hadoop repository, placing it under
> > 'hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native'.
> > B: Accept libhdfs-rust as a subproject and establish a new repository
> > named 'hadoop-hdfs-rust-client' (or another suitable name).
> > C: Maintain libhdfs-rust as an independent project outside of Hadoop.
> >
> > I personally prefer Option B since:
> >
> > For Option A
> >
> > The release process for Hadoop is already quite complex. We should
> > avoid placing additional burdens on the Release Managers, especially
> > when it involves integrating a new language.
> >
> > And it's impossible to wait for libhdfs-rust to become mature and
> > stable enough to catch the release train.
> >
> > For Option C
> >
> > libhdfs-rust is exactly the same as libhdfs & libhdfspp
> > <https://github.com/apache/hadoop/tree/trunk/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp>
> > but for Rust. Building a community for libhdfs-rust outside of Hadoop
> > is challenging. In fact, numerous attempts have been made: the Rust
> > community has developed around 10 different HDFS client projects.
> > However, almost all of them are no longer maintained.
> >
> > In conclusion, I believe that Option B is the best choice for us: we
> > can develop a Rust project in the Hadoop community, attract more Rust
> > users, and recruit additional committers from the Rust community.
> >
> >
> > On Wed, Dec 20, 2023, at 21:53, Xiaoqiao He wrote:
> > > Thanks Xuanwo for your work. I believe it is valuable to enlarge the
> > > Hadoop ecosystem.
> > >
> > > I am also concerned that it will involve more hard work to release
> > > and match versions, especially for one who is not familiar with C or
> > > Rust. Moreover, I am not aware of the difference between `accept
> > > hdfs-sys as part of the hadoop project` and `one separate project`.
> > >
> > > I think one smooth solution is to reference hadoop-thirdparty[1],
> > > which is a Hadoop sub-project but split into a separate repo with its
> > > own release line etc., if it is accepted.
> > >
> > > cc @Ayush Saxena @Wei-Chiu Chuang @Iñigo Goiri @Shilun Fan and other
> > > folks, what do you think? Thanks.
> > >
> > > Best Regards,
> > > - He Xiaoqiao
> > >
> > > [1] https://github.com/apache/hadoop-thirdparty
> > >
> > > On Wed, Dec 20, 2023 at 6:17 PM Xuanwo  wrote:
> > >> I'm fine to start work under a new repo, and I'm willing to help
> > >> maintain this repo. The repo could be named after hadoop-libhdfs-rust
> > >> or just libh
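
For context on the layering discussed in this thread: libhdfs drives the HDFS Java client through JNI, so a Rust binding built on libhdfs ultimately bottoms out in Java-side calls like the following sketch (real Hadoop client classes, simplified usage):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsReadSketch {
        public static void main(String[] args) throws Exception {
            // Roughly what libhdfs' hdfsConnect / hdfsOpenFile / hdfsRead reach via JNI.
            Configuration conf = new Configuration();  // picks up core-site.xml / hdfs-site.xml
            try (FileSystem fs = FileSystem.get(conf);
                 FSDataInputStream in = fs.open(new Path(args[0]))) {
                byte[] buf = new byte[4096];
                int n = in.read(buf);
                System.out.println("read " + n + " bytes from " + args[0]);
            }
        }
    }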

Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2024-01-01 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1258/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.TestFileUtil 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.TestLeaseRecovery2 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.fs.viewfs.TestViewFileSystemHdfs 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.mapreduce.lib.input.TestLineRecordReader 
   hadoop.mapred.TestLineRecordReader 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
   hadoop.yarn.sls.appmaster.TestAMSimulator 
   hadoop.yarn.sls.TestSLSRunner 
   
hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceAllocator
 
   
hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceHandlerImpl
 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   
hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1258/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1258/artifact/out/diff-compile-javac-root.txt
  [488K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1258/artifact/out/diff-checkstyle-root.txt
  [14M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1258/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1258/artifact/out/patch-mvnsite-root.txt
  [572K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1258/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1258/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1258/artifact/out/diff-patch-shellcheck.txt
  [72K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1258/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1258/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1258/artifact/out/patch-javadoc-root.txt
  [36K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1258/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [220K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1258/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [1.8M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1258/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [36K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1258/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [16K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1258/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
  [104K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1258/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt
  [20K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1258/artifact/out/patch-unit-hadoop-tools_hadoop-resourceestimator.txt
  [16K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1258/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt
  [32K]
   
https://ci-hadoop.a