Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-03-11 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1436/

[Mar 11, 2020 8:39:10 PM] (github) HADOOP-16890. Change in expiry calculation 
for MSI token provider.


Re: [DISCUSS] Accelerate Hadoop dependency updates

2020-03-11 Thread Wei-Chiu Chuang
FWIW, we are updating Guava in Spark and Hive at Cloudera. I don't know which
Apache versions they will land in, but we'll upstream them for sure.

The Guava change is debatable; it's not as critical as the others. There are
critical vulnerabilities in other dependencies for which we have no choice but
to update to a new major/minor version, because we are so far behind. Given
the critical nature, I think the risk is worth it, and backporting to older
maintenance releases is warranted. Moreover, we cut minor releases at best
once a year; that is too slow a response to a critical vulnerability.

On Wed, Mar 11, 2020 at 5:02 PM Igor Dvorzhak 
wrote:

> Generally I'm for updating dependencies, but I think that Hadoop should
> stick with semantic versioning and not make major and minor dependency
> updates in subminor releases.
>
> For example, Hadoop 3.2.1 updated Guava to 27.0-jre, and because of this
> Spark 3.0 is stuck with Hadoop 3.2.0: it uses Hive 2.3.6, which doesn't
> support Guava 27.0-jre.
>
> It would be better to make dependency upgrades when releasing new
> major/minor versions; for example, the Guava 27.0-jre upgrade was more
> appropriate for the Hadoop 3.3.0 release than for 3.2.1.
>
> On Tue, Mar 10, 2020 at 3:03 PM Wei-Chiu Chuang
>  wrote:
>
>> I haven't heard any feedback so far, but I want to suggest:
>>
>> use the hadoop-thirdparty repository to host any dependencies that are
>> known to break compatibility.
>>
>> Candidate #1 guava
>> Candidate #2 Netty
>> Candidate #3 Jetty
>>
>> In fact, HBase shades these dependencies for exactly the same reason.
>>
>> As an example of the cost of compatibility breakage: we have spent the
>> last six months backporting the Guava update (Guava 11 -> 27) throughout
>> Cloudera's stack, and we are still not done, because we have to update
>> Guava in Hadoop, Hive, Spark, and so on, and Hadoop's, Hive's, and Spark's
>> Guava is on the classpath of every application.
>>
>> Thoughts?
>>
>> On Sat, Mar 7, 2020 at 9:31 AM Wei-Chiu Chuang 
>> wrote:
>>
>> > Hi Hadoop devs,
>> >
>> > In the past, Hadoop has tended to be pretty far behind the latest
>> > versions of its dependencies. Part of that is due to fear of the breaking
>> > changes brought in by dependency updates.
>> >
>> > However, things have changed dramatically over the past few years. With
>> > more focus on security, more vulnerabilities are being discovered in our
>> > dependencies, and users put more pressure on us to patch Hadoop (and its
>> > ecosystem) to use the latest dependency versions.
>> >
>> > As an example, Jackson-databind had 20 CVEs published in the last year
>> > alone.
>> >
>> https://www.cvedetails.com/product/42991/Fasterxml-Jackson-databind.html?vendor_id=15866
>> >
>> > Jetty: 4 CVEs in 2019:
>> >
>> https://www.cvedetails.com/product/34824/Eclipse-Jetty.html?vendor_id=10410
>> >
>> > We can no longer let Hadoop stay behind. The longer we stay behind, the
>> > harder it is to update. A good example is the Jersey 1 -> 2 migration,
>> > HADOOP-15984, contributed by Akira. Jersey 1 is no longer supported, but
>> > the Jersey 2 migration is hard. If any critical vulnerability is found in
>> > Jersey 1, it will leave us in a bad situation, since we can't simply bump
>> > the Jersey version and be done.
>> >
>> > Hadoop 3 adds new public artifacts that shade these dependencies. We
>> > should advocate that downstream applications use these public artifacts
>> > to avoid breakage.
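
To make that concrete, here is a minimal sketch of what downstream code would
look like against a shaded thirdparty artifact (assuming the hadoop-shaded-guava
artifact from hadoop-thirdparty and its org.apache.hadoop.thirdparty relocation
prefix; the names are illustrative, so check the published POMs):

    // Depend on org.apache.hadoop.thirdparty:hadoop-shaded-guava instead of
    // plain Guava and import the relocated package; the application's own
    // Guava version can then differ from Hadoop's without classpath conflicts.
    import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableList;

    public class ShadedGuavaExample {
      public static void main(String[] args) {
        // Same Guava API, different (relocated) package name.
        ImmutableList<String> candidates = ImmutableList.of("guava", "netty", "jetty");
        System.out.println(candidates);
      }
    }
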
>> >
>> > I'd like to hear your thoughts: are you okay with Hadoop keeping up with
>> > the latest dependency updates, or would you rather stay behind to ensure
>> > compatibility?
>> >
>> > Coupled with that, I'd like to call for more frequent Hadoop releases
>> > for the same purpose. IMHO that will require better infrastructure to
>> > assist with the release work and some rethinking of our current Hadoop
>> > code structure, such as separating each subproject into its own
>> > repository with its own release cadence. This can be controversial, but
>> > I think it will be good for the project in the long run.
>> >
>> > Thanks,
>> > Wei-Chiu
>> >
>>
>


Re: [DISCUSS] Accelerate Hadoop dependency updates

2020-03-11 Thread Igor Dvorzhak
Generally I'm for updating dependencies, but I think that Hadoop should
stick with semantic versioning and not make major and minor dependency
updates in subminor releases.

For example, Hadoop 3.2.1 updated Guava to 27.0-jre, and because of this
Spark 3.0 is stuck with Hadoop 3.2.0: it uses Hive 2.3.6, which doesn't
support Guava 27.0-jre.
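
To make the breakage concrete, here is a hypothetical illustration of the kind
of API change a major Guava bump brings (a well-known removal, not necessarily
the exact call Hive 2.3.6 fails on): Objects.toStringHelper() was removed from
Guava, so code compiled against an old Guava throws NoSuchMethodError once
27.0-jre is on the classpath and has to be rewritten against MoreObjects:

    // Old code: return com.google.common.base.Objects.toStringHelper(this)...
    // That method no longer exists in Guava 27.0-jre; the MoreObjects
    // replacement below only compiles and runs against the newer Guava.
    import com.google.common.base.MoreObjects;

    public class GuavaBreakageExample {
      private final String component = "hive-2.3.6";

      @Override
      public String toString() {
        return MoreObjects.toStringHelper(this).add("component", component).toString();
      }

      public static void main(String[] args) {
        System.out.println(new GuavaBreakageExample());
      }
    }
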

It would be better to make dependency upgrades when releasing new
major/minor versions; for example, the Guava 27.0-jre upgrade was more
appropriate for the Hadoop 3.3.0 release than for 3.2.1.

On Tue, Mar 10, 2020 at 3:03 PM Wei-Chiu Chuang
 wrote:

> I haven't heard any feedback so far, but I want to suggest:
>
> use the hadoop-thirdparty repository to host any dependencies that are
> known to break compatibility.
>
> Candidate #1 guava
> Candidate #2 Netty
> Candidate #3 Jetty
>
> In fact, HBase shades these dependencies for exactly the same reason.
>
> As an example of the cost of compatibility breakage: we have spent the last
> six months backporting the Guava update (Guava 11 -> 27) throughout
> Cloudera's stack, and we are still not done, because we have to update Guava
> in Hadoop, Hive, Spark, and so on, and Hadoop's, Hive's, and Spark's Guava
> is on the classpath of every application.
>
> Thoughts?
>
> On Sat, Mar 7, 2020 at 9:31 AM Wei-Chiu Chuang  wrote:
>
> > Hi Hadoop devs,
> >
> > In the past, Hadoop has tended to be pretty far behind the latest
> > versions of its dependencies. Part of that is due to fear of the breaking
> > changes brought in by dependency updates.
> >
> > However, things have changed dramatically over the past few years. With
> > more focus on security, more vulnerabilities are being discovered in our
> > dependencies, and users put more pressure on us to patch Hadoop (and its
> > ecosystem) to use the latest dependency versions.
> >
> > As an example, Jackson-databind had 20 CVEs published in the last year
> > alone.
> >
> https://www.cvedetails.com/product/42991/Fasterxml-Jackson-databind.html?vendor_id=15866
> >
> > Jetty: 4 CVEs in 2019:
> >
> https://www.cvedetails.com/product/34824/Eclipse-Jetty.html?vendor_id=10410
> >
> > We can no longer let Hadoop stay behind. The longer we stay behind, the
> > harder it is to update. A good example is the Jersey 1 -> 2 migration,
> > HADOOP-15984, contributed by Akira. Jersey 1 is no longer supported, but
> > the Jersey 2 migration is hard. If any critical vulnerability is found in
> > Jersey 1, it will leave us in a bad situation, since we can't simply bump
> > the Jersey version and be done.
> >
> > Hadoop 3 adds new public artifacts that shade these dependencies. We
> > should advocate that downstream applications use these public artifacts
> > to avoid breakage.
> >
> > I'd like to hear your thoughts: are you okay with Hadoop keeping up with
> > the latest dependency updates, or would you rather stay behind to ensure
> > compatibility?
> >
> > Coupled with that, I'd like to call for more frequent Hadoop releases
> > for the same purpose. IMHO that will require better infrastructure to
> > assist with the release work and some rethinking of our current Hadoop
> > code structure, such as separating each subproject into its own
> > repository with its own release cadence. This can be controversial, but
> > I think it will be good for the project in the long run.
> >
> > Thanks,
> > Wei-Chiu
> >
>




[VOTE] Release Apache Hadoop Thirdparty 1.0.0 - RC1

2020-03-11 Thread Vinayakumar B
Hi folks,

Thanks to everyone's help on this release.

I have re-created a release candidate (RC1) for Apache Hadoop Thirdparty
1.0.0.

RC Release artifacts are available at :
  http://home.apache.org/~vinayakumarb/release/hadoop-thirdparty-1.0.0-RC1/

Maven artifacts are available in staging repo:
https://repository.apache.org/content/repositories/orgapachehadoop-1261/

The RC tag in git is here:
https://github.com/apache/hadoop-thirdparty/tree/release-1.0.0-RC1

And my public key is at:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

*This vote will run for 5 days, ending on March 18th 2020 at 11:59 pm IST.*

For testing, I have verified Hadoop trunk compilation with
   "-DdistMgmtSnapshotsUrl=
https://repository.apache.org/content/repositories/orgapachehadoop-1261/
 -Dhadoop-thirdparty-protobuf.version=1.0.0"

My +1 to start.

-Vinay


Re: [VOTE] Release Apache Hadoop Thirdparty 1.0.0

2020-03-11 Thread Vinayakumar B
Hello everyone.

RC0 has been canceled.

Since the issues mentioned above have now been fixed, I will soon create RC1
and start a new VOTE thread.

Thanks for trying out the RC,
-Vinay


On Tue, Mar 3, 2020 at 11:41 AM Ayush Saxena  wrote:

> Hi Vinay
> Thanx for driving the release.
> Verified checksums and tried building from source.
> Everything seems to be working fine.
> But I feel the concerns regarding licenses are valid.
> IMO we should fix them and include HADOOP-16895 in the release as well.
>
> -Ayush
>
> > On 29-Feb-2020, at 1:45 AM, Vinayakumar B 
> wrote:
> >
> > https://issues.apache.org/jira/browse/HADOOP-16895 has been created for
> > handling the LICENSE and NOTICE files.
> > A PR has also been raised with a proposal; please validate:
> > https://github.com/apache/hadoop-thirdparty/pull/6
> >
> > -Vinay
> >
> >
> >> On Fri, Feb 28, 2020 at 11:48 PM Vinayakumar B  >
> >> wrote:
> >>
> >> Thanks Elek for detailed verification.
> >>
> >> Please find inline replies.
> >>
> >> -Vinay
> >>
> >>
> >>> On Fri, Feb 28, 2020 at 7:49 PM Elek, Marton  wrote:
> >>>
> >>>
> >>> Thank you very much for working on this release, Vinay; a 1.0.0 is
> >>> always hard work...
> >>>
> >>>
> >>> 1. I downloaded it and can build it from source
> >>>
> >>> 2. Checked the signature and the sha512 of the src package and they are
> >>> fine
> >>>
> >>> 3. Yetus seems to be included in the source package. I am not sure if
> >>> it's intentional but I would remove the patchprocess directory from the
> >>> tar file.
> >>>
> >> Since the dev-support/create-release script and assembly file are copied
> >> from the Hadoop repo, this issue exists in Hadoop source release packages
> >> as well; for example, I checked the 3.1.2 and 2.10 src packages.
> >> I will raise a Jira and fix this for both Hadoop and thirdparty.
> >>
> >>> 4. NOTICE.txt seems to be outdated (I am not sure, but I think the
> >>> Export Notice is unnecessary, especially for the source release; the
> >>> note about bouncycastle and the YARN server is also unnecessary).
> >>>
> >> Again, NOTICE.txt was copied from Hadoop and kept as is. I will create a
> >> Jira to decide about the NOTICE and LICENSE files.
> >>
> >>> 5. NOTICE-binary and LICENSE-binary seem to be unused (and they contain
> >>> unrelated entries, especially the NOTICE). IMHO
> >>>
> >> We can decide in the Jira whether NOTICE-binary and LICENSE-binary
> >> should be used or not.
> >>
> >>> 6. As far as I understand, the binary release in this case is the Maven
> >>> artifact. IANAL, but the original protobuf license seems to be missing
> >>> from "unzip -p hadoop-shaded-protobuf_3_7-1.0.0.jar META-INF/LICENSE.txt"
> >>>
> >>
> >> I observed that there is one more file, "META-INF/DEPENDENCIES", generated
> >> by the shade plugin, which has references to the shaded artifacts and
> >> points to the link of the original artifact's LICENSE. I think this should
> >> be sufficient regarding protobuf's original license.
> >> IMO, "META-INF/LICENSE.txt" should point to the current project's LICENSE,
> >> which in turn can have the contents of, or pointers to, the dependencies'
> >> licenses. A similar approach is followed in the hadoop-shaded-client jars.
> >>
> >> Hadoop's artifacts will also be uploaded to the Maven repo during a
> >> release, and they do not carry all LICENSE files either. They just say
> >> "See licenses/ for text of these licenses", which does not exist in the
> >> artifact. Maybe we need to fix this too.
> >>
> >>> 7. Minor nit: I would suggest using only the filename in the sha512
> >>> files (instead of having the /build/source/target prefix). That would
> >>> make it easier to use the `sha512 -c` command to validate the checksums.
> >>>
> >>>
> >> Again, this comes from the create-release script; I will update the
> >> script.
> >>
> >>> Thanks again for working on this,
> >>> Marton
> >>>
> >>> PS: I am not experienced enough with licensing to judge which of these
> >>> are blocking, and I might be wrong.
> >>>
> >>>
> >> IMO, none of these should be blocking, and they can be handled before the
> >> next release. Still, if someone feels they should be fixed and the RC cut
> >> again, I am open to it.
> >>
> >> Thanks.
> >>
> >>>
> >>>
> >>> On 2/25/20 8:17 PM, Vinayakumar B wrote:
>  Hi folks,
> 
>  Thanks to everyone's help on this release.
> 
>  I have created a release candidate (RC0) for Apache Hadoop Thirdparty
> >>> 1.0.0.
> 
>  RC Release artifacts are available at :
> 
> >>>
> http://home.apache.org/~vinayakumarb/release/hadoop-thirdparty-1.0.0-RC0/
> 
>  Maven artifacts are available in staging repo:
> 
> >>>
> https://repository.apache.org/content/repositories/orgapachehadoop-1258/
> 
>  The RC tag in git is here:
>  https://github.com/apache/hadoop-thirdparty/tree/release-1.0.0-RC0
> 
>  And my public key is at:
>  https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> 
>  *This vote will run for 5 days, ending on March 1st 2020 at 11:59 pm
> >>> IST.*
> 
>  For the testing, I have verified Had

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-03-11 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1435/

[Mar 10, 2020 2:17:45 PM] (ericp) YARN-942.
[Mar 10, 2020 3:07:46 PM] (snemeth) YARN-10168. FS-CS Converter: tool doesn't 
handle min/max resource
[Mar 10, 2020 3:35:04 PM] (snemeth) YARN-10002. Code cleanup and improvements 
in ConfigurationStoreBaseTest.
[Mar 10, 2020 3:44:48 PM] (snemeth) YARN-9354. Resources should be created with 
ResourceTypesTestHelper




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   Redundant nullcheck of dir, which is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:[line 66] 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At 
CosNInputStream.java:[line 87] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, 
byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, 
File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long):in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long): new String(byte[]) At 
CosNativeFileSystemStore.java:[line 178] 
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, 
String, String, int) may fail to clean up java.io.InputStream Obligation to 
clean up resource created at CosNativeFileSystemStore.java:fail to clean up 
java.io.InputStream Obligation to clean up resource created at 
CosNativeFileSystemStore.java:[line 252] is not discharged 

Failed CTEST tests :

   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.hdfs.TestFileCreation 
   hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.server.namenode.TestDecommissioningStatus 
   hadoop.yarn.client.api.impl.TestAMRMClient 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1435/artifact/out/diff-compile-cc-root.txt
  [8.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1435/artifact/out/diff-compile-javac-root.txt
  [424K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1435/artifact/out/diff-checkstyle-root.txt
  [16M]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1435/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1435/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1435/artifact/out/diff-patch-shellcheck.txt
  [16K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1435/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1435/artifact/out/whitespace-eol.txt
  [9.9M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1435/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
h

Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-03-11 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/621/

[Mar 10, 2020 3:48:49 PM] (ericp) YARN-942.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.balancer.TestBalancerRPCDelay 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.registry.secure.TestSecureLogins 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/621/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/621/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/621/artifact/out/diff-compile-cc-root-jdk1.8.0_242.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/621/artifact/out/diff-compile-javac-root-jdk1.8.0_242.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/621/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/621/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/621/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/621/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/621/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/621/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/621/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/621/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/621/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/621/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/621/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/621/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_242.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/621/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [156K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/621/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [232K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/621/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/621/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt
  [12K]
   
https://builds.apa