Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-07-30 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/543/

[Jul 30, 2018 2:53:44 AM] (sammi.chen) HADOOP-15607. AliyunOSS: fix duplicated partNumber issue in
[Jul 30, 2018 9:18:04 AM] (sunilg) YARN-8591. [ATSv2] NPE while checking for entity acl in non-secure
[Jul 30, 2018 10:20:04 AM] (brahma) HDFS-12716. 'dfs.datanode.failed.volumes.tolerated' to support minimum




-1 overall


The following subsystems voted -1:
compile mvninstall pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
   hadoop.fs.contract.rawlocal.TestRawlocalContractAppend 
   hadoop.fs.TestFileUtil 
   hadoop.fs.TestFsShellCopy 
   hadoop.fs.TestFsShellList 
   hadoop.http.TestHttpServer 
   hadoop.http.TestHttpServerLogs 
   hadoop.io.nativeio.TestNativeIO 
   hadoop.ipc.TestIPC 
   hadoop.ipc.TestSocketFactory 
   hadoop.metrics2.impl.TestStatsDMetrics 
   hadoop.security.TestSecurityUtil 
   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.security.token.TestDtUtilShell 
   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.util.TestNativeCodeLoader 
   hadoop.hdfs.client.impl.TestBlockReaderLocal 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl 
   hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage 
   hadoop.hdfs.server.datanode.TestBlockScanner 
   hadoop.hdfs.server.datanode.TestDataNodeFaultInjector 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.server.namenode.ha.TestHAAppend 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots 
   hadoop.hdfs.server.namenode.snapshot.TestSnapRootDescendantDiff 
   hadoop.hdfs.server.namenode.TestCheckpoint 
   hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs 
   hadoop.hdfs.TestDFSShell 
   hadoop.hdfs.TestDFSStripedInputStream 
   hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy 
   hadoop.hdfs.TestDFSUpgradeFromImage 
   hadoop.hdfs.TestEncryptionZones 
   hadoop.hdfs.TestErasureCodingPolicies 
   hadoop.hdfs.TestErasureCodingPolicyWithSnapshotWithRandomECPolicy 
   hadoop.hdfs.TestExtendedAcls 
   hadoop.hdfs.TestFetchImage 
   hadoop.hdfs.TestHDFSFileSystemContract 
   hadoop.hdfs.TestMaintenanceState 
   hadoop.hdfs.TestPread 
   hadoop.hdfs.TestReadStripedFileWithDNFailure 
   hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy 
   hadoop.hdfs.TestRollingUpgrade 
   hadoop.hdfs.TestSecureEncryptionZoneWithKMS 
   hadoop.hdfs.TestTrashWithSecureEncryptionZones 
   hadoop.hdfs.tools.TestDFSAdmin 
   hadoop.hdfs.web.TestWebHDFS 
   hadoop.hdfs.web.TestWebHdfsUrl 
   hadoop.fs.http.server.TestHttpFSServerWebServer 
   hadoop.yarn.logaggregation.filecontroller.ifile.TestLogAggregationIndexFileController 
   hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch 
   hadoop.yarn.server.nodemanager.containermanager.TestAuxServices 
   hadoop.yarn.server.nodemanager.containermanager.TestContainerManager 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestContainerExecutor 
   hadoop.yarn.server.nodemanager.TestNodeManagerResync 
   hadoop.yarn.server.webproxy.amfilter.TestAmFilter 
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.timeline.security.TestTimelineAuthenticationFilterForV1 
   hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart 
   

Re: [DISCUSS] Tracing in the Hadoop ecosystem

2018-07-30 Thread Eric Yang
Most code coverage tools can instrument Java classes without making any
source code changes, but tracing a distributed system is more involved because
code executions that cross the network are not easy to match up.
All interactions between sender and receiver carry some form of session id
or sequence id.  Hadoop had some logic to assist the stitching of distributed
interactions together in the clienttrace log.  This information seems to have
been lost in the last 5-6 years of Hadoop's evolution.  HTrace was invented to
fill the void left behind by clienttrace, as a programmable API to send out
useful tracing data for a downstream analytical program to visualize the
interactions.

It is common practice at large companies to enforce logging of the session id
and to write homebrew tools that stitch together the debugging logic for a
specific piece of software.  There is also a growing set of tools from Splunk
and similar companies for writing analytical tools that stitch the views
together.  Hadoop does not seem to be at the top of the list for those
companies to implement tracing, because Hadoop's networking layer is complex
and changes more frequently than desired.

If we go back to the logging approach, instead of the API approach, it will
help someone write the analytical program someday.  The danger of the logging
approach is that it is boring to write LOG.debug() everywhere; we often
forget about it, and log entries get removed.

The API approach can work if real-time interactive tracing can be done.
However, this is hard to realize in Hadoop because a massive amount of
parallel data is difficult to aggregate in real time without hitting timeouts.
It has a higher chance of requiring changes to the network protocol, which
might cause more headache than it's worth.  I am in favor of removing HTrace
support and redoing distributed tracing using the logging approach.
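
A minimal sketch of the logging approach described above, assuming nothing beyond the JDK (the class and field names below are illustrative, not an existing Hadoop API): every log line carries the session id, so a downstream analytical program can group lines from different hosts back into one distributed interaction.

```java
// Illustrative sketch of the logging approach: tag every log line with the
// session id that travels with the request, so an external tool can stitch
// the sender and receiver sides back together. Not an existing Hadoop API.
import java.time.Instant;

public class SessionLog {

    // One parseable line: timestamp, session id, host, event name.
    static String line(String sessionId, String host, String event) {
        return String.format("%s sessionId=%s host=%s event=%s",
                Instant.now(), sessionId, host, event);
    }

    public static void main(String[] args) {
        String sid = "a1b2c3";  // propagated with the RPC, e.g. in a header
        System.out.println(line(sid, "client-01", "sendBlock.start"));
        System.out.println(line(sid, "dn-07", "receiveBlock.start"));
        // A downstream tool greps/groups by sessionId= to rebuild the exchange.
    }
}
```

The fragile part is exactly the discipline noted above: every participating component has to emit the session id field, or the stitched view has holes.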

Regards,
Eric


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org


Re: [DISCUSS] Tracing in the Hadoop ecosystem

2018-07-30 Thread Duo Zhang
Anyway, for HBase, we'd better align our trace library with Hadoop,
especially HDFS. A full trace from the hbase client down to the datanode
will be really helpful for debugging and monitoring.



[jira] [Created] (HADOOP-15643) Review/implement ABFS support for the extra fs ops which some apps (HBase) expect

2018-07-30 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15643:
---

 Summary: Review/implement ABFS support for the extra fs ops which some apps (HBase) expect
 Key: HADOOP-15643
 URL: https://issues.apache.org/jira/browse/HADOOP-15643
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: HADOOP-15407
Reporter: Steve Loughran


One trouble spot with storage connectors is apps which expect rarer APIs, 
e.g. Beam and ByteBufferReadable (BEAM-2790), HBase and CanUnbuffer 
(HADOOP-14748). 

Review ABFS support for these, decide which to implement, and where we don't, 
make sure that the callers can handle their absence.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[DISCUSS] Tracing in the Hadoop ecosystem

2018-07-30 Thread Stack
There is a healthy discussion going on over in HADOOP-15566 on tracing
in the Hadoop ecosystem. It would sit better on a mailing list than in
comments up on JIRA so here's an attempt at porting the chat here.

Background/Context: Bits of Hadoop and HBase had Apache HTrace trace
points added. HTrace was formerly "incubating" at Apache but has since
been retired, moved to Apache Attic. HTrace and the efforts at
instrumenting Hadoop wilted for want of attention/resourcing. Our Todd
Lipcon noticed that the HTrace instrumentation can add friction on
some code paths so can actually be harmful even when disabled.  The
natural follow-on is that we should rip out the tracing of a "dead"
project. This then begs the question: should something replace it,
and if so, what? This is where HADOOP-15566 is at currently.

HTrace took two or three runs, led by various heroes, at building a
trace lib for Hadoop (first). It was trying to build the trace lib, a
store, and a visualizer. Always, it had a mechanism for dumping the
traces out to external systems for storage and viewing (e.g. Zipkin).
HTrace started when there was little else but the, you guessed it,
Google paper that described the Dapper system they had internally.
Since then, the world of tracing has come on in leaps and bounds with
healthy alternatives, communities, and even commercialization.

If interested, take a read over HADOOP-15566. Will try and encourage
participants to move the chat here.

Thanks,
St.Ack




[jira] [Created] (HADOOP-15642) Update to latest/recent version of aws-sdk

2018-07-30 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15642:
---

 Summary: Update to latest/recent version of aws-sdk
 Key: HADOOP-15642
 URL: https://issues.apache.org/jira/browse/HADOOP-15642
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build, fs/s3
Affects Versions: 3.2.0
Reporter: Steve Loughran
 Attachments: Screen Shot 2018-07-30 at 14.11.22.png

Move to a later version of the AWS SDK library for a different set of features 
and issues.

One thing which doesn't work with the version we ship is the ability to create 
assumed-role sessions longer than 1h, as there's a check in the client lib for 
role-duration <= 3600 seconds. I assume more recent SDKs delegate duration 
checks to the far end.

see: [https://aws.amazon.com/about-aws/whats-new/2018/03/longer-role-sessions/]

* Assuming later versions will extend assumed-role life, the docs will need changing.
* A test added in HADOOP-15583 expects an error message if you ask for a duration of 3h; this should act as the test to see what happens.
* This time it would be good to explicitly write down the SDK update process.






[jira] [Created] (HADOOP-15641) Fix ozone docker-compose illegal character in hostname

2018-07-30 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HADOOP-15641:
---

 Summary: Fix ozone docker-compose illegal character in hostname
 Key: HADOOP-15641
 URL: https://issues.apache.org/jira/browse/HADOOP-15641
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


The docker-compose generated hostnames (e.g. ozone_datanode_1.ozone_default) contain underscores, which are illegal in hostnames and generate warnings in gRPC/Ratis.

{code}
scm_1   | Jul 30, 2018 7:08:47 PM 
org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
scm_1   | WARNING: Failed to construct URI for proxy lookup, proceeding 
without proxy
scm_1   | java.net.URISyntaxException: Illegal character in hostname at 
index 13: https://ozone_datanode_1.ozone_default:9858
scm_1   |   at java.net.URI$Parser.fail(URI.java:2848)
scm_1   |   at java.net.URI$Parser.parseHostname(URI.java:3387)
scm_1   |   at java.net.URI$Parser.parseServer(URI.java:3236)
scm_1   |   at java.net.URI$Parser.parseAuthority(URI.java:3155)
scm_1   |   at java.net.URI$Parser.parseHierarchical(URI.java:3097)
scm_1   |   at java.net.URI$Parser.parse(URI.java:3053)
scm_1   |   at java.net.URI.<init>(URI.java:673)
scm_1   |   at 
org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.detectProxy(ProxyDetectorImpl.java:128)
scm_1   |   at 
org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.proxyFor(ProxyDetectorImpl.java:118)
scm_1   |   at 
org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.startNewTransport(InternalSubchannel.java:207)
scm_1   |   at 
org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.obtainActiveTransport(InternalSubchannel.java:188)
scm_1   |   at 
org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$SubchannelImpl.requestConnection(ManagedChannelImpl.java:1130)
scm_1   |   at 
org.apache.ratis.shaded.io.grpc.PickFirstBalancerFactory$PickFirstBalancer.handleResolvedAddressGroups(PickFirstBalancerFactory.java:79)
scm_1   |   at 
org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl$1NamesResolved.run(ManagedChannelImpl.java:1032)
scm_1   |   at 
org.apache.ratis.shaded.io.grpc.internal.ChannelExecutor.drain(ChannelExecutor.java:73)
scm_1   |   at 
org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$LbHelperImpl.runSerialized(ManagedChannelImpl.java:1000)
scm_1   |   at 
org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl.onAddresses(ManagedChannelImpl.java:1044)
scm_1   |   at 
org.apache.ratis.shaded.io.grpc.internal.DnsNameResolver$1.run(DnsNameResolver.java:201)
scm_1   |   at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
scm_1   |   at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
scm_1   |   at java.lang.Thread.run(Thread.java:748)
scm_1   | 

{code}
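
A hedged sketch of the kind of compose change that avoids this (service, image, and network names below are illustrative, not the actual HADOOP-15641 patch): give the service an explicit hostname containing only letters, digits, and hyphens, instead of relying on the generated project_service_N.project_default name.

```yaml
# Illustrative docker-compose fragment: an explicit, underscore-free hostname
# replaces the generated ozone_datanode_1.ozone_default, which java.net.URI
# rejects. Names here are examples only.
version: "3"
services:
  datanode:
    image: apache/hadoop-runner   # illustrative image name
    hostname: datanode-1          # only letters, digits, and hyphens
```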







[jira] [Reopened] (HADOOP-15637) LocalFs#listLocatedStatus does not filter out hidden .crc files

2018-07-30 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan reopened HADOOP-15637:
-

[~xkrogen], [~vagarychen], could you help check the comments from 
[~bibinchundatt] below?

Also, I updated the fix version to 3.1.2, given this doesn't exist in branch-3.1.1.

> LocalFs#listLocatedStatus does not filter out hidden .crc files
> ---
>
> Key: HADOOP-15637
> URL: https://issues.apache.org/jira/browse/HADOOP-15637
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Fix For: 3.2.0, 2.9.2, 2.8.5, 3.0.4, 3.1.2
>
> Attachments: HADOOP-15637.000.patch
>
>
> After HADOOP-7165, {{LocalFs#listLocatedStatus}} incorrectly returns the 
> hidden {{.crc}} files used to store checksum information. This is because 
> HADOOP-7165 implemented {{listLocatedStatus}} on {{FilterFs}}, so the default 
> implementation is no longer used, and {{FilterFs}} directly calls the raw FS 
> since {{listLocatedStatus}} is not defined in {{ChecksumFs}}.
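
For context, a hedged sketch of the convention involved (the predicate below is illustrative, not the HADOOP-15637 patch itself): ChecksumFileSystem stores the checksum for a file named name as a sibling .name.crc, so a correct listing hides exactly those names.

```java
// Illustrative predicate for the hidden checksum-file convention that
// listLocatedStatus should respect: checksums live in ".<name>.crc" files.
// This is a sketch, not the actual HADOOP-15637 fix.
public class CrcFilter {

    static boolean isChecksumFile(String name) {
        return name.startsWith(".") && name.endsWith(".crc");
    }

    public static void main(String[] args) {
        System.out.println(isChecksumFile(".part-0000.crc")); // true: hide it
        System.out.println(isChecksumFile("part-0000"));      // false: list it
    }
}
```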






Discussion on HTrace replacement

2018-07-30 Thread Wei-Chiu Chuang
Greetings,

As you have probably heard, HTrace is no more. We as a community need to
find a solution to replace HTrace. There's currently a great discussion on
a good replacement (HADOOP-15566), and I urge you to
participate if you aren't aware of it yet.

For a bit of history/context: HTrace is a distributed tracing system
specifically designed for Hadoop ecosystem projects, and its tracing code
is embedded in Hadoop and HBase. Over the years, its activity diminished
while other distributed tracing technologies gained adoption. As
a Hadoop developer/supporter, the inability to diagnose performance
issues in a large cluster has always been a major supportability gap
for us, and it's getting worse as users onboard a wider variety of workloads.

I believe you feel the pain as well. Let's settle on a plan and bring this
critically important feature back in a better state.

Best
-- 
A very happy Hadoop contributor


[jira] [Resolved] (HADOOP-15640) Modify WebApps.Builder#at to parse IPv6 address

2018-07-30 Thread Sen Zhao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sen Zhao resolved HADOOP-15640.
---
Resolution: Invalid

> Modify WebApps.Builder#at to parse IPv6 address
> --
>
> Key: HADOOP-15640
> URL: https://issues.apache.org/jira/browse/HADOOP-15640
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sen Zhao
>Priority: Minor
>  Labels: IPv6
>







[jira] [Created] (HADOOP-15640) Modify WebApps.Builder#at to parse IPv6 address

2018-07-30 Thread Sen Zhao (JIRA)
Sen Zhao created HADOOP-15640:
-

 Summary: Modify WebApps.Builder#at to parse IPv6 address
 Key: HADOOP-15640
 URL: https://issues.apache.org/jira/browse/HADOOP-15640
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Sen Zhao
Assignee: Sen Zhao









Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-07-30 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/853/

No changes




-1 overall


The following subsystems voted -1:
docker


Powered by Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-07-30 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/852/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.security.TestRaceWhenRelogin 
   hadoop.util.TestDiskChecker 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes 
   hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy 
   hadoop.yarn.server.resourcemanager.scheduler.fair.policies.TestDominantResourceFairnessPolicy 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestMRTimelineEventHandling 

   cc:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/852/artifact/out/diff-compile-cc-root.txt  [4.0K]

   javac:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/852/artifact/out/diff-compile-javac-root.txt  [332K]

   checkstyle:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/852/artifact/out/diff-checkstyle-root.txt  [4.0K]

   pathlen:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/852/artifact/out/pathlen.txt  [12K]

   pylint:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/852/artifact/out/diff-patch-pylint.txt  [24K]

   shellcheck:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/852/artifact/out/diff-patch-shellcheck.txt  [20K]

   shelldocs:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/852/artifact/out/diff-patch-shelldocs.txt  [16K]

   whitespace:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/852/artifact/out/whitespace-eol.txt  [9.4M]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/852/artifact/out/whitespace-tabs.txt  [1.1M]

   xml:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/852/artifact/out/xml.txt  [4.0K]

   findbugs:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/852/artifact/out/branch-findbugs-hadoop-hdds_client.txt  [56K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/852/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt  [52K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/852/artifact/out/branch-findbugs-hadoop-hdds_framework.txt  [12K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/852/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt  [56K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/852/artifact/out/branch-findbugs-hadoop-hdds_tools.txt  [16K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/852/artifact/out/branch-findbugs-hadoop-ozone_client.txt  [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/852/artifact/out/branch-findbugs-hadoop-ozone_common.txt  [28K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/852/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt  [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/852/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt  [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/852/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt  [8.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/852/artifact/out/branch-findbugs-hadoop-ozone_tools.txt  [4.0K]

   javadoc:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/852/artifact/out/diff-javadoc-javadoc-root.txt  [760K]

   CTEST:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/852/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt  [116K]

   unit:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/852/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt  [188K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/852/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt  [336K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/852/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt  [112K]