Re: [VOTE] Merging branch HDFS-7240 to trunk

2018-02-27 Thread Andrew Wang
Hi Jitendra and all,

Thanks for putting this together. I caught up on the discussion on JIRA and the
document at HDFS-10419, and still have the same concerns raised earlier about
merging the Ozone branch to trunk.

To recap these questions/concerns at a very high level:

* Wouldn't Ozone benefit from being a separate project?
* Why should it be merged now?

I still believe that both Ozone and Hadoop would benefit from Ozone being a
separate project, and that there is no pressing reason to merge Ozone/HDSL now.

The primary reason I've heard for merging is that Ozone is at a stage where it's
ready for user feedback. Second, that it needs to be merged to start on the NN
refactoring for HDFS-on-HDSL.

First, without HDFS-on-HDSL support, users are testing against the Ozone object
storage interface. Ozone and HDSL themselves are implemented as separate masters
and new functionality bolted onto the datanode. It also doesn't look like HDFS
in terms of API or featureset; yes, it speaks FileSystem, but so do many
out-of-tree storage systems like S3, Ceph, Swift, ADLS, etc. Ozone/HDSL does not
support popular HDFS features like erasure coding, encryption,
high-availability, snapshots, hflush/hsync (and thus HBase), or APIs like
WebHDFS or NFS. This means that Ozone feels like a new, different system that
could reasonably be deployed and tested separately from HDFS. It's unlikely to
replace many of today's HDFS deployments, and from what I understand, Ozone was
not designed to do this.

Second, the NameNode refactoring for HDFS-on-HDSL is by itself a major
undertaking. The discussion on HDFS-10419 is still ongoing, so it's not clear
what the ultimate refactoring will be, but I do know that the earlier FSN/BM
refactoring during 2.x was very painful (introducing new bugs and making
backports difficult) and probably should have been deferred to a new major
release instead. I think this refactoring is important for the long-term
maintainability of the NN and worth pursuing, but as a Hadoop 4.0 item. Merging
HDSL is also not a prerequisite for starting this refactoring. Really, I see the
refactoring as the prerequisite for HDFS-on-HDSL to be possible.

Finally, I earnestly believe that Ozone/HDSL itself would benefit from being a
separate project. Ozone could release faster and iterate more quickly if it
wasn't hampered by Hadoop's release schedule and security and compatibility
requirements. There are also publicity and community benefits; it's an
opportunity to build a community focused on the novel capabilities and
architectural choices of Ozone/HDSL. There are examples of other projects that
were "incubated" on a branch in the Hadoop repo before being spun off to great
success.

In conclusion, I'd like to see Ozone succeeding and thriving as a separate
project. Meanwhile, we can work on the HDFS refactoring required to separate the
FSN and BM and make it pluggable. At that point (likely in the Hadoop 4
timeframe), we'll be ready to pursue HDFS-on-HDSL integration.
Best,
Andrew
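
To make the FileSystem point above concrete, here is a minimal, hedged Java
sketch: any backend registered for a URI scheme is reached through the same
org.apache.hadoop.fs.FileSystem abstraction, so "speaking FileSystem" does not
by itself imply HDFS feature parity. The hostnames, bucket name, and schemes
below are made-up examples, and the snippet assumes the relevant connectors
(for example, hadoop-aws for s3a://) are on the classpath and configured.

    // Illustrative sketch only; not from the email above. Hostnames and the
    // bucket name are assumptions. Requires the matching filesystem connectors
    // (e.g. hadoop-aws for s3a://) on the classpath with working credentials.
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class FileSystemAbstractionSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // The same client code reaches very different storage systems,
        // selected purely by URI scheme: HDFS here, an object store there.
        FileSystem hdfs = FileSystem.get(
            URI.create("hdfs://namenode.example.com:8020/"), conf);
        FileSystem s3 = FileSystem.get(
            URI.create("s3a://example-bucket/"), conf);

        // Identical calls regardless of the backend behind the scheme.
        for (FileSystem fs : new FileSystem[] { hdfs, s3 }) {
          System.out.println(fs.getUri() + " -> "
              + fs.listStatus(new Path("/")).length + " entries under /");
        }
      }
    }

As the email argues, this interface is implemented by many out-of-tree stores,
which by itself says nothing about support for encryption, snapshots,
hflush/hsync, or the other HDFS features listed above.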

On Mon, Feb 26, 2018 at 1:18 PM, Jitendra Pandey 
wrote:

> Dear folks,
>We would like to start a vote to merge HDFS-7240 branch into
> trunk. The context can be reviewed in the DISCUSSION thread, and in the
> jiras (See references below).
>
> HDFS-7240 introduces Hadoop Distributed Storage Layer (HDSL), which is
> a distributed, replicated block layer.
> The old HDFS namespace and NN can be connected to this new block layer
> as we have described in HDFS-10419.
> We also introduce a key-value namespace called Ozone built on HDSL.
>
> The code is in a separate module and is turned off by default. In a
> secure setup, HDSL and Ozone daemons cannot be started.
>
> The detailed documentation is available at
>  https://cwiki.apache.org/confluence/display/HADOOP/
> Hadoop+Distributed+Storage+Layer+and+Applications
>
>
> I will start with my vote.
> +1 (binding)
>
>
> Discussion Thread:
>   https://s.apache.org/7240-merge
>   https://s.apache.org/4sfU
>
> Jiras:
>https://issues.apache.org/jira/browse/HDFS-7240
>https://issues.apache.org/jira/browse/HDFS-10419
>https://issues.apache.org/jira/browse/HDFS-13074
>https://issues.apache.org/jira/browse/HDFS-13180
>
>
> Thanks
> jitendra
>
>
>
>
>
> DISCUSSION THREAD SUMMARY :
>
> On 2/13/18, 6:28 PM, "sanjay Radia" 
> wrote:
>
> Sorry, the formatting got messed up by my email client. Here
> it is again.
>
>
> Dear Hadoop Community Members,
>
>We had multiple community discussions, a few meetings
> in smaller groups and also jira discussions with respect to 

[jira] [Created] (MAPREDUCE-7062) MR job tags not compatible with YARN ATSv2 flow names, flow run ids and flow versions

2018-02-27 Thread Charan Hebri (JIRA)
Charan Hebri created MAPREDUCE-7062:
---

 Summary: MR job tags not compatible with YARN ATSv2 flow names, 
flow run ids and flow versions
 Key: MAPREDUCE-7062
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7062
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Charan Hebri


When applications are submitted to YARN, tags are generated in the format 
TIMELINE_FLOW_NAME_TAG:{flow_name},TIMELINE_FLOW_VERSION_TAG:{flow_version},TIMELINE_FLOW_RUN_ID_TAG:{flow_run_id}

However, MR applications don't follow this format; the tags submitted via 
the property mapreduce.job.tags are of the format
{flow_name},{flow_version},{flow_run_id}

Due to this, YARN falls back to default values for flow name, flow version and 
flow run id which in turn are used in ATSv2.

There are two approaches that could be taken to make MR tags compatible with 
ATSv2:

Fix in the MR code
------------------
Prefix any tags specified with the ones needed by the YARN Timeline Service v2. 
But MR is legacy code, so this change could affect how users currently use 
these tags.

Add a note in mapred-default.xml
--------------------------------
Add a note to the description of the mapreduce.job.tags property mentioning 
that, for ATSv2 purposes, the prefixes need to be added to the tag values.
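
To make the two formats above concrete, here is a hedged Java sketch of the
job-submission side: the flow values travel in mapreduce.job.tags, and ATSv2
only recognizes them when they carry the TIMELINE_*_TAG prefixes quoted in this
issue. The flow name, version, and run id values are made-up examples; this is
not the MAPREDUCE-7062 patch itself.

    // Hedged sketch only -- not the MAPREDUCE-7062 fix. It shows an MR job
    // whose tags use the TIMELINE_*_TAG prefixes that YARN ATSv2 expects,
    // instead of the bare {flow_name},{flow_version},{flow_run_id} form.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class Atsv2FlowTagSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Example flow values; real jobs would derive these from their pipeline.
        String flowName = "daily-clickstream-rollup";
        String flowVersion = "1.0";
        String flowRunId = "1519776000000";

        // Prefixed tags, so ATSv2 does not fall back to default flow metadata.
        conf.set("mapreduce.job.tags",
            "TIMELINE_FLOW_NAME_TAG:" + flowName + ","
          + "TIMELINE_FLOW_VERSION_TAG:" + flowVersion + ","
          + "TIMELINE_FLOW_RUN_ID_TAG:" + flowRunId);

        Job job = Job.getInstance(conf, "atsv2-flow-tag-sketch");
        // ... set mapper/reducer, input/output, then job.waitForCompletion(true)
      }
    }

Whether MR should inject these prefixes itself (the first approach) or merely
document them for users (the second approach) is exactly the open question in
this issue.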






Re: [DISCUSS] 2.9+ stabilization branch

2018-02-27 Thread Andrew Wang
Hi Konst and all,

Is there a list of 3.0 specific upgrade concerns that you could share? I
understand that a new major release comes with risk simply due to the
amount of code change, but we've done our best as a community to alleviate
these concerns through much improved integration testing and compatibility
efforts like the shaded client and revamped compat guide. I'd love to hear
about what else we can do here to improve our 3.x upgrade story.

I understand the need for a bridge release as an upgrade path to 3.x, but I
want to make sure we don't end up needing a 2.11 or 2.12 also. The scope
mentioned here isn't really bridging improvements, which in my mind are
compatibility improvements that help with running 2.x and 3.x clients
concurrently to enable a later upgrade to just 3.x. Including new features
makes this harder (or at least not easier), and means more ongoing
maintenance work on 2.x.

So, a hearty +1 to your closing statement: if we're going to do a bridge
release, let's do it right and do it once.

Best,
Andrew

On Tue, Feb 27, 2018 at 6:21 PM, Konstantin Shvachko 
wrote:

> Thanks Subru for initiating the thread about GPU support.
> I think the path of taking 2.9 as a base for 2.10 and adding new resource
> types into it is quite reasonable.
> That way we can combine stabilization effort on 2.9 with GPUs.
>
> Arun, upgrading Java is probably a separate topic.
> We should discuss it on a separate followup thread if we agree to add GPU
> support into 2.10.
>
> Andrew, we actually ran a small 3.0 cluster to experiment with TensorFlow
> on YARN with GPU resources. It worked well! Therefore the interest.
> Although given the breadth (and the quantity) of our use cases it is
> infeasible to jump directly to 3.0, as Jonathan explained.
> A transitional stage such as 2.10 will be required. Probably the same for
> many other big-cluster folks.
> It would be great if people who run different Hadoop versions <= 2.8 could
> converge on the 2.10 bridge, to help cross over to 3.x.
> GPU support would be a serious catalyst for us to move forward, which I
> also heard from other organizations interested in ML.
>
> Thanks,
> --Konstantin
>
> On Tue, Feb 27, 2018 at 1:28 PM, Andrew Wang 
> wrote:
>
>> Hi Arun/Subru,
>>
>> Bumping the minimum Java version is a major change, and incompatible for
>> users who are unable to upgrade their JVM version. We're beyond the EOL
>> for
>> Java 7, but as we know from our experience with Java 6, there are plenty
>> of
>> users who stick on old Java versions. Bumping the Java version also makes
>> backports more difficult, and we're still maintaining a number of older
>> 2.x
>> releases. I think this is too big for a minor release, particularly when
>> we
>> have 3.x as an option that fully supports Java 8.
>>
>> What's the rationale for bumping it here?
>>
>> I'm also curious if there are known issues with 3.x that we can fix to
>> make
>> 3.x upgrades smoother. I would prefer improving the upgrade experience to
>> backporting major features to 2.x since 3.x is meant to be the delivery
>> vehicle for new features beyond the ones named here.
>>
>> Best,
>> Andrew
>>
>> On Tue, Feb 27, 2018 at 11:01 AM, Arun Suresh  wrote:
>>
>> > Hello folks
>> >
>> > We also think this bridging release opens up an opportunity to bump the
>> > java version in branch-2 to java 8.
>> > Would really love to hear thoughts on that.
>> >
>> > Cheers
>> > -Arun/Subru
>> >
>> >
>> > On Mon, Feb 26, 2018 at 5:18 PM, Jonathan Hung 
>> > wrote:
>> >
>> > > Hi Subru,
>> > >
>> > > Thanks for starting the discussion.
>> > >
>> > > We (LinkedIn) have an immediate need for resource types and native GPU
>> > > support. Given we are running 2.7 on our main clusters, we decided to
>> > avoid
>> > > deploying hadoop 3.x on our machine learning clusters (and having to
>> > > support two very different hadoop versions). Since for us there is
>> > > considerable risk and work involved in upgrading to hadoop 3, I think
>> > > having a branch-2.10 bridge release for porting important hadoop 3
>> > features
>> > > to branch-2 is a good idea.
>> > >
>> > > Thanks,
>> > >
>> > >
>> > > Jonathan Hung
>> > >
>> > > On Mon, Feb 26, 2018 at 2:37 PM, Subru Krishnan 
>> > wrote:
>> > >
>> > > > Folks,
>> > > >
>> > > > We (i.e. Microsoft) have started stabilization of 2.9 for our
>> > production
>> > > > deployment. During planning, we realized that we need to backport
>> 3.x
>> > > > features to support GPUs (and more resource types like network IO)
>> > > natively
>> > > > as part of the upgrade. We'd like to share that work with the
>> > community.
>> > > >
>> > > > Instead of stabilizing the base release and cherry-picking fixes
>> back
>> > to
>> > > > Apache, we want to work publicly and push fixes directly into
>> > > > trunk/.../branch-2 for a stable 2.10.0 release. Our goal is to
>> create a
>> > > > bridge 

Re: [DISCUSS] 2.9+ stabilization branch

2018-02-27 Thread Konstantin Shvachko
Thanks Subru for initiating the thread about GPU support.
I think the path of taking 2.9 as a base for 2.10 and adding new resource
types into it is quite reasonable.
That way we can combine stabilization effort on 2.9 with GPUs.

Arun, upgrading Java is probably a separate topic.
We should discuss it on a separate followup thread if we agree to add GPU
support into 2.10.

Andrew, we actually ran a small 3.0 cluster to experiment with TensorFlow
on YARN with GPU resources. It worked well! Therefore the interest.
Although given the breadth (and the quantity) of our use cases it is
infeasible to jump directly to 3.0, as Jonathan explained.
A transitional stage such as 2.10 will be required. Probably the same for
many other big-cluster folks.
It would be great if people who run different Hadoop versions <= 2.8 could
converge on the 2.10 bridge, to help cross over to 3.x.
GPU support would be a serious catalyst for us to move forward, which I
also heard from other organizations interested in ML.

Thanks,
--Konstantin

On Tue, Feb 27, 2018 at 1:28 PM, Andrew Wang 
wrote:

> Hi Arun/Subru,
>
> Bumping the minimum Java version is a major change, and incompatible for
> users who are unable to upgrade their JVM version. We're beyond the EOL for
> Java 7, but as we know from our experience with Java 6, there are plenty of
> users who stick on old Java versions. Bumping the Java version also makes
> backports more difficult, and we're still maintaining a number of older 2.x
> releases. I think this is too big for a minor release, particularly when we
> have 3.x as an option that fully supports Java 8.
>
> What's the rationale for bumping it here?
>
> I'm also curious if there are known issues with 3.x that we can fix to make
> 3.x upgrades smoother. I would prefer improving the upgrade experience to
> backporting major features to 2.x since 3.x is meant to be the delivery
> vehicle for new features beyond the ones named here.
>
> Best,
> Andrew
>
> On Tue, Feb 27, 2018 at 11:01 AM, Arun Suresh  wrote:
>
> > Hello folks
> >
> > We also think this bridging release opens up an opportunity to bump the
> > java version in branch-2 to java 8.
> > Would really love to hear thoughts on that.
> >
> > Cheers
> > -Arun/Subru
> >
> >
> > On Mon, Feb 26, 2018 at 5:18 PM, Jonathan Hung 
> > wrote:
> >
> > > Hi Subru,
> > >
> > > Thanks for starting the discussion.
> > >
> > > We (LinkedIn) have an immediate need for resource types and native GPU
> > > support. Given we are running 2.7 on our main clusters, we decided to
> > avoid
> > > deploying hadoop 3.x on our machine learning clusters (and having to
> > > support two very different hadoop versions). Since for us there is
> > > considerable risk and work involved in upgrading to hadoop 3, I think
> > > having a branch-2.10 bridge release for porting important hadoop 3
> > features
> > > to branch-2 is a good idea.
> > >
> > > Thanks,
> > >
> > >
> > > Jonathan Hung
> > >
> > > On Mon, Feb 26, 2018 at 2:37 PM, Subru Krishnan 
> > wrote:
> > >
> > > > Folks,
> > > >
> > > > We (i.e. Microsoft) have started stabilization of 2.9 for our
> > production
> > > > deployment. During planning, we realized that we need to backport 3.x
> > > > features to support GPUs (and more resource types like network IO)
> > > natively
> > > > as part of the upgrade. We'd like to share that work with the
> > community.
> > > >
> > > > Instead of stabilizing the base release and cherry-picking fixes back
> > to
> > > > Apache, we want to work publicly and push fixes directly into
> > > > trunk/.../branch-2 for a stable 2.10.0 release. Our goal is to
> create a
> > > > bridge release for our production clusters to the 3.x series and to
> > > address
> > > > scalability problems in large clusters (N*10k nodes). As we find
> > issues,
> > > we
> > > > will file JIRAs and track resolution of significant
> regressions/faults
> > in
> > > > wiki. Moreover, LinkedIn also has committed plans for a production
> > > > deployment of the same branch. We welcome broad participation,
> > > particularly
> > > > since we'll be stabilizing relatively new features.
> > > >
> > > > The exact list of features we would like to backport in YARN are:
> > > >
> > > >- Support for Resource types [1][2]
> > > >- Native support for GPUs[3]
> > > >- Absolute Resource configuration in CapacityScheduler [4]
> > > >
> > > >
> > > > With regards to HDFS, we are currently looking at mainly fixes to
> > Router
> > > > based Federation and Windows specific fixes which should anyways flow
> > > > normally.
> > > >
> > > > Thoughts?
> > > >
> > > > Thanks,
> > > > Subru/Arun
> > > >
> > > > [1] https://www.mail-archive.com/yarn-dev@hadoop.apache.org/
> > > msg27786.html
> > > > [2] https://www.mail-archive.com/yarn-dev@hadoop.apache.org/
> > > msg28281.html
> > > > [3] https://issues.apache.org/jira/browse/YARN-6223
> > > > [4] 

Re: [DISCUSS] 2.9+ stabilization branch

2018-02-27 Thread Andrew Wang
Hi Arun/Subru,

Bumping the minimum Java version is a major change, and incompatible for
users who are unable to upgrade their JVM version. We're beyond the EOL for
Java 7, but as we know from our experience with Java 6, there are plenty of
users who stick on old Java versions. Bumping the Java version also makes
backports more difficult, and we're still maintaining a number of older 2.x
releases. I think this is too big for a minor release, particularly when we
have 3.x as an option that fully supports Java 8.

What's the rationale for bumping it here?

I'm also curious if there are known issues with 3.x that we can fix to make
3.x upgrades smoother. I would prefer improving the upgrade experience to
backporting major features to 2.x since 3.x is meant to be the delivery
vehicle for new features beyond the ones named here.

Best,
Andrew

On Tue, Feb 27, 2018 at 11:01 AM, Arun Suresh  wrote:

> Hello folks
>
> We also think this bridging release opens up an opportunity to bump the
> java version in branch-2 to java 8.
> Would really love to hear thoughts on that.
>
> Cheers
> -Arun/Subru
>
>
> On Mon, Feb 26, 2018 at 5:18 PM, Jonathan Hung 
> wrote:
>
> > Hi Subru,
> >
> > Thanks for starting the discussion.
> >
> > We (LinkedIn) have an immediate need for resource types and native GPU
> > support. Given we are running 2.7 on our main clusters, we decided to
> avoid
> > deploying hadoop 3.x on our machine learning clusters (and having to
> > support two very different hadoop versions). Since for us there is
> > considerable risk and work involved in upgrading to hadoop 3, I think
> > having a branch-2.10 bridge release for porting important hadoop 3
> features
> > to branch-2 is a good idea.
> >
> > Thanks,
> >
> >
> > Jonathan Hung
> >
> > On Mon, Feb 26, 2018 at 2:37 PM, Subru Krishnan 
> wrote:
> >
> > > Folks,
> > >
> > > We (i.e. Microsoft) have started stabilization of 2.9 for our
> production
> > > deployment. During planning, we realized that we need to backport 3.x
> > > features to support GPUs (and more resource types like network IO)
> > natively
> > > as part of the upgrade. We'd like to share that work with the
> community.
> > >
> > > Instead of stabilizing the base release and cherry-picking fixes back
> to
> > > Apache, we want to work publicly and push fixes directly into
> > > trunk/.../branch-2 for a stable 2.10.0 release. Our goal is to create a
> > > bridge release for our production clusters to the 3.x series and to
> > address
> > > scalability problems in large clusters (N*10k nodes). As we find
> issues,
> > we
> > > will file JIRAs and track resolution of significant regressions/faults
> in
> > > wiki. Moreover, LinkedIn also has committed plans for a production
> > > deployment of the same branch. We welcome broad participation,
> > particularly
> > > since we'll be stabilizing relatively new features.
> > >
> > > The exact list of features we would like to backport in YARN are:
> > >
> > >- Support for Resource types [1][2]
> > >- Native support for GPUs[3]
> > >- Absolute Resource configuration in CapacityScheduler [4]
> > >
> > >
> > > With regards to HDFS, we are currently looking at mainly fixes to
> Router
> > > based Federation and Windows specific fixes which should anyways flow
> > > normally.
> > >
> > > Thoughts?
> > >
> > > Thanks,
> > > Subru/Arun
> > >
> > > [1] https://www.mail-archive.com/yarn-dev@hadoop.apache.org/
> > msg27786.html
> > > [2] https://www.mail-archive.com/yarn-dev@hadoop.apache.org/
> > msg28281.html
> > > [3] https://issues.apache.org/jira/browse/YARN-6223
> > > [4] https://www.mail-archive.com/yarn-dev@hadoop.apache.org/
> > msg28772.html
> > >
> >
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-02-27 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/705/

[Feb 26, 2018 4:28:04 PM] (kihwal) HDFS-12070. Failed block recovery leaves 
files open indefinitely and at
[Feb 26, 2018 8:15:16 PM] (kkaranasos) YARN-7921. Transform a 
PlacementConstraint to a string expression.
[Feb 26, 2018 9:56:34 PM] (arp) HDFS-12781. After Datanode down, In Namenode UI 
Datanode tab is throwing
[Feb 26, 2018 9:56:53 PM] (arp) HADOOP-15265. Exclude json-smart explicitly in 
hadoop-auth avoid being
[Feb 26, 2018 10:32:46 PM] (billie) MAPREDUCE-7010. Make Job History File 
Permissions configurable.
[Feb 26, 2018 11:13:41 PM] (weiy) HDFS-13187. RBF: Fix Routers information 
shown in the web UI.
[Feb 26, 2018 11:49:01 PM] (eyang) YARN-7963.  Updated MockServiceAM unit test 
to prevent test hang.   
[Feb 27, 2018 12:15:00 AM] (shv) HDFS-13145. SBN crash when transition to ANN 
with in-progress edit




-1 overall


The following subsystems voted -1:
findbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
   org.apache.hadoop.yarn.api.records.Resource.getResources() may expose 
internal representation by returning Resource.resources At Resource.java:[line 234] 

Failed junit tests :

   hadoop.crypto.key.kms.server.TestKMS 
   hadoop.hdfs.TestPersistBlocks 
   hadoop.hdfs.TestDFSUpgradeFromImage 
   hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 
   hadoop.hdfs.TestFileAppend 
   hadoop.hdfs.TestDFSRemove 
   hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 
   hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 
   hadoop.hdfs.TestFileCreationDelete 
   hadoop.hdfs.TestFileCreation 
   hadoop.hdfs.TestHDFSFileSystemContract 
   hadoop.hdfs.TestDFSClientRetries 
   hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 
   hadoop.hdfs.TestDFSStripedOutputStream 
   hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 
   hadoop.hdfs.TestSmallBlock 
   hadoop.hdfs.TestSetrepDecreasing 
   hadoop.hdfs.TestHFlush 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 
   hadoop.hdfs.TestErasureCodingPolicyWithSnapshot 
   hadoop.hdfs.TestReadStripedFileWithDecoding 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.TestSeekBug 
   hadoop.hdfs.TestDFSInputStream 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 
   hadoop.hdfs.TestErasureCodingPolicies 
   hadoop.hdfs.TestClose 
   hadoop.hdfs.TestReconstructStripedFile 
   hadoop.hdfs.TestBlockMissingException 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 
   hadoop.hdfs.TestDFSStorageStateRecovery 
   hadoop.hdfs.TestSecureEncryptionZoneWithKMS 
   hadoop.hdfs.TestRestartDFS 
   hadoop.hdfs.TestDecommissionWithStriped 
   hadoop.hdfs.TestErasureCodingExerciseAPIs 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.TestReplaceDatanodeOnFailure 
   hadoop.fs.http.server.TestHttpFSServerWebServer 
   hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage 
   hadoop.yarn.client.TestApplicationMasterServiceProtocolForTimelineV2 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.yarn.service.TestYarnNativeServices 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/705/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/705/artifact/out/diff-compile-javac-root.txt
  [280K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/705/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/705/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/705/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs: