[jira] [Created] (YARN-11642) Fix Flaky Test TestTimelineAuthFilterForV2#testPutTimelineEntities

2024-01-05 Thread Shilun Fan (Jira)
Shilun Fan created YARN-11642:
-

 Summary: Fix Flaky Test TestTimelineAuthFilterForV2#testPutTimelineEntities
 Key: YARN-11642
 URL: https://issues.apache.org/jira/browse/YARN-11642
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: timelineservice
Affects Versions: 3.5.0
Reporter: Shilun Fan
Assignee: Shilun Fan


Our unit tests are all executed in parallel, and 
TestTimelineAuthFilterForV2#testPutTimelineEntities reports an error during 
execution:

{code:java}
[main] collector.PerNodeTimelineCollectorsAuxService 
(StringUtils.java:startupShutdownMessage(755)) - failed to register any UNIX 
signal loggers: 
java.lang.IllegalStateException: Can't re-install the signal handlers.
{code}

We can solve this problem by changing the static initialization to creating a new object per test.
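A minimal sketch of the idea (all class names below are hypothetical, not the actual Hadoop test classes): a statically initialized service is shared by tests running in parallel, so the second signal-handler registration fails; constructing a fresh object per test instance avoids the shared one-shot state.

```java
// Hypothetical sketch of the "static initialization -> new object" change.
// Names are illustrative only; they are not the real Hadoop classes.
class SignalLoggerService {
    private boolean started = false;

    // Stands in for the one-shot signal-handler registration that throws
    // "Can't re-install the signal handlers." when invoked twice.
    void start() {
        if (started) {
            throw new IllegalStateException("Can't re-install the signal handlers.");
        }
        started = true;
    }

    boolean isStarted() {
        return started;
    }
}

class TimelineAuthTestHarness {
    // Before the fix: a static field shared by all parallel test instances.
    // static final SignalLoggerService SERVICE = new SignalLoggerService();

    // After the fix: each test instance constructs its own object, so
    // parallel tests no longer race on shared one-shot state.
    private final SignalLoggerService service = new SignalLoggerService();

    SignalLoggerService service() {
        return service;
    }
}
```

With per-test instances, two harnesses can both call start() without tripping the re-install check.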



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64

2024-01-05 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/609/

[Jan 3, 2024, 12:49:52 PM] (github) HADOOP-18971. [ABFS] Read and cache file 
footer with fs.azure.footer.read.request.size (#6270)
[Jan 3, 2024, 1:12:38 PM] (github) Add synchronized on lockLeakCheck() because 
threadCountMap is not thread safe. (#6029)
[Jan 3, 2024, 4:07:51 PM] (github) HDFS-17310. DiskBalancer: Enhance the log 
message for submitPlan (#6391) Contributed by Haiyang Hu.
[Jan 4, 2024, 11:53:47 AM] (github) HDFS-17283. Change the name of variable 
SECOND in HdfsClientConfigKeys. (#6339). Contributed by farmmamba.
[Jan 4, 2024, 10:31:53 PM] (github) HDFS-17322. Renames RetryCache#MAX_CAPACITY 
to be MIN_CAPACITY to fit usage.
[Jan 4, 2024, 10:43:11 PM] (github) HDFS-17306. RBF: Router should not return 
nameservices that does not enable observer nodes in RpcResponseHeaderProto 
(#6385)
[Jan 5, 2024, 4:24:10 AM] (github) HDFS-17290: Adds disconnected client rpc 
backoff metrics (#6359)




-1 overall


The following subsystems voted -1:
blanks hadolint mvnsite pathlen spotbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Redundant nullcheck of oldLock, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)
 Redundant null check at DataStorage.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)
 Redundant null check at DataStorage.java:[line 695] 
   Redundant nullcheck of metaChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long,
 FileInputStream, FileChannel, String) Redundant null check at 
MappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long,
 FileInputStream, FileChannel, String) Redundant null check at 
MappableBlockLoader.java:[line 138] 
   Redundant nullcheck of blockChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at MemoryMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at MemoryMappableBlockLoader.java:[line 75] 
   Redundant nullcheck of blockChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at NativePmemMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at NativePmemMappableBlockLoader.java:[line 85] 
   Redundant nullcheck of metaChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$PmemMappedRegion,
 long, FileInputStream, FileChannel, String) Redundant null check at 
NativePmemMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$PmemMappedRegion,
 long, FileInputStream, FileChannel, String) Redundant null check at 
NativePmemMappableBlockLoader.java:[line 130] 

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2024-01-05 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1462/

[Jan 4, 2024, 11:53:47 AM] (github) HDFS-17283. Change the name of variable 
SECOND in HdfsClientConfigKeys. (#6339). Contributed by farmmamba.
[Jan 4, 2024, 10:31:53 PM] (github) HDFS-17322. Renames RetryCache#MAX_CAPACITY 
to be MIN_CAPACITY to fit usage.
[Jan 4, 2024, 10:43:11 PM] (github) HDFS-17306. RBF: Router should not return 
nameservices that does not enable observer nodes in RpcResponseHeaderProto 
(#6385)




-1 overall


The following subsystems voted -1:
blanks hadolint pathlen spotbugs xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Dead store to sharedDirs in 
org.apache.hadoop.hdfs.server.namenode.NameNode.format(Configuration, boolean, 
boolean) At 
NameNode.java:org.apache.hadoop.hdfs.server.namenode.NameNode.format(Configuration,
 boolean, boolean) At NameNode.java:[line 1383] 

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   
org.apache.hadoop.yarn.client.api.impl.TimelineConnector.DEFAULT_SOCKET_TIMEOUT 
isn't final but should be At TimelineConnector.java:be At 
TimelineConnector.java:[line 82] 

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
   
org.apache.hadoop.yarn.client.api.impl.TimelineConnector.DEFAULT_SOCKET_TIMEOUT 
isn't final but should be At TimelineConnector.java:be At 
TimelineConnector.java:[line 82] 

spotbugs :

   module:hadoop-hdfs-project 
   Dead store to sharedDirs in 
org.apache.hadoop.hdfs.server.namenode.NameNode.format(Configuration, boolean, 
boolean) At 
NameNode.java:org.apache.hadoop.hdfs.server.namenode.NameNode.format(Configuration,
 boolean, boolean) At NameNode.java:[line 1383] 

spotbugs :

   module:hadoop-yarn-project 
   
org.apache.hadoop.yarn.client.api.impl.TimelineConnector.DEFAULT_SOCKET_TIMEOUT 
isn't final but should be At TimelineConnector.java:be At 
TimelineConnector.java:[line 82] 

spotbugs :

   module:root 
   Dead store to sharedDirs in 
org.apache.hadoop.hdfs.server.namenode.NameNode.format(Configuration, boolean, 
boolean) At 
NameNode.java:org.apache.hadoop.hdfs.server.namenode.NameNode.format(Configuration,
 boolean, boolean) At NameNode.java:[line 1383] 
   
org.apache.hadoop.yarn.client.api.impl.TimelineConnector.DEFAULT_SOCKET_TIMEOUT 
isn't final but should be At TimelineConnector.java:be At 
TimelineConnector.java:[line 82] 
  

   cc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1462/artifact/out/results-compile-cc-root.txt
 [96K]

   javac:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1462/artifact/out/results-compile-javac-root.txt
 [12K]

   blanks:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1462/artifact/out/blanks-eol.txt
 [15M]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1462/artifact/out/blanks-tabs.txt
 [2.0M]

   checkstyle:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1462/artifact/out/results-checkstyle-root.txt
 [13M]

   hadolint:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1462/artifact/out/results-hadolint.txt
 [24K]

   pathlen:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1462/artifact/out/results-pathlen.txt
 [16K]

   pylint:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1462/artifact/out/results-pylint.txt
 [20K]

   shellcheck:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1462/artifact/out/results-shellcheck.txt
 [24K]

   xml:

  

Re: Re: [DISCUSS] Release Hadoop 3.4.0

2024-01-05 Thread Ayush Saxena
Thanx @slfan1989 for volunteering. Please remove this [1] from the new
branches-3.4 & 3.4.0 when you create them as part of preparing for the
release; otherwise it will keep applying trunk labels to backport PRs to
those branches as well.

There are some tickets marked as Critical/Blocker for 3.4.0 [2]; please
check whether they are actually critical, and if so, we should get them
in. Most of them did not look relevant to me at first glance.

-Ayush


[1] https://github.com/apache/hadoop/blob/trunk/.github/labeler.yml
[2] 
https://issues.apache.org/jira/issues/?jql=project%20in%20(HDFS%2C%20HADOOP%2C%20MAPREDUCE%2C%20YARN)%20AND%20priority%20in%20(Blocker%2C%20Critical)%20AND%20resolution%20%3D%20Unresolved%20AND%20affectedVersion%20in%20(3.4.0%2C%203.4.1)


On Thu, 4 Jan 2024 at 19:57, slfan1989  wrote:
>
> Hey all,
>
> We are planning to release Hadoop 3.4.0 based on trunk. I made some 
> preparations and changed the target version in JIRA for non-blockers in 
> HADOOP, HDFS, YARN, and MAPREDUCE from 3.4.0 to 3.5.0. When creating a 
> new JIRA, the target version can be set directly to 3.5.0.
>
> If you have any thoughts, suggestions, or concerns, please feel free to share 
> them.
>
> Best Regards,
> Shilun Fan.
>
> > +1 from me.
> >> It will include the new AWS V2 SDK upgrade as well.
>
> > On Wed, Jan 3, 2024 at 6:35 AM Xiaoqiao He wrote:
>
> > >
> > > I think the release discussion can be in public ML?
> >
> > Good idea. cc common-dev/hdfs-dev/yarn-dev/mapreduce-dev ML.
> >
> > Best Regards,
> > - He Xiaoqiao
> >
> > On Tue, Jan 2, 2024 at 6:18 AM Ayush Saxena wrote:
> >
> > > +1 from me as well.
> > >
> > > We should definitely attempt to upgrade the thirdparty version for
> > > 3.4.0 & check if there are any pending critical/blocker issues as
> > > well.
> > >
> > > I think the release discussion can be in public ML?
> > >
> > > -Ayush
> > >
> > > On Mon, 1 Jan 2024 at 18:25, Steve Loughran wrote:
> > > >
> > > > +1 from me
> > > >
> > > > ant and maven repo to build and validate things, including making arm
> > > > binaries if you work from an arm macbook.
> > > > https://github.com/steveloughran/validate-hadoop-client-artifacts
> > > >
> > > > do we need to publish an up to date thirdparty release for this?
> > > >
> > > >
> > > >
> > > > On Mon, 25 Dec 2023 at 16:06, slfan1989 wrote:
> > > >
> > > > > Dear PMC Members,
> > > > >
> > > > > First of all, Merry Christmas to everyone!
> > > > >
> > > > > In our community discussions, we collectively finalized the plan to
> > > release
> > > > > Hadoop 3.4.0 based on the current trunk branch. I am applying to take
> > > on
> > > > > the responsibility for the initial release of version 3.4.0, and the
> > > entire
> > > > > process is set to officially commence in January 2024.
> > > > > I have created a new JIRA: HADOOP-19018. Release 3.4.0.
> > > > >
> > > > > The specific work plan includes:
> > > > >
> > > > > 1. Following the guidance in the HowToRelease document, completing
> > all
> > > the
> > > > > relevant tasks required for the release of version 3.4.0.
> > > > > 2. Pointing the trunk branch to 3.5.0-SNAPSHOT.
> > > > > 3. Currently, the Fix Versions of all tasks merged into trunk are set
> > > as
> > > > > 3.4.0; I will move them to 3.5.0.
> > > > >
> > > > > Confirmed features to be included in the release:
> > > > >
> > > > > 1. Enhanced functionality for YARN Federation.
> > > > > 2. Optimization of HDFS RBF.
> > > > > 3. Introduction of fine-grained global locks for DataNodes.
> > > > > 4. Improvements in the stability of HDFS EC, and more.
> > > > > 5. Fixes for important CVEs.
> > > > >
> > > > > If you have any thoughts, suggestions, or concerns, please feel free
> > to
> > > > > share them.
> > > > >
> > > > > Looking forward to a successful release!
> > > > >
> > > > > Best Regards,
> > > > > Shilun Fan.
> > > > >
> > >
> >




[jira] [Created] (YARN-11641) Can't update a queue hierarchy in absolute mode when the configured capacities are zero

2024-01-05 Thread Tamas Domok (Jira)
Tamas Domok created YARN-11641:
--

 Summary: Can't update a queue hierarchy in absolute mode when the 
configured capacities are zero
 Key: YARN-11641
 URL: https://issues.apache.org/jira/browse/YARN-11641
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler
Affects Versions: 3.4.0
Reporter: Tamas Domok
Assignee: Tamas Domok


h2. Error symptoms

It is not possible to modify a queue hierarchy in absolute mode when the parent 
or every child queue of the parent has 0 min resource configured.

{noformat}
2024-01-05 15:38:59,016 INFO 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager:
 Initialized queue: root.a.c
2024-01-05 15:38:59,016 ERROR 
org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices: Exception 
thrown when modifying configuration.
java.io.IOException: Failed to re-init queues : Parent=root.a: When absolute 
minResource is used, we must make sure both parent and child all use absolute 
minResource
{noformat}

h2. Reproduction

capacity-scheduler.xml
{code:xml}
<?xml version="1.0"?>
<configuration>
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default,a</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.capacity</name>
    <value>[memory=40960, vcores=16]</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>[memory=1024, vcores=1]</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
    <value>[memory=1024, vcores=1]</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.a.capacity</name>
    <value>[memory=0, vcores=0]</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.a.maximum-capacity</name>
    <value>[memory=39936, vcores=15]</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.a.queues</name>
    <value>b,c</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.a.b.capacity</name>
    <value>[memory=0, vcores=0]</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.a.b.maximum-capacity</name>
    <value>[memory=39936, vcores=15]</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.a.c.capacity</name>
    <value>[memory=0, vcores=0]</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.a.c.maximum-capacity</name>
    <value>[memory=39936, vcores=15]</value>
  </property>
</configuration>
{code}

{code:xml}
<?xml version="1.0"?>
<sched-conf>
  <update-queue>
    <queue-name>root.a</queue-name>
    <params>
      <entry>
        <key>capacity</key>
        <value>[memory=1024,vcores=1]</value>
      </entry>
      <entry>
        <key>maximum-capacity</key>
        <value>[memory=39936,vcores=15]</value>
      </entry>
    </params>
  </update-queue>
</sched-conf>
{code}

{code}
$ curl -X PUT -H 'Content-Type: application/xml' -d @updatequeue.xml 
http://localhost:8088/ws/v1/cluster/scheduler-conf\?user.name\=yarn
Failed to re-init queues : Parent=root.a: When absolute minResource is used, we 
must make sure both parent and child all use absolute minResource
{code}

h2. Root cause

setChildQueues is called during reinit, where:

{code:java}
  void setChildQueues(Collection<CSQueue> childQueues) throws IOException {
    writeLock.lock();
    try {
      boolean isLegacyQueueMode =
          queueContext.getConfiguration().isLegacyQueueMode();
      if (isLegacyQueueMode) {
        QueueCapacityType childrenCapacityType =
            getCapacityConfigurationTypeForQueues(childQueues);
        QueueCapacityType parentCapacityType =
            getCapacityConfigurationTypeForQueues(ImmutableList.of(this));

        if (childrenCapacityType == QueueCapacityType.ABSOLUTE_RESOURCE
            || parentCapacityType == QueueCapacityType.ABSOLUTE_RESOURCE) {
          // We don't allow any mixed absolute + {weight, percentage} between
          // children and parent
          if (childrenCapacityType != parentCapacityType && !this.getQueuePath()
              .equals(CapacitySchedulerConfiguration.ROOT)) {
            throw new IOException("Parent=" + this.getQueuePath()
                + ": When absolute minResource is used, we must make sure both "
                + "parent and child all use absolute minResource");
          }
{code}

The parent or children capacity type will be considered PERCENTAGE, because 
getCapacityConfigurationTypeForQueues fails to detect absolute mode here:

{code:java}
if (!queue.getQueueResourceQuotas().getConfiguredMinResource(nodeLabel)
.equals(Resources.none())) {
  absoluteMinResSet = true;
{code}
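The flaw can be illustrated with a simplified sketch (the Res type below is a hypothetical stand-in for Hadoop's Resource, not the real class): an explicitly configured absolute capacity of [memory=0, vcores=0] compares equal to Resources.none(), so the zero-valued queue is never flagged as absolute mode.

```java
// Simplified illustration of the detection flaw; Res is a hypothetical
// stand-in for org.apache.hadoop.yarn.api.records.Resource.
class Res {
    static final Res NONE = new Res(0, 0);  // plays the role of Resources.none()

    final long memory;
    final int vcores;

    Res(long memory, int vcores) {
        this.memory = memory;
        this.vcores = vcores;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Res)) {
            return false;
        }
        Res r = (Res) o;
        return r.memory == memory && r.vcores == vcores;
    }

    @Override
    public int hashCode() {
        return Long.hashCode(memory) * 31 + vcores;
    }
}

class AbsoluteModeCheck {
    // Mirrors the quoted logic: a queue is flagged as absolute mode only if
    // its configured min resource differs from "none" -- so a configured
    // [memory=0, vcores=0] is silently treated as percentage mode instead.
    static boolean flaggedAsAbsolute(Res configuredMin) {
        return !configuredMin.equals(Res.NONE);
    }
}
```

A queue with [memory=1024, vcores=1] is detected, while the zero-configured queue from the reproduction above is not, which triggers the mixed-mode IOException.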

h2. Possible fixes

Possible fix in AbstractParentQueue.getCapacityConfigurationTypeForQueues using 
the capacityVector:
{code:java}
for (CSQueue queue : queues) {
  for (String nodeLabel : queueCapacities.getExistingNodeLabels()) {
    Set<QueueCapacityVector.ResourceUnitCapacityType> definedCapacityTypes =
        queue.getConfiguredCapacityVector(nodeLabel).getDefinedCapacityTypes();
    if (definedCapacityTypes.size() == 1) {
      QueueCapacityVector.ResourceUnitCapacityType next =
          definedCapacityTypes.iterator().next();
      if (Objects.requireNonNull(next) == PERCENTAGE) {
        percentageIsSet = true;
        diagMsg.append("{Queue=").append(queue.getQueuePath()).append(", label=")
            .append(nodeLabel).append(" uses percentage mode}. ");
      } else if (next == QueueCapacityVector.ResourceUnitCapacityType.ABSOLUTE) {
        absoluteMinResSet = true;
{code}

Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2024-01-05 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1262/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.TestFileUtil 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.TestLeaseRecovery2 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.hdfs.TestFileLengthOnClusterRestart 
   hadoop.hdfs.server.namenode.ha.TestPipelinesFailover 
   hadoop.hdfs.TestDFSInotifyEventInputStream 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes 
   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.mapreduce.lib.input.TestLineRecordReader 
   hadoop.mapred.TestLineRecordReader 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
   hadoop.yarn.sls.TestSLSRunner 
   
hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceAllocator
 
   
hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceHandlerImpl
 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore 
   
hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1262/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1262/artifact/out/diff-compile-javac-root.txt
  [488K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1262/artifact/out/diff-checkstyle-root.txt
  [14M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1262/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1262/artifact/out/patch-mvnsite-root.txt
  [572K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1262/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1262/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1262/artifact/out/diff-patch-shellcheck.txt
  [72K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1262/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1262/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1262/artifact/out/patch-javadoc-root.txt
  [36K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1262/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [220K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1262/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [464K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1262/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [36K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1262/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [16K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1262/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
  [104K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1262/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt
  [20K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1262/artifact/out/patch-unit-hadoop-tools_hadoop-resourceestimator.txt