Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-06-20 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/817/

[Jun 19, 2018 5:38:13 PM] (eyang) HADOOP-15527.  Improve delay check for stopping processes.




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager

   Inconsistent synchronization of org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.reloadListener; locked 75% of time. Unsynchronized access at AllocationFileLoaderService.java:[line 117]
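
   For context, this FindBugs warning fires when a field is written while holding a lock in most places but read without one elsewhere. A minimal illustration of the pattern (a hypothetical class, not the actual AllocationFileLoaderService code):

{code:java}
// Hypothetical sketch of the "inconsistent synchronization" pattern:
// the field is written under the object lock but read without it.
public class ListenerHolder {
  interface Listener { void onReload(); }

  private Listener reloadListener;

  public synchronized void setListener(Listener l) {
    reloadListener = l;            // synchronized write (the "locked 75% of time" side)
  }

  public void fireReload() {
    if (reloadListener != null) {  // unsynchronized read -> the FindBugs warning
      reloadListener.onReload();
    }
  }
}
{code}

   The usual remedies are to make the field volatile or to take the same lock on the read side.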

Failed junit tests :

   hadoop.hdfs.web.TestWebHdfsTimeouts
   hadoop.hdfs.client.impl.TestBlockReaderLocal
   hadoop.yarn.client.api.impl.TestAMRMProxy
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageDomain
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageSchema
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageEntities
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps
   hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
   hadoop.mapred.TestMRTimelineEventHandling

   cc:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/817/artifact/out/diff-compile-cc-root.txt  [4.0K]

   javac:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/817/artifact/out/diff-compile-javac-root.txt  [352K]

   checkstyle:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/817/artifact/out/diff-checkstyle-root.txt  [4.0K]

   pathlen:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/817/artifact/out/pathlen.txt  [12K]

   pylint:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/817/artifact/out/diff-patch-pylint.txt  [24K]

   shellcheck:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/817/artifact/out/diff-patch-shellcheck.txt  [20K]

   shelldocs:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/817/artifact/out/diff-patch-shelldocs.txt  [16K]

   whitespace:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/817/artifact/out/whitespace-eol.txt  [9.4M]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/817/artifact/out/whitespace-tabs.txt  [1.1M]

   xml:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/817/artifact/out/xml.txt  [4.0K]

   findbugs:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/817/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-warnings.html  [8.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/817/artifact/out/branch-findbugs-hadoop-hdds_client.txt  [72K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/817/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt  [56K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/817/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt  [68K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/817/artifact/out/branch-findbugs-hadoop-hdds_tools.txt  [16K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/817/artifact/out/branch-findbugs-hadoop-ozone_client.txt  [8.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/817/artifact/out/branch-findbugs-hadoop-ozone_common.txt  [24K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/817/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt  [8.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/817/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt  [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/817/artifact/out/branch-findbugs-hadoop-ozone_tools.txt  [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/817/artifact/out/branch-findbugs-hadoop-tools_hadoop-ozone.txt  [8.0K]

   javadoc:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/817/artifact/out/diff-javadoc-javadoc-root.txt  [760K]

   unit:

   

[jira] [Created] (YARN-8445) YARN native service doesn't allow service name equals to component name

2018-06-20 Thread Chandni Singh (JIRA)
Chandni Singh created YARN-8445:
---

 Summary: YARN native service doesn't allow service name equals to 
component name
 Key: YARN-8445
 URL: https://issues.apache.org/jira/browse/YARN-8445
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Chandni Singh
Assignee: Chandni Singh
 Fix For: 3.1.1


Currently, YARN native service does not support a service name that is the same as a component name; specifying one causes the AM launch to fail with a message like:

{code}
org.apache.hadoop.metrics2.MetricsException: Metrics source tf-zeppelin already exists!
 at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:152)
 at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:125)
 at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:229)
 at org.apache.hadoop.yarn.service.ServiceMetrics.register(ServiceMetrics.java:75)
 at org.apache.hadoop.yarn.service.component.Component.<init>(Component.java:193)
 at org.apache.hadoop.yarn.service.ServiceScheduler.createAllComponents(ServiceScheduler.java:552)
 at org.apache.hadoop.yarn.service.ServiceScheduler.buildInstance(ServiceScheduler.java:251)
 at org.apache.hadoop.yarn.service.ServiceScheduler.serviceInit(ServiceScheduler.java:283)
 at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
 at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
 at org.apache.hadoop.yarn.service.ServiceMaster.serviceInit(ServiceMaster.java:142)
 at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
 at org.apache.hadoop.yarn.service.ServiceMaster.main(ServiceMaster.java:338)
2018-06-18 06:50:39,473 [main] INFO service.ServiceScheduler - Stopping service scheduler
{code}

It would be better to add this check in the validation phase instead of failing the AM; a sketch follows.
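
A minimal sketch of such a validation-phase check, assuming hypothetical accessors on the service definition (illustrative only, not the committed patch):

{code:java}
// Hypothetical validation-phase check; the Service/Component accessors are
// assumed for illustration and this is not the committed fix.
static void validateServiceName(Service service) {
  for (Component comp : service.getComponents()) {
    if (service.getName().equals(comp.getName())) {
      throw new IllegalArgumentException("Service name '" + service.getName()
          + "' must not equal component name '" + comp.getName()
          + "': both would register a metrics source under the same name");
    }
  }
}
{code}

Rejecting the spec up front keeps the failure at submission time, where the user sees it, rather than deep inside AM initialization.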





Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-06-20 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/503/

[Jun 18, 2018 11:45:50 PM] (aajisaka) YARN-7668. Remove unused variables from ContainerLocalizer
[Jun 19, 2018 5:38:13 PM] (eyang) HADOOP-15527.  Improve delay check for stopping processes.




-1 overall


The following subsystems voted -1:
compile mvninstall pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
   hadoop.fs.contract.rawlocal.TestRawlocalContractAppend 
   hadoop.fs.TestFileUtil 
   hadoop.fs.TestFsShellCopy 
   hadoop.fs.TestFsShellList 
   hadoop.fs.TestLocalFileSystem 
   hadoop.http.TestHttpServer 
   hadoop.http.TestHttpServerLogs 
   hadoop.io.compress.TestCodec 
   hadoop.io.nativeio.TestNativeIO 
   hadoop.ipc.TestIPC 
   hadoop.ipc.TestSocketFactory 
   hadoop.metrics2.impl.TestStatsDMetrics 
   hadoop.security.TestSecurityUtil 
   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.security.token.TestDtUtilShell 
   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.util.TestNativeCodeLoader 
   hadoop.fs.viewfs.TestViewFileSystemHdfs 
   hadoop.hdfs.client.impl.TestBlockReaderLocal 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped 
   hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl 
   hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage 
   hadoop.hdfs.server.datanode.TestBlockScanner 
   hadoop.hdfs.server.datanode.TestDataNodeFaultInjector 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancer 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   hadoop.hdfs.server.namenode.TestFsck 
   hadoop.hdfs.server.namenode.TestReencryption 
   hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs 
   hadoop.hdfs.TestDFSShell 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.hdfs.TestDFSUpgradeFromImage 
   hadoop.hdfs.TestFetchImage 
   hadoop.hdfs.TestHDFSFileSystemContract 
   hadoop.hdfs.TestLeaseRecovery 
   hadoop.hdfs.TestLocalDFS 
   hadoop.hdfs.TestMaintenanceState 
   hadoop.hdfs.TestPread 
   hadoop.hdfs.TestReconstructStripedFile 
   hadoop.hdfs.TestSecureEncryptionZoneWithKMS 
   hadoop.hdfs.TestTrashWithSecureEncryptionZones 
   hadoop.hdfs.tools.TestDFSAdmin 
   hadoop.hdfs.web.TestWebHDFS 
   hadoop.hdfs.web.TestWebHdfsUrl 
   hadoop.fs.http.server.TestHttpFSServerWebServer
   hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch
   hadoop.yarn.server.nodemanager.containermanager.TestAuxServices
   hadoop.yarn.server.nodemanager.containermanager.TestContainerManager
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService
   hadoop.yarn.server.nodemanager.TestContainerExecutor
   hadoop.yarn.server.nodemanager.TestNodeManagerResync
   hadoop.yarn.server.webproxy.amfilter.TestAmFilter
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
   hadoop.yarn.server.timeline.security.TestTimelineAuthenticationFilterForV1
   hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart
   hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.TestFSSchedulerConfigurationStore
   hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.TestLeveldbConfigurationStore
   hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler
   hadoop.yarn.server.resourcemanager.scheduler.capacity.TestQueueManagementDynamicEditPolicy
   hadoop.yarn.server.resourcemanager.scheduler.constraint.TestPlacementProcessor
   hadoop.yarn.server.resourcemanager.scheduler.fair.TestAllocationFileLoaderService
   hadoop.yarn.server.resourcemanager.TestResourceTrackerService
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore
   hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity

[jira] [Created] (YARN-8444) NodeResourceMonitor crashes on bad swapFree value

2018-06-20 Thread Jim Brennan (JIRA)
Jim Brennan created YARN-8444:
-

 Summary: NodeResourceMonitor crashes on bad swapFree value
 Key: YARN-8444
 URL: https://issues.apache.org/jira/browse/YARN-8444
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.2, 2.8.3
Reporter: Jim Brennan
Assignee: Jim Brennan


Saw this on a node that was having difficulty preempting containers; the NodeResourceMonitor must not exit. The system was above 99% memory used at the time, so this may only happen when normal preemption isn't working right, but we should fix it, since this monitor is critical to the health of the node.

 

{noformat}
2018-06-04 14:28:08,539 [Container Monitor] DEBUG ContainersMonitorImpl.audit: Memory usage of ProcessTree 110564 for container-id container_e24_1526662705797_129647_01_004791: 2.1 GB of 3.5 GB physical memory used; 5.0 GB of 7.3 GB virtual memory used
2018-06-04 14:28:10,622 [Node Resource Monitor] ERROR yarn.YarnUncaughtExceptionHandler: Thread Thread[Node Resource Monitor,5,main] threw an Exception.
java.lang.NumberFormatException: For input string: "18446744073709551596"
 at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 at java.lang.Long.parseLong(Long.java:592)
 at java.lang.Long.parseLong(Long.java:631)
 at org.apache.hadoop.util.SysInfoLinux.readProcMemInfoFile(SysInfoLinux.java:257)
 at org.apache.hadoop.util.SysInfoLinux.getAvailablePhysicalMemorySize(SysInfoLinux.java:591)
 at org.apache.hadoop.util.SysInfoLinux.getAvailableVirtualMemorySize(SysInfoLinux.java:601)
 at org.apache.hadoop.yarn.util.ResourceCalculatorPlugin.getAvailableVirtualMemorySize(ResourceCalculatorPlugin.java:74)
 at org.apache.hadoop.yarn.server.nodemanager.NodeResourceMonitorImpl$MonitoringThread.run(NodeResourceMonitorImpl.java:193)
2018-06-04 14:28:30,747 [org.apache.hadoop.util.JvmPauseMonitor$Monitor@226eba67] INFO util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 9330ms
{noformat}
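
The failing string 18446744073709551596 is 2^64 - 20, i.e. a wrapped-around unsigned 64-bit value exported by the kernel, which overflows {{Long.parseLong}}. A hedged sketch of a defensive parse (illustrating the failure mode, not the committed fix):

{code:java}
// Sketch of a defensive /proc/meminfo read, assuming it replaces the bare
// Long.parseLong call in readProcMemInfoFile. An unparseable value (e.g.
// "18446744073709551596" == 2^64 - 20, an unsigned wrap-around) is treated
// as 0 rather than crashing the monitoring thread.
static long safeParseMemInfoValue(String raw) {
  try {
    return Long.parseLong(raw.trim());
  } catch (NumberFormatException e) {
    return 0L;
  }
}
{code}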





[jira] [Created] (YARN-8443) Cluster metrics have wrong Total VCores when there is reserved container for CapacityScheduler

2018-06-20 Thread Tao Yang (JIRA)
Tao Yang created YARN-8443:
--

 Summary: Cluster metrics have wrong Total VCores when there is 
reserved container for CapacityScheduler
 Key: YARN-8443
 URL: https://issues.apache.org/jira/browse/YARN-8443
 Project: Hadoop YARN
  Issue Type: Bug
  Components: webapp
Affects Versions: 3.1.0, 2.9.0, 3.2.0
Reporter: Tao Yang
Assignee: Tao Yang


Cluster metrics on the web UI show the wrong Total VCores when there are reserved containers for the CapacityScheduler.
Reference code:
{code:java|title=ClusterMetricsInfo.java}
if (rs instanceof CapacityScheduler) {
  CapacityScheduler cs = (CapacityScheduler) rs;
  this.totalMB = availableMB + allocatedMB + reservedMB;
  this.totalVirtualCores =
  availableVirtualCores + allocatedVirtualCores + containersReserved;
   ...
}
{code}
The root of the problem is the calculation of {{totalVirtualCores}}: {{containersReserved}} is the number of reserved containers, not the number of reserved VCores. The correct calculation should be {{this.totalVirtualCores = availableVirtualCores + allocatedVirtualCores + reservedVirtualCores;}}, as sketched below.
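
In context, the corrected branch would look roughly like this (assuming a {{reservedVirtualCores}} value is available alongside {{reservedMB}}; a sketch, not the committed patch):

{code:java|title=ClusterMetricsInfo.java}
if (rs instanceof CapacityScheduler) {
  CapacityScheduler cs = (CapacityScheduler) rs;
  this.totalMB = availableMB + allocatedMB + reservedMB;
  // Sum reserved VCores, not the count of reserved containers.
  this.totalVirtualCores =
      availableVirtualCores + allocatedVirtualCores + reservedVirtualCores;
   ...
}
{code}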


