[jira] [Resolved] (YARN-10057) Upgrade the dependencies managed by yarnpkg

2019-12-24 Thread Akira Ajisaka (Jira)


 [ https://issues.apache.org/jira/browse/YARN-10057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka resolved YARN-10057.
--
Fix Version/s: 3.3.0
   Resolution: Fixed

Merged the PR into trunk.

> Upgrade the dependencies managed by yarnpkg
> ---
>
> Key: YARN-10057
> URL: https://issues.apache.org/jira/browse/YARN-10057
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: build, yarn-ui-v2
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0
>
>
> Run "yarn upgrade" to update the dependencies managed by yarnpkg.
> Dependabot automatically created the following pull requests; this issue 
> tracks closing them.
> * https://github.com/apache/hadoop/pull/1741
> * https://github.com/apache/hadoop/pull/1742
> * https://github.com/apache/hadoop/pull/1743
> * https://github.com/apache/hadoop/pull/1744
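For reference, the upgrade itself can be sketched roughly as follows (the module path below is an assumption about the source layout, not taken from this thread):

```shell
# Sketch only: refresh yarn.lock to the newest versions allowed by package.json.
cd hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp
yarn upgrade                 # rewrites yarn.lock in place
git diff --stat yarn.lock    # review the dependency bumps before committing
```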



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-12-24 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/546/

No changes




-1 overall


The following subsystems voted -1:
docker


Powered by Apache Yetus 0.8.0   http://yetus.apache.org


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-12-24 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1361/

[Dec 24, 2019 8:08:34 PM] (shv) HDFS-15076. Fix tests that hold FSDirectory 
lock, without holding


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-12-24 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1360/

[Dec 23, 2019 5:08:14 AM] (github) YARN-10054. Upgrade yarn to 1.21.1 in 
Dockerfile. (#1777)
[Dec 23, 2019 10:45:47 AM] (github) YARN-10055. bower install fails. (#1778)




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 

   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 
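The two WorkerId findings above describe a classic `equals()` pitfall: casting the argument without first checking for null or a wrong type. A minimal illustrative sketch of the usual fix (field names are hypothetical; this is not the actual mawo code):

```java
import java.util.Objects;

public class WorkerIdSketch {
    private final String id;

    WorkerIdSketch(String id) { this.id = id; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        // instanceof rejects both null and wrong-type arguments before the cast.
        if (!(o instanceof WorkerIdSketch)) return false;
        return Objects.equals(id, ((WorkerIdSketch) o).id);
    }

    @Override
    public int hashCode() { return Objects.hash(id); }

    public static void main(String[] args) {
        WorkerIdSketch a = new WorkerIdSketch("w1");
        System.out.println(a.equals(null));                     // false, no NPE
        System.out.println(a.equals("w1"));                     // false, wrong type
        System.out.println(a.equals(new WorkerIdSketch("w1"))); // true
    }
}
```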

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   Redundant nullcheck of dir, which is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:[line 66] 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At 
CosNInputStream.java:[line 87] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, 
byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, 
File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long):in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long): new String(byte[]) At 
CosNativeFileSystemStore.java:[line 178] 
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, 
String, String, int) may fail to clean up java.io.InputStream Obligation to 
clean up resource created at CosNativeFileSystemStore.java:fail to clean up 
java.io.InputStream Obligation to clean up resource created at 
CosNativeFileSystemStore.java:[line 252] is not discharged 
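The default-encoding and stream-cleanup findings above correspond to two well-known Java patterns: pass an explicit charset instead of relying on the platform default, and use try-with-resources so a stream is closed on every path. A small illustrative sketch of both fixes (this is not the actual hadoop-cos code):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class CosFindingsSketch {
    public static void main(String[] args) throws IOException {
        // Fix 1: new String(byte[]) uses the platform default charset, which
        // varies by host. Naming the charset makes the result deterministic.
        byte[] raw = {104, 105};                              // "hi" in UTF-8/ASCII
        String decoded = new String(raw, StandardCharsets.UTF_8);
        System.out.println(decoded);

        // Fix 2: try-with-resources closes the stream even if the work inside
        // throws, discharging the cleanup obligation FindBugs reports.
        try (InputStream in = new ByteArrayInputStream(raw)) {
            System.out.println("read=" + in.read());          // first byte: 104
        } // in.close() runs here on every path
    }
}
```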

Failed junit tests :

   hadoop.hdfs.TestDFSStripedOutputStream 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy 
   hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor 
   hadoop.hdfs.TestReconstructStripedFile 
   hadoop.hdfs.TestFileChecksum 
   hadoop.hdfs.server.namenode.TestRedudantBlocks 
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
   hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots 
   hadoop.hdfs.TestDecommissionWithStriped 
   hadoop.hdfs.TestFileChecksumCompositeCrc 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem 
   
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageDomain 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun 
   
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity 
   

[jira] [Created] (YARN-10060) HistoryServer may recover too slowly because JobHistory initialization is slow when there are too many jobs

2019-12-24 Thread zhoukang (Jira)
zhoukang created YARN-10060:
---

 Summary: HistoryServer may recover too slowly because JobHistory 
initialization is slow when there are too many jobs
 Key: YARN-10060
 URL: https://issues.apache.org/jira/browse/YARN-10060
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: yarn
Reporter: zhoukang
Assignee: zhoukang


As shown below, it took more than 7 minutes before the service began listening on its port:

{code:java}
2019-12-24,20:01:37,272 INFO org.apache.zookeeper.ClientCnxn: EventThread shut 
down
2019-12-24,20:01:47,354 INFO 
org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager: Initializing Existing 
Jobs...

2019-12-24,20:08:29,589 INFO org.apache.zookeeper.ClientCnxn: Opening socket 
connection to server zjy-hadoop-prc-ct07.bj/10.152.50.2:11000. Will not attempt 
to authenticate using SASL (unknown error)
2019-12-24,20:08:29,589 INFO org.apache.zookeeper.ClientCnxn: Socket connection 
established to zjy-hadoop-prc-ct07.bj/10.152.50.2:11000, initiating session
2019-12-24,20:08:29,590 INFO org.apache.zookeeper.ClientCnxn: Session 
establishment complete on server zjy-hadoop-prc-ct07.bj/10.152.50.2:11000, 
sessionid = 0x66d1a13e596ddc9, negotiated timeout = 5000
2019-12-24,20:08:29,593 INFO org.apache.zookeeper.ZooKeeper: Session: 
0x66d1a13e596ddc9 closed
2019-12-24,20:08:29,593 INFO org.apache.zookeeper.ClientCnxn: EventThread shut 
down
2019-12-24,20:08:29,655 INFO 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage: CachedHistoryStorage 
Init
2019-12-24,20:08:29,681 INFO org.apache.hadoop.ipc.CallQueueManager: Using 
callQueue class java.util.concurrent.LinkedBlockingQueue
2019-12-24,20:08:29,715 INFO org.apache.hadoop.ipc.CallQueueManager: Using 
callQueue class java.util.concurrent.LinkedBlockingQueue
2019-12-24,20:08:29,800 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: 
loaded properties from hadoop-metrics2.properties
2019-12-24,20:08:29,943 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: 
Scheduled snapshot period at 10 second(s).
2019-12-24,20:08:29,943 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: 
JobHistoryServer metrics system started
2019-12-24,20:08:29,950 INFO 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
 Updating the current master key for generating delegation tokens
2019-12-24,20:08:29,951 INFO 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
 Starting expired delegation token remover thread, tokenRemoverScanInterval=60 
min(s)
2019-12-24,20:08:29,952 INFO 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
 Updating the current master key for generating delegation tokens
2019-12-24,20:08:30,015 INFO org.apache.hadoop.http.HttpRequestLog: Http 
request log for http.requests.jobhistory is not defined
2019-12-24,20:08:30,025 INFO org.apache.hadoop.http.HttpServer2: Added global 
filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2019-12-24,20:08:30,027 INFO org.apache.hadoop.http.HttpServer2: Added filter 
static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context jobhistory
2019-12-24,20:08:30,027 INFO org.apache.hadoop.http.HttpServer2: Added filter 
static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context static
2019-12-24,20:08:30,030 INFO org.apache.hadoop.http.HttpServer2: adding path 
spec: /jobhistory/*
2019-12-24,20:08:30,030 INFO org.apache.hadoop.http.HttpServer2: adding path 
spec: /ws/*
2019-12-24,20:08:30,057 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to 
port 20901
2019-12-24,20:08:30,939 INFO org.apache.hadoop.yarn.webapp.WebApps: Web app 
/jobhistory started at 20901
2019-12-24,20:08:31,177 INFO org.apache.hadoop.yarn.webapp.WebApps: Registered 
webapp guice modules
2019-12-24,20:08:31,187 INFO org.apache.hadoop.ipc.CallQueueManager: Using 
callQueue class java.util.concurrent.LinkedBlockingQueue
2019-12-24,20:08:31,187 INFO org.apache.hadoop.ipc.CallQueueManager: Using 
callQueue class java.util.concurrent.LinkedBlockingQueue
2019-12-24,20:08:31,189 INFO 
org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding 
protocol org.apache.hadoop.mapreduce.v2.api.HSClientProtocolPB to the server
2019-12-24,20:08:31,216 INFO 
org.apache.hadoop.mapreduce.v2.hs.HistoryClientService: Instantiated 
HistoryClientService at zjy-hadoop-prc-ct11.bj/10.152.50.42:20900
2019-12-24,20:08:31,344 INFO 
org.apache.hadoop.yarn.logaggregation.AggregatedLogDeletionService: aggregated 
log deletion started.
2019-12-24,20:08:31,690 INFO org.apache.zookeeper.ZooKeeper: Initiating client 
connection, connectString=zjyprc.observer.zk.hadoop.srv:11000 
sessionTimeout=5000 watcher=org
{code}
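The long gap between "Initializing Existing Jobs..." and the next log line suggests the serial scan of existing job history files dominates startup. As a hedged illustration only (this is not the actual HistoryFileManager code; the per-job cost is simulated), a serial scan grows linearly with job count, while a bounded thread pool can overlap the work:

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class InitExistingJobsSketch {
    // Pretend each job's history file takes ~2ms to parse and index.
    static void initOneJob(int jobId) {
        try { Thread.sleep(2); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) throws Exception {
        List<Integer> jobs = IntStream.range(0, 200).boxed().collect(Collectors.toList());

        // Serial scan: total time is roughly jobCount * perJobCost.
        long t0 = System.nanoTime();
        jobs.forEach(InitExistingJobsSketch::initOneJob);
        long serialMs = (System.nanoTime() - t0) / 1_000_000;

        // Bounded pool: the same work overlaps across 8 threads.
        ExecutorService pool = Executors.newFixedThreadPool(8);
        long t1 = System.nanoTime();
        pool.invokeAll(jobs.stream()
                .map(j -> (Callable<Void>) () -> { initOneJob(j); return null; })
                .collect(Collectors.toList()));
        pool.shutdown();
        long parallelMs = (System.nanoTime() - t1) / 1_000_000;

        System.out.println("serial=" + serialMs + "ms parallel=" + parallelMs + "ms");
    }
}
```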

{code:java}
protected void serviceInit(Configuration conf) throws Exception {
  LOG.info("JobHistory Init");
  this.conf = conf;
  this.appID =