[jira] [Commented] (YARN-9399) Yarn Client may use stale DNS to connect to RM

2019-03-22 Thread Fengnan Li (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799531#comment-16799531
 ] 

Fengnan Li commented on YARN-9399:
--

[~elgoiri] [~xianliangz] This is an interesting issue.

I think the solution depends on where the cache is kept. After a little 
research, I found this article: 
[https://www-01.ibm.com/support/docview.wss?uid=swg21207534]

It seems the cache lives inside InetAddress, which InetSocketAddress also uses.

[~xianliangz] Can we try setting the DNS cache TTL on the JVM so the cached 
entries expire much more quickly?

Some operating systems also do DNS caching themselves; if that's the case here, 
we probably need to find a system-level configuration to tune as well.
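
For reference, a minimal sketch of the JVM-level knob (the 
networkaddress.cache.ttl and networkaddress.cache.negative.ttl security 
property names are standard JDK; the values below are only illustrative, and 
they must be set before the first name lookup):

{code}
import java.security.Security;

public class DnsCacheTtlExample {
  public static void main(String[] args) {
    // Cache successful lookups for at most 30 seconds (the JDK caches
    // forever when a security manager is installed), and do not cache
    // failed lookups at all.
    Security.setProperty("networkaddress.cache.ttl", "30");
    Security.setProperty("networkaddress.cache.negative.ttl", "0");
  }
}
{code}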

> Yarn Client may use stale DNS to connect to RM
> --
>
> Key: YARN-9399
> URL: https://issues.apache.org/jira/browse/YARN-9399
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.1
>Reporter: Leon zhang
>Assignee: Íñigo Goiri
>Priority: Major
>  Labels: patch
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> This happens more frequently when running YARN in Kubernetes. When the YARN 
> client tries to connect to the RM and the RM's DNS name is not resolvable, due 
> to kube-dns failing or not being ready yet, the client initializes itself with 
> an unresolved InetSocketAddress in RMProxy#newProxyInstance(). The connection 
> to the RM then fails with UnknownHostException. The client retries the 
> connection through RetryProxy, but it always reuses the cached, unresolved 
> InetSocketAddress, so the retry can never succeed. The same bug also appears 
> when the RM is rescheduled to another Kubernetes node, which changes the RM's 
> IP. Currently the workaround is to restart the YARN client. 
> This issue happens with both HA and non-HA RM. HDFS has a similar issue: 
> [https://github.com/apache-spark-on-k8s/kubernetes-HDFS/issues/48]
> I propose adding a new RMFailoverProxyProvider, called 
> AutoRefreshRMFailoverProxyProvider, which resolves the DNS name in the 
> overridden getProxy(). This way, RetryProxy can re-resolve DNS each time it 
> retries. 
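
For illustration, a minimal sketch of the proposed idea (not the actual patch; 
the class and field names below are invented for this example):

{code}
import java.net.InetSocketAddress;

// Sketch: re-resolve the RM address on every getProxy() call, so a stale or
// unresolved address cached at startup is never reused across retries.
public class AutoRefreshExample {
  private final String rmHost;  // hypothetical: RM host name from configuration
  private final int rmPort;     // hypothetical: RM port from configuration

  public AutoRefreshExample(String rmHost, int rmPort) {
    this.rmHost = rmHost;
    this.rmPort = rmPort;
  }

  public InetSocketAddress resolveRmAddress() {
    // Constructing a new InetSocketAddress triggers name resolution again
    // (subject to the JVM's InetAddress cache), instead of reusing an
    // address object that failed to resolve earlier.
    return new InetSocketAddress(rmHost, rmPort);
  }
}
{code}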






[jira] [Commented] (YARN-7129) Application Catalog for YARN applications

2019-03-22 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799530#comment-16799530
 ] 

Eric Yang commented on YARN-7129:
-

Patch 034 fixed the mvn site issue.  The failed HDFS test cases are not related 
to this patch.

> Application Catalog for YARN applications
> -
>
> Key: YARN-7129
> URL: https://issues.apache.org/jira/browse/YARN-7129
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: applications
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN Appstore.pdf, YARN-7129.001.patch, 
> YARN-7129.002.patch, YARN-7129.003.patch, YARN-7129.004.patch, 
> YARN-7129.005.patch, YARN-7129.006.patch, YARN-7129.007.patch, 
> YARN-7129.008.patch, YARN-7129.009.patch, YARN-7129.010.patch, 
> YARN-7129.011.patch, YARN-7129.012.patch, YARN-7129.013.patch, 
> YARN-7129.014.patch, YARN-7129.015.patch, YARN-7129.016.patch, 
> YARN-7129.017.patch, YARN-7129.018.patch, YARN-7129.019.patch, 
> YARN-7129.020.patch, YARN-7129.021.patch, YARN-7129.022.patch, 
> YARN-7129.023.patch, YARN-7129.024.patch, YARN-7129.025.patch, 
> YARN-7129.026.patch, YARN-7129.027.patch, YARN-7129.028.patch, 
> YARN-7129.029.patch, YARN-7129.030.patch, YARN-7129.031.patch, 
> YARN-7129.032.patch, YARN-7129.033.patch, YARN-7129.034.patch
>
>
> YARN native services provides a web services API to improve the usability of 
> application deployment on Hadoop using a collection of Docker images.  It 
> would be nice to have an application catalog system which provides an 
> editorial and search interface for YARN applications.  This would improve the 
> usability of YARN for managing the life cycle of applications.  






[jira] [Commented] (YARN-7129) Application Catalog for YARN applications

2019-03-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799529#comment-16799529
 ] 

Hadoop QA commented on YARN-7129:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 13m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 40s{color} | {color:orange} root: The patch generated 3 new + 4 unchanged - 
0 fixed = 7 total (was 4) {color} |
| {color:green}+1{color} | {color:green} hadolint {color} | {color:green}  0m  
1s{color} | {color:green} There were no new hadolint issues. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 13m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:orange}-0{color} | {color:orange} shelldocs {color} | {color:orange}  
0m 13s{color} | {color:orange} The patch generated 136 new + 104 unchanged - 0 
fixed = 240 total (was 104) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
14s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-docker
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
34s{color} | {color:green} the patch passed {color} |
|| || || || 

[jira] [Commented] (YARN-9227) DistributedShell RelativePath is not removed at end

2019-03-22 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799525#comment-16799525
 ] 

Prabhu Joseph commented on YARN-9227:
-

[~giovanni.fumarola] Can you review this jira when you get time?  It fixes the 
cleanup of the staging directory for a DistributedShell job.
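
For reference, a minimal sketch of the kind of cleanup involved (not the actual 
patch; the path is taken from the listing in the issue description, and the 
standard Hadoop FileSystem API is assumed):

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CleanupStagingDirExample {
  public static void main(String[] args) throws IOException {
    // The per-application relative path that holds the AM jar and the
    // localized shell command file, as shown in the issue description.
    Path stagingDir = new Path(
        "/user/ambari-qa/DistributedShell/application_1542665708563_0017");
    FileSystem fs = FileSystem.get(new Configuration());
    fs.delete(stagingDir, true);  // recursive delete of the whole directory
  }
}
{code}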

> DistributedShell RelativePath is not removed at end
> ---
>
> Key: YARN-9227
> URL: https://issues.apache.org/jira/browse/YARN-9227
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: distributed-shell
>Affects Versions: 3.1.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Minor
> Attachments: 0001-YARN-9227.patch, 0002-YARN-9227.patch, 
> 0003-YARN-9227.patch
>
>
> The DistributedShell job does not remove the relative path, which contains the 
> jars and localized files.
> {code}
> [ambari-qa@ash hadoop-yarn]$ hadoop fs -ls 
> /user/ambari-qa/DistributedShell/application_1542665708563_0017
> Found 2 items
> -rw-r--r--   3 ambari-qa hdfs  46636 2019-01-23 13:37 
> /user/ambari-qa/DistributedShell/application_1542665708563_0017/AppMaster.jar
> -rwx--x---   3 ambari-qa hdfs  4 2019-01-23 13:37 
> /user/ambari-qa/DistributedShell/application_1542665708563_0017/shellCommands
> {code}






[jira] [Commented] (YARN-9400) Remove unnecessary if at EntityGroupFSTimelineStore#parseApplicationId

2019-03-22 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799524#comment-16799524
 ] 

Prabhu Joseph commented on YARN-9400:
-

[~giovanni.fumarola] Can you review this jira?  It removes an unnecessary if 
statement.
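
For context, with the prefix check dropped, the method would reduce to 
something like the following sketch (assuming ApplicationId.fromString throws 
IllegalArgumentException for any malformed string, including one missing the 
"application" prefix):

{code}
// Converts the String to an ApplicationId, or null if conversion failed.
private static ApplicationId parseApplicationId(String appIdStr) {
  try {
    return ApplicationId.fromString(appIdStr);
  } catch (IllegalArgumentException e) {
    // Covers all malformed IDs, including those missing the prefix.
    return null;
  }
}
{code}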

> Remove unnecessary if at EntityGroupFSTimelineStore#parseApplicationId
> --
>
> Key: YARN-9400
> URL: https://issues.apache.org/jira/browse/YARN-9400
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Minor
> Attachments: YARN-9400-001.patch
>
>
> The if clause that validates whether appIdStr starts with "application" is not 
> required in EntityGroupFSTimelineStore#parseApplicationId:
> {code}
>  // converts the String to an ApplicationId or null if conversion failed
>   private static ApplicationId parseApplicationId(String appIdStr) {
> ApplicationId appId = null;
> if (appIdStr.startsWith(ApplicationId.appIdStrPrefix)) {
>   try {
> appId = ApplicationId.fromString(appIdStr);
>   } catch (IllegalArgumentException e) {
> appId = null;
>   }
> }
> return appId;
>   }
> {code}






[jira] [Commented] (YARN-9404) TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails intermittent

2019-03-22 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799523#comment-16799523
 ] 

Prabhu Joseph commented on YARN-9404:
-

Thanks [~giovanni.fumarola]!

> TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails 
> intermittent
> 
>
> Key: YARN-9404
> URL: https://issues.apache.org/jira/browse/YARN-9404
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: resourcemanager
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9404-001.patch
>
>
> TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails 
> intermittently. 
> {code}
> [ERROR] 
> testApplicationLifetimeMonitor[0](org.apache.hadoop.yarn.server.resourcemanager.rmapp.TestApplicationLifetimeMonitor)
>  Time elapsed: 34.75 s <<< FAILURE! java.lang.AssertionError: Application 
> killed before lifetime value at org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor(TestApplicationLifetimeMonitor.java:209)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.lang.Thread.run(Thread.java:748)
> {code}
> As per the test case logs, submitTime is 1553240813597 and finishTime is 
> 1553240844372. The test case computes (finishTime - submitTime) / 1000 = 
> 30775 / 1000 = 30 and loses the 775 ms remainder to integer division.
> {code}
> 2019-03-22 07:47:24,357 INFO  [Ping Checker] util.AbstractLivelinessMonitor 
> (AbstractLivelinessMonitor.java:run(149)) - 
> Expired:application_1553240811329_0004_LIFETIME Timed out after 0 secs
> 2019-03-22 07:47:24,384 INFO  [AsyncDispatcher event handler] 
> resourcemanager.RMAppManager$ApplicationSummary 
> (RMAppManager.java:logAppSummary(219)) - 
> appId=application_1553240811329_0004,name=,user=jenkins,queue=default,state=KILLED,trackingUrl=http://869e1f448cdd:8088/cluster/app/application_1553240811329_0004,appMasterHost=N/A,submitTime=1553240813597,startTime=1553240813604,launchTime=0,finishTime=1553240844372,finalStatus=KILLED,memorySeconds=0,vcoreSeconds=0,preemptedMemorySeconds=0,preemptedVcoreSeconds=0,preemptedAMContainers=0,preemptedNonAMContainers=0,preemptedResources=<memory:0\, vCores:0>,applicationType=YARN,resourceSeconds=0 MB-seconds\, 0 
> vcore-seconds,preemptedResourceSeconds=0 MB-seconds\, 0 vcore-seconds
> {code}
> The test case succeeds only when the elapsed time, truncated to whole seconds, 
> is strictly greater than maxLifetime (30L).
> {code}
>  long totalTimeRun =
> (app4.getFinishTime() - app4.getSubmitTime()) / 1000;
>  Assert.assertTrue("Application killed before lifetime value",
> totalTimeRun > maxLifetime);
> {code}
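
One way to avoid the truncation (a sketch of the idea, not necessarily the 
committed fix; it reuses the app4 and maxLifetime names from the snippet above) 
is to compare in milliseconds:

{code}
long totalTimeRunMs = app4.getFinishTime() - app4.getSubmitTime();
Assert.assertTrue("Application killed before lifetime value",
    totalTimeRunMs > maxLifetime * 1000L);
{code}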






[jira] [Commented] (YARN-9272) Backport YARN-7738 for refreshing max allocation for multiple resource types

2019-03-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799513#comment-16799513
 ] 

Hadoop QA commented on YARN-9272:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
54s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-8200.branch3 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
15s{color} | {color:green} YARN-8200.branch3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
7s{color} | {color:green} YARN-8200.branch3 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} YARN-8200.branch3 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} YARN-8200.branch3 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} YARN-8200.branch3 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} YARN-8200.branch3 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
12s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 10s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 19 new + 445 unchanged - 0 fixed = 464 total (was 445) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 33s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:e402791 |
| JIRA Issue | YARN-9272 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12963468/YARN-9272-YARN-8200.branch3.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 479b9016ab1d 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (YARN-9292) Implement logic to keep docker image consistent in application that uses :latest tag

2019-03-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799489#comment-16799489
 ] 

Hadoop QA commented on YARN-9292:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  8m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 29 unchanged - 3 fixed = 29 total (was 32) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
18s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m  
6s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9292 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12963465/YARN-9292.006.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 21d9e13de863 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 43e421a |
| maven | 

[jira] [Commented] (YARN-9272) Backport YARN-7738 for refreshing max allocation for multiple resource types

2019-03-22 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799473#comment-16799473
 ] 

Jonathan Hung commented on YARN-9272:
-

The issue with TestReservationSystemWithRMHA was that it sets a scheduler conf 
and passes it to the RM; previously that conf would get passed to the scheduler 
on reinitialize, but {{loadNewConfiguration}} bypasses this.

This was fixed as part of YARN-6124, so 002 adds the RMHATestBase changes 
from YARN-6124.

> Backport YARN-7738 for refreshing max allocation for multiple resource types
> 
>
> Key: YARN-9272
> URL: https://issues.apache.org/jira/browse/YARN-9272
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-9272-YARN-8200.branch3.001.patch, 
> YARN-9272-YARN-8200.branch3.002.patch
>
>
> Need to port to YARN-8200.branch3 (for branch-3.0) and YARN-8200 (for 
> branch-2)






[jira] [Updated] (YARN-9272) Backport YARN-7738 for refreshing max allocation for multiple resource types

2019-03-22 Thread Jonathan Hung (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-9272:

Attachment: YARN-9272-YARN-8200.branch3.002.patch

> Backport YARN-7738 for refreshing max allocation for multiple resource types
> 
>
> Key: YARN-9272
> URL: https://issues.apache.org/jira/browse/YARN-9272
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-9272-YARN-8200.branch3.001.patch, 
> YARN-9272-YARN-8200.branch3.002.patch
>
>
> Need to port to YARN-8200.branch3 (for branch-3.0) and YARN-8200 (for 
> branch-2)






[jira] [Commented] (YARN-9292) Implement logic to keep docker image consistent in application that uses :latest tag

2019-03-22 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799450#comment-16799450
 ] 

Eric Yang commented on YARN-9292:
-

Patch 006 fixed a bug where the docker image name was not properly URL encoded.

> Implement logic to keep docker image consistent in application that uses 
> :latest tag
> 
>
> Key: YARN-9292
> URL: https://issues.apache.org/jira/browse/YARN-9292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9292.001.patch, YARN-9292.002.patch, 
> YARN-9292.003.patch, YARN-9292.004.patch, YARN-9292.005.patch, 
> YARN-9292.006.patch
>
>
> A Docker image with the latest tag can run in a YARN cluster without any 
> validation in the node managers. If an image with the latest tag changes while 
> containers are launching, it might produce inconsistent results between nodes. 
> This surfaced toward the end of development for YARN-9184, which keeps the 
> docker image consistent within a job. One idea to keep the :latest tag 
> consistent for a job is to use the docker image command to figure out the 
> image id and propagate that image id to the rest of the container requests. 
> There are some challenges to overcome:
>  # The latest tag does not exist on the node where the first container 
> starts. The first container will need to download the latest image and find 
> the image ID. This can introduce lag before other containers start.
>  # If the image id is used to start other containers, container-executor may 
> have trouble checking whether the image comes from a trusted source. Both the 
> image name and the ID must be supplied through the .cmd file to 
> container-executor. However, an attacker can supply an incorrect image id and 
> defeat the container-executor security checks.
> If we can overcome those challenges, it may be possible to keep the docker 
> image consistent within one application.






[jira] [Updated] (YARN-9292) Implement logic to keep docker image consistent in application that uses :latest tag

2019-03-22 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-9292:

Attachment: YARN-9292.006.patch

> Implement logic to keep docker image consistent in application that uses 
> :latest tag
> 
>
> Key: YARN-9292
> URL: https://issues.apache.org/jira/browse/YARN-9292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9292.001.patch, YARN-9292.002.patch, 
> YARN-9292.003.patch, YARN-9292.004.patch, YARN-9292.005.patch, 
> YARN-9292.006.patch
>
>
> A Docker image with the latest tag can run in a YARN cluster without any 
> validation in the node managers. If an image with the latest tag changes while 
> containers are launching, it might produce inconsistent results between nodes. 
> This surfaced toward the end of development for YARN-9184, which keeps the 
> docker image consistent within a job. One idea to keep the :latest tag 
> consistent for a job is to use the docker image command to figure out the 
> image id and propagate that image id to the rest of the container requests. 
> There are some challenges to overcome:
>  # The latest tag does not exist on the node where the first container 
> starts. The first container will need to download the latest image and find 
> the image ID. This can introduce lag before other containers start.
>  # If the image id is used to start other containers, container-executor may 
> have trouble checking whether the image comes from a trusted source. Both the 
> image name and the ID must be supplied through the .cmd file to 
> container-executor. However, an attacker can supply an incorrect image id and 
> defeat the container-executor security checks.
> If we can overcome those challenges, it may be possible to keep the docker 
> image consistent within one application.






[jira] [Comment Edited] (YARN-9292) Implement logic to keep docker image consistent in application that uses :latest tag

2019-03-22 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799442#comment-16799442
 ] 

Eric Yang edited comment on YARN-9292 at 3/22/19 11:43 PM:
---

[~csingh] Either the second command's format option is missing single quotes 
around {{.RepoDigests}}, or the image was built locally and doesn't have a 
digest id.


was (Author: eyang):
[~csingh] the second command format option is missing single quotes around 
{{.RepoDigests}}.

> Implement logic to keep docker image consistent in application that uses 
> :latest tag
> 
>
> Key: YARN-9292
> URL: https://issues.apache.org/jira/browse/YARN-9292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9292.001.patch, YARN-9292.002.patch, 
> YARN-9292.003.patch, YARN-9292.004.patch, YARN-9292.005.patch
>
>
> A Docker image with the latest tag can run in a YARN cluster without any 
> validation in the node managers. If an image with the latest tag changes while 
> containers are launching, it might produce inconsistent results between nodes. 
> This surfaced toward the end of development for YARN-9184, which keeps the 
> docker image consistent within a job. One idea to keep the :latest tag 
> consistent for a job is to use the docker image command to figure out the 
> image id and propagate that image id to the rest of the container requests. 
> There are some challenges to overcome:
>  # The latest tag does not exist on the node where the first container 
> starts. The first container will need to download the latest image and find 
> the image ID. This can introduce lag before other containers start.
>  # If the image id is used to start other containers, container-executor may 
> have trouble checking whether the image comes from a trusted source. Both the 
> image name and the ID must be supplied through the .cmd file to 
> container-executor. However, an attacker can supply an incorrect image id and 
> defeat the container-executor security checks.
> If we can overcome those challenges, it may be possible to keep the docker 
> image consistent within one application.






[jira] [Commented] (YARN-9292) Implement logic to keep docker image consistent in application that uses :latest tag

2019-03-22 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799442#comment-16799442
 ] 

Eric Yang commented on YARN-9292:
-

[~csingh] The second command's format option is missing single quotes around 
{{.RepoDigests}}.

> Implement logic to keep docker image consistent in application that uses 
> :latest tag
> 
>
> Key: YARN-9292
> URL: https://issues.apache.org/jira/browse/YARN-9292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9292.001.patch, YARN-9292.002.patch, 
> YARN-9292.003.patch, YARN-9292.004.patch, YARN-9292.005.patch
>
>
> A Docker image with the latest tag can run in a YARN cluster without any 
> validation in the node managers. If an image with the latest tag changes while 
> containers are launching, it might produce inconsistent results between nodes. 
> This surfaced toward the end of development for YARN-9184, which keeps the 
> docker image consistent within a job. One idea to keep the :latest tag 
> consistent for a job is to use the docker image command to figure out the 
> image id and propagate that image id to the rest of the container requests. 
> There are some challenges to overcome:
>  # The latest tag does not exist on the node where the first container 
> starts. The first container will need to download the latest image and find 
> the image ID. This can introduce lag before other containers start.
>  # If the image id is used to start other containers, container-executor may 
> have trouble checking whether the image comes from a trusted source. Both the 
> image name and the ID must be supplied through the .cmd file to 
> container-executor. However, an attacker can supply an incorrect image id and 
> defeat the container-executor security checks.
> If we can overcome those challenges, it may be possible to keep the docker 
> image consistent within one application.






[jira] [Commented] (YARN-9292) Implement logic to keep docker image consistent in application that uses :latest tag

2019-03-22 Thread Chandni Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799438#comment-16799438
 ] 

Chandni Singh commented on YARN-9292:
-

[~eyang] I have a hadoop-build-1000:latest image locally.
{code} docker images hadoop-build-1000:latest --format='{{json .}}' {code}
gives the following info:
{code} 
{"Containers":"N/A","CreatedAt":"2018-12-18 23:08:27 -0800 
PST","CreatedSince":"3 months 
ago","Digest":"\u003cnone\u003e","ID":"c9e7cc96aa61","Repository":"hadoop-build-1000","SharedSize":"N/A","Size":"2.01GB","Tag":"latest","UniqueSize":"N/A","VirtualSize":"2.013GB"}
{code}

However,
{code} docker image inspect hadoop-build-1000:latest --format={{.RepoDigests}}  
{code}
 doesn't return anything. 
The output of this command is 
{code}
[]
{code}



> Implement logic to keep docker image consistent in application that uses 
> :latest tag
> 
>
> Key: YARN-9292
> URL: https://issues.apache.org/jira/browse/YARN-9292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9292.001.patch, YARN-9292.002.patch, 
> YARN-9292.003.patch, YARN-9292.004.patch, YARN-9292.005.patch
>
>
> A Docker image with the latest tag can run in a YARN cluster without any 
> validation in the node managers. If an image with the latest tag changes while 
> containers are launching, it might produce inconsistent results between nodes. 
> This surfaced toward the end of development for YARN-9184, which keeps the 
> docker image consistent within a job. One idea to keep the :latest tag 
> consistent for a job is to use the docker image command to figure out the 
> image id and propagate that image id to the rest of the container requests. 
> There are some challenges to overcome:
>  # The latest tag does not exist on the node where the first container 
> starts. The first container will need to download the latest image and find 
> the image ID. This can introduce lag before other containers start.
>  # If the image id is used to start other containers, container-executor may 
> have trouble checking whether the image comes from a trusted source. Both the 
> image name and the ID must be supplied through the .cmd file to 
> container-executor. However, an attacker can supply an incorrect image id and 
> defeat the container-executor security checks.
> If we can overcome those challenges, it may be possible to keep the docker 
> image consistent within one application.






[jira] [Updated] (YARN-7129) Application Catalog for YARN applications

2019-03-22 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7129:

Attachment: YARN-7129.034.patch

> Application Catalog for YARN applications
> -
>
> Key: YARN-7129
> URL: https://issues.apache.org/jira/browse/YARN-7129
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: applications
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN Appstore.pdf, YARN-7129.001.patch, 
> YARN-7129.002.patch, YARN-7129.003.patch, YARN-7129.004.patch, 
> YARN-7129.005.patch, YARN-7129.006.patch, YARN-7129.007.patch, 
> YARN-7129.008.patch, YARN-7129.009.patch, YARN-7129.010.patch, 
> YARN-7129.011.patch, YARN-7129.012.patch, YARN-7129.013.patch, 
> YARN-7129.014.patch, YARN-7129.015.patch, YARN-7129.016.patch, 
> YARN-7129.017.patch, YARN-7129.018.patch, YARN-7129.019.patch, 
> YARN-7129.020.patch, YARN-7129.021.patch, YARN-7129.022.patch, 
> YARN-7129.023.patch, YARN-7129.024.patch, YARN-7129.025.patch, 
> YARN-7129.026.patch, YARN-7129.027.patch, YARN-7129.028.patch, 
> YARN-7129.029.patch, YARN-7129.030.patch, YARN-7129.031.patch, 
> YARN-7129.032.patch, YARN-7129.033.patch, YARN-7129.034.patch
>
>
> YARN native services provides a web services API to improve the usability of 
> application deployment on Hadoop using a collection of Docker images.  It 
> would be nice to have an application catalog system which provides an 
> editorial and search interface for YARN applications.  This would improve the 
> usability of YARN for managing the life cycle of applications.  






[jira] [Commented] (YARN-7129) Application Catalog for YARN applications

2019-03-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799427#comment-16799427
 ] 

Hadoop QA commented on YARN-7129:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 10s{color} | {color:orange} root: The patch generated 3 new + 4 unchanged - 
0 fixed = 7 total (was 4) {color} |
| {color:green}+1{color} | {color:green} hadolint {color} | {color:green}  0m  
0s{color} | {color:green} There were no new hadolint issues. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 11m 
24s{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:orange}-0{color} | {color:orange} shelldocs {color} | {color:orange}  
0m 20s{color} | {color:orange} The patch generated 422 new + 104 unchanged - 0 
fixed = 526 total (was 104) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
15s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-docker
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
49s{color} | {color:green} the patch passed {color} |
|| || || || 

[jira] [Commented] (YARN-8551) Build Common module for MaWo application

2019-03-22 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799332#comment-16799332
 ] 

Eric Yang commented on YARN-8551:
-

[~yeshavora]
 # In pom.xml, since it references the parent pom, the version number is 
ideally the same as the parent's.  We can remove the version tag from 
hadoop-yarn-applications-mawo/pom.xml.
 # The YARN project has moved to slf4j instead of log4j 1.x to fix some 
deadlock issues on highly stressed systems.  Can we remove log4j and 
commons-logging?  (See the sketch after this list.)
 # apache-rat-plugin is not indented correctly.  We might be able to remove the 
plugin from this pom.xml, because the false positive reports come from Yetus 
rather than the code.
 # bin.xml contains a commented-out section for hello.  This can be removed.
 # There are some javadoc errors in AbstractTask.java.
 # In hadoop-yarn-applications, hadoop-yarn-applications-mawo is not listed as 
a submodule.  This causes apache-rat-plugin to treat 
hadoop-yarn-applications-mawo as a plain directory and incorrectly include 
hadoop-yarn-applications-mawo/target in the license check.
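
For item 2, a minimal sketch of the slf4j idiom (the class name is 
hypothetical; the slf4j API itself is standard):

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MaWoTaskRunner {  // hypothetical class, for illustration only
  private static final Logger LOG =
      LoggerFactory.getLogger(MaWoTaskRunner.class);

  public void run(String taskId) {
    // Parameterized logging defers string construction until the level check.
    LOG.info("Starting task {}", taskId);
  }
}
{code}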

> Build Common module for MaWo application
> 
>
> Key: YARN-8551
> URL: https://issues.apache.org/jira/browse/YARN-8551
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yesha Vora
>Assignee: Yesha Vora
>Priority: Major
> Attachments: YARN-8551.001.patch, YARN-8551.0010.patch, 
> YARN-8551.0011.patch, YARN-8551.0012.patch, YARN-8551.0013.patch, 
> YARN-8551.0014.patch, YARN-8551.002.patch, YARN-8551.003.patch, 
> YARN-8551.004.patch, YARN-8551.005.patch, YARN-8551.006.patch, 
> YARN-8551.007.patch, YARN-8551.008.patch, YARN-8551.009.patch
>
>
> Build a Common module for the MaWo application.
>  This module should include the definition of a Task. A Task should contain:
>  * TaskID
>  * Task Command
>  * Task Environment
>  * Task Timeout
>  * Task Type
>  ** Simple Task
>  *** It's a single task
>  ** Composite Task
>  *** It's a composition of multiple simple tasks
>  ** Teardown Task
>  *** It's the last task to be executed after a job finishes
>  ** Null Task
>  *** It's a null task






[jira] [Commented] (YARN-8551) Build Common module for MaWo application

2019-03-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799325#comment-16799325
 ] 

Hadoop QA commented on YARN-8551:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-yarn-applications-mawo in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
31s{color} | {color:red} The patch generated 5 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-8551 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12963190/YARN-8551.0014.patch |
| Optional Tests |  dupname  asflicense  findbugs  xml  compile  javac  javadoc 
 mvninstall  mvnsite  unit  shadedclient  checkstyle  |
| uname | Linux 4ed05e3dc543 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 509b20b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23789/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/23789/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 340 (vs. ulimit of 

[jira] [Commented] (YARN-7129) Application Catalog for YARN applications

2019-03-22 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799282#comment-16799282
 ] 

Eric Yang commented on YARN-7129:
-

Patch 033 incorporates feedback from [~ste...@apache.org] and [~jeagles].

Time benchmarks for building the application catalog on a 2015 i7 MacBook Pro 
with Red Hat 7 as the guest VM.
Benchmark command, run in the hadoop-yarn-applications-catalog folder:

{code}
mvn clean package -Pdocker
{code}

Fresh checkout with no docker cache:
{code}
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] YARN Application Catalog ... SUCCESS [  1.090 s]
[INFO] YARN Application Catalog Webapp  SUCCESS [ 23.444 s]
[INFO] YARN Application Catalog Docker Image .. SUCCESS [02:19 min]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 02:44 min
[INFO] Finished at: 2019-03-22T15:16:19-04:00
[INFO] Final Memory: 108M/906M
[INFO] 
{code}

After maven and docker cache are built once:
{code}
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] YARN Application Catalog ... SUCCESS [  1.268 s]
[INFO] YARN Application Catalog Webapp  SUCCESS [ 24.019 s]
[INFO] YARN Application Catalog Docker Image .. SUCCESS [ 32.326 s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 58.348 s
[INFO] Finished at: 2019-03-22T14:31:19-04:00
[INFO] Final Memory: 106M/981M
[INFO] 
{code}

Skipping docker build (mvn clean package):
{code}
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] YARN Application Catalog ... SUCCESS [  1.151 s]
[INFO] YARN Application Catalog Webapp  SUCCESS [ 25.091 s]
[INFO] YARN Application Catalog Docker Image .. SUCCESS [  0.144 s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 27.074 s
[INFO] Finished at: 2019-03-22T14:35:18-04:00
[INFO] Final Memory: 87M/917M
[INFO] 
{code}

Based on these results, there is good evidence that if a developer can be patient 
with the fresh build once, having the docker build inline does not introduce much 
overhead for developers who need to iterate rapidly on the build process.

When a profile is used to generate the artifact, some parts of the code might not 
be exercised daily, hiding bugs until release time.  I hope the Hadoop community 
will consider making the docker build process inline some day.
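
For reference, a minimal sketch of how such an optional profile is typically wired 
(the plugin choice and image tag are illustrative assumptions, not the actual 
catalog pom):

{code}
<profiles>
  <!-- Activated only by "mvn package -Pdocker"; a plain build skips it,
       which is how profile-gated code can go unexercised day to day. -->
  <profile>
    <id>docker</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>exec-maven-plugin</artifactId>
          <version>1.6.0</version>
          <executions>
            <execution>
              <id>build-docker-image</id>
              <phase>package</phase>
              <goals>
                <goal>exec</goal>
              </goals>
              <configuration>
                <executable>docker</executable>
                <arguments>
                  <argument>build</argument>
                  <argument>-t</argument>
                  <argument>hadoop/appcatalog:latest</argument>
                  <argument>.</argument>
                </arguments>
              </configuration>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
{code}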

> Application Catalog for YARN applications
> -
>
> Key: YARN-7129
> URL: https://issues.apache.org/jira/browse/YARN-7129
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: applications
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN Appstore.pdf, YARN-7129.001.patch, 
> YARN-7129.002.patch, YARN-7129.003.patch, YARN-7129.004.patch, 
> YARN-7129.005.patch, YARN-7129.006.patch, YARN-7129.007.patch, 
> YARN-7129.008.patch, YARN-7129.009.patch, YARN-7129.010.patch, 
> YARN-7129.011.patch, YARN-7129.012.patch, YARN-7129.013.patch, 
> YARN-7129.014.patch, YARN-7129.015.patch, YARN-7129.016.patch, 
> YARN-7129.017.patch, YARN-7129.018.patch, YARN-7129.019.patch, 
> YARN-7129.020.patch, YARN-7129.021.patch, YARN-7129.022.patch, 
> YARN-7129.023.patch, YARN-7129.024.patch, YARN-7129.025.patch, 
> YARN-7129.026.patch, YARN-7129.027.patch, YARN-7129.028.patch, 
> YARN-7129.029.patch, YARN-7129.030.patch, YARN-7129.031.patch, 
> YARN-7129.032.patch, YARN-7129.033.patch
>
>
> YARN native services provides a web services API to improve the usability of 
> application deployment on Hadoop using a collection of docker images.  It would 
> be nice to have an application catalog system that provides an editorial and 
> search interface for YARN applications.  This improves the usability of YARN for 
> managing the life cycle of applications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org

[jira] [Commented] (YARN-9404) TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails intermittent

2019-03-22 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799269#comment-16799269
 ] 

Hudson commented on YARN-9404:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16265 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16265/])
YARN-9404. TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor 
(gifuma: rev 509b20b292465ea0c8a2a0908995421e29e71da4)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/TestApplicationLifetimeMonitor.java


> TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails 
> intermittent
> 
>
> Key: YARN-9404
> URL: https://issues.apache.org/jira/browse/YARN-9404
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: resourcemanager
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9404-001.patch
>
>
> TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails 
> intermittent. 
> {code}
> [ERROR] 
> testApplicationLifetimeMonitor[0](org.apache.hadoop.yarn.server.resourcemanager.rmapp.TestApplicationLifetimeMonitor)
>  Time elapsed: 34.75 s <<< FAILURE! java.lang.AssertionError: Application 
> killed before lifetime value at org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor(TestApplicationLifetimeMonitor.java:209)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.lang.Thread.run(Thread.java:748)
> {code}
> As per testcase logs, submittime is 1553240813597 and finishtime is 
> 1553240844372. The testcase does (finishtime - submittime) / 1000 = 30775 / 
> 1000 = 30 and loses the decimal, 775 ms.
> {code}
> 2019-03-22 07:47:24,357 INFO  [Ping Checker] util.AbstractLivelinessMonitor 
> (AbstractLivelinessMonitor.java:run(149)) - 
> Expired:application_1553240811329_0004_LIFETIME Timed out after 0 secs
> 2019-03-22 07:47:24,384 INFO  [AsyncDispatcher event handler] 
> resourcemanager.RMAppManager$ApplicationSummary 
> (RMAppManager.java:logAppSummary(219)) - 
> appId=application_1553240811329_0004,name=,user=jenkins,queue=default,state=KILLED,trackingUrl=http://869e1f448cdd:8088/cluster/app/application_1553240811329_0004,appMasterHost=N/A,submitTime=1553240813597,startTime=1553240813604,launchTime=0,finishTime=1553240844372,finalStatus=KILLED,memorySeconds=0,vcoreSeconds=0,preemptedMemorySeconds=0,preemptedVcoreSeconds=0,preemptedAMContainers=0,preemptedNonAMContainers=0,preemptedResources=<memory:0\, vCores:0>,applicationType=YARN,resourceSeconds=0 MB-seconds\, 0 
> vcore-seconds,preemptedResourceSeconds=0 MB-seconds\, 0 vcore-seconds
> {code}
> Testcase succeeds only when the seconds taken is above 30L.
> {code}
>  long totalTimeRun =
> (app4.getFinishTime() - app4.getSubmitTime()) / 1000;
>  Assert.assertTrue("Application killed before lifetime value",
> totalTimeRun > maxLifetime);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9404) TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails intermittent

2019-03-22 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-9404:
---
Fix Version/s: 3.3.0

> TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails 
> intermittent
> 
>
> Key: YARN-9404
> URL: https://issues.apache.org/jira/browse/YARN-9404
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: resourcemanager
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9404-001.patch
>
>
> TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails 
> intermittent. 
> {code}
> [ERROR] 
> testApplicationLifetimeMonitor[0](org.apache.hadoop.yarn.server.resourcemanager.rmapp.TestApplicationLifetimeMonitor)
>  Time elapsed: 34.75 s <<< FAILURE! java.lang.AssertionError: Application 
> killed before lifetime value at org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor(TestApplicationLifetimeMonitor.java:209)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.lang.Thread.run(Thread.java:748)
> {code}
> As per testcase logs, submittime is 1553240813597 and finishtime is 
> 1553240844372. The testcase does (finishtime - submittime) / 1000 = 30775 / 
> 1000 = 30 and loses the decimal, 775 ms.
> {code}
> 2019-03-22 07:47:24,357 INFO  [Ping Checker] util.AbstractLivelinessMonitor 
> (AbstractLivelinessMonitor.java:run(149)) - 
> Expired:application_1553240811329_0004_LIFETIME Timed out after 0 secs
> 2019-03-22 07:47:24,384 INFO  [AsyncDispatcher event handler] 
> resourcemanager.RMAppManager$ApplicationSummary 
> (RMAppManager.java:logAppSummary(219)) - 
> appId=application_1553240811329_0004,name=,user=jenkins,queue=default,state=KILLED,trackingUrl=http://869e1f448cdd:8088/cluster/app/application_1553240811329_0004,appMasterHost=N/A,submitTime=1553240813597,startTime=1553240813604,launchTime=0,finishTime=1553240844372,finalStatus=KILLED,memorySeconds=0,vcoreSeconds=0,preemptedMemorySeconds=0,preemptedVcoreSeconds=0,preemptedAMContainers=0,preemptedNonAMContainers=0,preemptedResources=<memory:0\, vCores:0>,applicationType=YARN,resourceSeconds=0 MB-seconds\, 0 
> vcore-seconds,preemptedResourceSeconds=0 MB-seconds\, 0 vcore-seconds
> {code}
> Testcase succeeds only when the seconds taken is above 30L.
> {code}
>  long totalTimeRun =
> (app4.getFinishTime() - app4.getSubmitTime()) / 1000;
>  Assert.assertTrue("Application killed before lifetime value",
> totalTimeRun > maxLifetime);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9404) TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails intermittent

2019-03-22 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799261#comment-16799261
 ] 

Giovanni Matteo Fumarola commented on YARN-9404:


Committed to trunk.
Thanks [~Prabhu Joseph]

> TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails 
> intermittent
> 
>
> Key: YARN-9404
> URL: https://issues.apache.org/jira/browse/YARN-9404
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: resourcemanager
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9404-001.patch
>
>
> TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails 
> intermittent. 
> {code}
> [ERROR] 
> testApplicationLifetimeMonitor[0](org.apache.hadoop.yarn.server.resourcemanager.rmapp.TestApplicationLifetimeMonitor)
>  Time elapsed: 34.75 s <<< FAILURE! java.lang.AssertionError: Application 
> killed before lifetime value at org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor(TestApplicationLifetimeMonitor.java:209)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.lang.Thread.run(Thread.java:748)
> {code}
> As per testcase logs, submittime is 1553240813597 and finishtime is 
> 1553240844372. The testcase does (finishtime - submittime) / 1000 = 30775 / 
> 1000 = 30 and loses the decimal, 775 ms.
> {code}
> 2019-03-22 07:47:24,357 INFO  [Ping Checker] util.AbstractLivelinessMonitor 
> (AbstractLivelinessMonitor.java:run(149)) - 
> Expired:application_1553240811329_0004_LIFETIME Timed out after 0 secs
> 2019-03-22 07:47:24,384 INFO  [AsyncDispatcher event handler] 
> resourcemanager.RMAppManager$ApplicationSummary 
> (RMAppManager.java:logAppSummary(219)) - 
> appId=application_1553240811329_0004,name=,user=jenkins,queue=default,state=KILLED,trackingUrl=http://869e1f448cdd:8088/cluster/app/application_1553240811329_0004,appMasterHost=N/A,submitTime=1553240813597,startTime=1553240813604,launchTime=0,finishTime=1553240844372,finalStatus=KILLED,memorySeconds=0,vcoreSeconds=0,preemptedMemorySeconds=0,preemptedVcoreSeconds=0,preemptedAMContainers=0,preemptedNonAMContainers=0,preemptedResources=<memory:0\, vCores:0>,applicationType=YARN,resourceSeconds=0 MB-seconds\, 0 
> vcore-seconds,preemptedResourceSeconds=0 MB-seconds\, 0 vcore-seconds
> {code}
> Testcase succeeds only when the seconds taken is above 30L.
> {code}
>  long totalTimeRun =
> (app4.getFinishTime() - app4.getSubmitTime()) / 1000;
>  Assert.assertTrue("Application killed before lifetime value",
> totalTimeRun > maxLifetime);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7129) Application Catalog for YARN applications

2019-03-22 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7129:

Attachment: YARN-7129.033.patch

> Application Catalog for YARN applications
> -
>
> Key: YARN-7129
> URL: https://issues.apache.org/jira/browse/YARN-7129
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: applications
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN Appstore.pdf, YARN-7129.001.patch, 
> YARN-7129.002.patch, YARN-7129.003.patch, YARN-7129.004.patch, 
> YARN-7129.005.patch, YARN-7129.006.patch, YARN-7129.007.patch, 
> YARN-7129.008.patch, YARN-7129.009.patch, YARN-7129.010.patch, 
> YARN-7129.011.patch, YARN-7129.012.patch, YARN-7129.013.patch, 
> YARN-7129.014.patch, YARN-7129.015.patch, YARN-7129.016.patch, 
> YARN-7129.017.patch, YARN-7129.018.patch, YARN-7129.019.patch, 
> YARN-7129.020.patch, YARN-7129.021.patch, YARN-7129.022.patch, 
> YARN-7129.023.patch, YARN-7129.024.patch, YARN-7129.025.patch, 
> YARN-7129.026.patch, YARN-7129.027.patch, YARN-7129.028.patch, 
> YARN-7129.029.patch, YARN-7129.030.patch, YARN-7129.031.patch, 
> YARN-7129.032.patch, YARN-7129.033.patch
>
>
> YARN native services provides a web services API to improve the usability of 
> application deployment on Hadoop using a collection of docker images.  It would 
> be nice to have an application catalog system that provides an editorial and 
> search interface for YARN applications.  This improves the usability of YARN for 
> managing the life cycle of applications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9404) TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails intermittent

2019-03-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799225#comment-16799225
 ] 

Hadoop QA commented on YARN-9404:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 81m 
53s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9404 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12963420/YARN-9404-001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c08dc9ec36d7 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ae2eb2d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23787/testReport/ |
| Max. process+thread count | 956 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/23787/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.




[jira] [Comment Edited] (YARN-9404) TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails intermittent

2019-03-22 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799222#comment-16799222
 ] 

Giovanni Matteo Fumarola edited comment on YARN-9404 at 3/22/19 5:57 PM:
-

LGTM +1. Thanks [~Prabhu Joseph]
 Waiting on Yetus.


was (Author: giovanni.fumarola):
LGTM +1.
Waiting on Yetus.

> TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails 
> intermittent
> 
>
> Key: YARN-9404
> URL: https://issues.apache.org/jira/browse/YARN-9404
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: resourcemanager
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-9404-001.patch
>
>
> TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails 
> intermittent. 
> {code}
> [ERROR] 
> testApplicationLifetimeMonitor[0](org.apache.hadoop.yarn.server.resourcemanager.rmapp.TestApplicationLifetimeMonitor)
>  Time elapsed: 34.75 s <<< FAILURE! java.lang.AssertionError: Application 
> killed before lifetime value at org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor(TestApplicationLifetimeMonitor.java:209)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.lang.Thread.run(Thread.java:748)
> {code}
> As per testcase logs, submittime is 1553240813597 and finishtime is 
> 1553240844372. The testcase does (finishtime - submittime) / 1000 = 30775 / 
> 1000 = 30 and loses the decimal, 775 ms.
> {code}
> 2019-03-22 07:47:24,357 INFO  [Ping Checker] util.AbstractLivelinessMonitor 
> (AbstractLivelinessMonitor.java:run(149)) - 
> Expired:application_1553240811329_0004_LIFETIME Timed out after 0 secs
> 2019-03-22 07:47:24,384 INFO  [AsyncDispatcher event handler] 
> resourcemanager.RMAppManager$ApplicationSummary 
> (RMAppManager.java:logAppSummary(219)) - 
> appId=application_1553240811329_0004,name=,user=jenkins,queue=default,state=KILLED,trackingUrl=http://869e1f448cdd:8088/cluster/app/application_1553240811329_0004,appMasterHost=N/A,submitTime=1553240813597,startTime=1553240813604,launchTime=0,finishTime=1553240844372,finalStatus=KILLED,memorySeconds=0,vcoreSeconds=0,preemptedMemorySeconds=0,preemptedVcoreSeconds=0,preemptedAMContainers=0,preemptedNonAMContainers=0,preemptedResources=<memory:0\, vCores:0>,applicationType=YARN,resourceSeconds=0 MB-seconds\, 0 
> vcore-seconds,preemptedResourceSeconds=0 MB-seconds\, 0 vcore-seconds
> {code}
> Testcase succeeds only when the seconds taken is above 30L.
> {code}
>  long totalTimeRun =
> (app4.getFinishTime() - app4.getSubmitTime()) / 1000;
>  Assert.assertTrue("Application killed before lifetime value",
> totalTimeRun > maxLifetime);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9404) TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails intermittent

2019-03-22 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799222#comment-16799222
 ] 

Giovanni Matteo Fumarola commented on YARN-9404:


LGTM +1.
Waiting on Yetus.

> TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails 
> intermittent
> 
>
> Key: YARN-9404
> URL: https://issues.apache.org/jira/browse/YARN-9404
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: resourcemanager
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-9404-001.patch
>
>
> TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails 
> intermittent. 
> {code}
> [ERROR] 
> testApplicationLifetimeMonitor[0](org.apache.hadoop.yarn.server.resourcemanager.rmapp.TestApplicationLifetimeMonitor)
>  Time elapsed: 34.75 s <<< FAILURE! java.lang.AssertionError: Application 
> killed before lifetime value at org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor(TestApplicationLifetimeMonitor.java:209)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.lang.Thread.run(Thread.java:748)
> {code}
> As per testcase logs, submittime is 1553240813597 and finishtime is 
> 1553240844372. The testcase does (finishtime - submittime) / 1000 = 30775 / 
> 1000 = 30 and loses the decimal, 775 ms.
> {code}
> 2019-03-22 07:47:24,357 INFO  [Ping Checker] util.AbstractLivelinessMonitor 
> (AbstractLivelinessMonitor.java:run(149)) - 
> Expired:application_1553240811329_0004_LIFETIME Timed out after 0 secs
> 2019-03-22 07:47:24,384 INFO  [AsyncDispatcher event handler] 
> resourcemanager.RMAppManager$ApplicationSummary 
> (RMAppManager.java:logAppSummary(219)) - 
> appId=application_1553240811329_0004,name=,user=jenkins,queue=default,state=KILLED,trackingUrl=http://869e1f448cdd:8088/cluster/app/application_1553240811329_0004,appMasterHost=N/A,submitTime=1553240813597,startTime=1553240813604,launchTime=0,finishTime=1553240844372,finalStatus=KILLED,memorySeconds=0,vcoreSeconds=0,preemptedMemorySeconds=0,preemptedVcoreSeconds=0,preemptedAMContainers=0,preemptedNonAMContainers=0,preemptedResources=<memory:0\, vCores:0>,applicationType=YARN,resourceSeconds=0 MB-seconds\, 0 
> vcore-seconds,preemptedResourceSeconds=0 MB-seconds\, 0 vcore-seconds
> {code}
> Testcase succeeds only when the seconds taken is above 30L.
> {code}
>  long totalTimeRun =
> (app4.getFinishTime() - app4.getSubmitTime()) / 1000;
>  Assert.assertTrue("Application killed before lifetime value",
> totalTimeRun > maxLifetime);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7129) Application Catalog for YARN applications

2019-03-22 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799148#comment-16799148
 ] 

Eric Yang commented on YARN-7129:
-

[~ste...@apache.org]
* {quote}versions of artifacts in the webapp pom should be taken from the 
hadoop project uber-pom; so maintained in sync. mockito, is that central pom 
already, for example{quote}
* {quote} same for all the maven plugin versions. If they are new plugins, add 
the property to the hadoop-project jar and then reference it.{quote}

Thanks for the input, will clean up accordingly.
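
For reference, a minimal sketch of the version-property pattern being suggested 
(the property name and plugin coordinates are illustrative assumptions, not the 
actual hadoop-project pom entries):

{code}
<!-- In the parent hadoop-project pom: declare each version once as a property. -->
<properties>
  <frontend-maven-plugin.version>1.6</frontend-maven-plugin.version>
</properties>

<!-- In the catalog webapp pom: reference the property instead of hard-coding
     a version, so every module stays in sync with the uber-pom. -->
<plugin>
  <groupId>com.github.eirslett</groupId>
  <artifactId>frontend-maven-plugin</artifactId>
  <version>${frontend-maven-plugin.version}</version>
</plugin>
{code}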

{quote}Is there a way to have some example which doesn't add large amounts of 
binary data? Because its going to make our repo even bigger, increase the time 
it takes to switch across branches slower, etc -stuff I do do regularly. Git 
isn't a place to keep binaries{quote}

The HWX UX team was pushing for a consistent theme across Hadoop-related projects 
a couple of years ago, and the Google Roboto font face was chosen by that team.  
Engineers always push back on this kind of thing.  In this case, I will make this 
part a nodejs download, which will reduce the patch size by at least 400kb.

> Application Catalog for YARN applications
> -
>
> Key: YARN-7129
> URL: https://issues.apache.org/jira/browse/YARN-7129
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: applications
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN Appstore.pdf, YARN-7129.001.patch, 
> YARN-7129.002.patch, YARN-7129.003.patch, YARN-7129.004.patch, 
> YARN-7129.005.patch, YARN-7129.006.patch, YARN-7129.007.patch, 
> YARN-7129.008.patch, YARN-7129.009.patch, YARN-7129.010.patch, 
> YARN-7129.011.patch, YARN-7129.012.patch, YARN-7129.013.patch, 
> YARN-7129.014.patch, YARN-7129.015.patch, YARN-7129.016.patch, 
> YARN-7129.017.patch, YARN-7129.018.patch, YARN-7129.019.patch, 
> YARN-7129.020.patch, YARN-7129.021.patch, YARN-7129.022.patch, 
> YARN-7129.023.patch, YARN-7129.024.patch, YARN-7129.025.patch, 
> YARN-7129.026.patch, YARN-7129.027.patch, YARN-7129.028.patch, 
> YARN-7129.029.patch, YARN-7129.030.patch, YARN-7129.031.patch, 
> YARN-7129.032.patch
>
>
> YARN native services provides a web services API to improve the usability of 
> application deployment on Hadoop using a collection of docker images.  It would 
> be nice to have an application catalog system that provides an editorial and 
> search interface for YARN applications.  This improves the usability of YARN for 
> managing the life cycle of applications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9268) General improvements in FpgaDevice

2019-03-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799136#comment-16799136
 ] 

Hadoop QA commented on YARN-9268:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 0 new + 43 unchanged - 7 fixed = 43 total (was 50) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 54s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.linux.resources.fpga.TestFpgaResourceHandler
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9268 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12963414/YARN-9268-006.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2781185879dc 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ae2eb2d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/23786/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23786/testReport/ |
| Max. process+thread count | 446 (vs. ulimit of 

[jira] [Commented] (YARN-8967) Change FairScheduler to use PlacementRule interface

2019-03-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799129#comment-16799129
 ] 

Hadoop QA commented on YARN-8967:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 17 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 16 unchanged - 1 fixed = 16 total (was 17) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 339 unchanged - 67 fixed = 339 total (was 406) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 82m 
28s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-8967 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12963406/YARN-8967.011.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4d37c784318d 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ae2eb2d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23785/testReport/ |
| Max. process+thread count | 917 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Updated] (YARN-9404) TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails intermittent

2019-03-22 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9404:

Attachment: YARN-9404-001.patch

> TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails 
> intermittent
> 
>
> Key: YARN-9404
> URL: https://issues.apache.org/jira/browse/YARN-9404
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: resourcemanager
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-9404-001.patch
>
>
> TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails 
> intermittent. 
> {code}
> [ERROR] 
> testApplicationLifetimeMonitor[0](org.apache.hadoop.yarn.server.resourcemanager.rmapp.TestApplicationLifetimeMonitor)
>  Time elapsed: 34.75 s <<< FAILURE! java.lang.AssertionError: Application 
> killed before lifetime value at org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor(TestApplicationLifetimeMonitor.java:209)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.lang.Thread.run(Thread.java:748)
> {code}
> As per testcase logs, submittime is 1553240813597 and finishtime is 
> 1553240844372. The testcase does (finishtime - submittime) / 1000 = 30775 / 
> 1000 = 30 and loses the decimal, 775 ms.
> {code}
> 2019-03-22 07:47:24,357 INFO  [Ping Checker] util.AbstractLivelinessMonitor 
> (AbstractLivelinessMonitor.java:run(149)) - 
> Expired:application_1553240811329_0004_LIFETIME Timed out after 0 secs
> 2019-03-22 07:47:24,384 INFO  [AsyncDispatcher event handler] 
> resourcemanager.RMAppManager$ApplicationSummary 
> (RMAppManager.java:logAppSummary(219)) - 
> appId=application_1553240811329_0004,name=,user=jenkins,queue=default,state=KILLED,trackingUrl=http://869e1f448cdd:8088/cluster/app/application_1553240811329_0004,appMasterHost=N/A,submitTime=1553240813597,startTime=1553240813604,launchTime=0,finishTime=1553240844372,finalStatus=KILLED,memorySeconds=0,vcoreSeconds=0,preemptedMemorySeconds=0,preemptedVcoreSeconds=0,preemptedAMContainers=0,preemptedNonAMContainers=0,preemptedResources=<memory:0\, vCores:0>,applicationType=YARN,resourceSeconds=0 MB-seconds\, 0 
> vcore-seconds,preemptedResourceSeconds=0 MB-seconds\, 0 vcore-seconds
> {code}
> Testcase succeeds only when the seconds taken is above 30L.
> {code}
>  long totalTimeRun =
> (app4.getFinishTime() - app4.getSubmitTime()) / 1000;
>  Assert.assertTrue("Application killed before lifetime value",
> totalTimeRun > maxLifetime);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9404) TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails intermittent

2019-03-22 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9404:

Component/s: resourcemanager

> TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails 
> intermittent
> 
>
> Key: YARN-9404
> URL: https://issues.apache.org/jira/browse/YARN-9404
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: resourcemanager
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>
> TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails 
> intermittent. 
> {code}
> [ERROR] 
> testApplicationLifetimeMonitor[0](org.apache.hadoop.yarn.server.resourcemanager.rmapp.TestApplicationLifetimeMonitor)
>  Time elapsed: 34.75 s <<< FAILURE! java.lang.AssertionError: Application 
> killed before lifetime value at org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor(TestApplicationLifetimeMonitor.java:209)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.lang.Thread.run(Thread.java:748)
> {code}
> As per testcase logs, submittime is 1553240813597 and finishtime is 
> 1553240844372. The testcase does (finishtime - submittime) / 1000 = 30775 / 
> 1000 = 30 and loses the decimal, 775 ms.
> {code}
> 2019-03-22 07:47:24,357 INFO  [Ping Checker] util.AbstractLivelinessMonitor 
> (AbstractLivelinessMonitor.java:run(149)) - 
> Expired:application_1553240811329_0004_LIFETIME Timed out after 0 secs
> 2019-03-22 07:47:24,384 INFO  [AsyncDispatcher event handler] 
> resourcemanager.RMAppManager$ApplicationSummary 
> (RMAppManager.java:logAppSummary(219)) - 
> appId=application_1553240811329_0004,name=,user=jenkins,queue=default,state=KILLED,trackingUrl=http://869e1f448cdd:8088/cluster/app/application_1553240811329_0004,appMasterHost=N/A,submitTime=1553240813597,startTime=1553240813604,launchTime=0,finishTime=1553240844372,finalStatus=KILLED,memorySeconds=0,vcoreSeconds=0,preemptedMemorySeconds=0,preemptedVcoreSeconds=0,preemptedAMContainers=0,preemptedNonAMContainers=0,preemptedResources=<memory:0\, vCores:0>,applicationType=YARN,resourceSeconds=0 MB-seconds\, 0 
> vcore-seconds,preemptedResourceSeconds=0 MB-seconds\, 0 vcore-seconds
> {code}
> Testcase succeeds only when the seconds taken is above 30L.
> {code}
>  long totalTimeRun =
> (app4.getFinishTime() - app4.getSubmitTime()) / 1000;
>  Assert.assertTrue("Application killed before lifetime value",
> totalTimeRun > maxLifetime);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9404) TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails intermittent

2019-03-22 Thread Prabhu Joseph (JIRA)
Prabhu Joseph created YARN-9404:
---

 Summary: 
TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails intermittent
 Key: YARN-9404
 URL: https://issues.apache.org/jira/browse/YARN-9404
 Project: Hadoop YARN
  Issue Type: Test
Affects Versions: 3.2.0
Reporter: Prabhu Joseph
Assignee: Prabhu Joseph


TestApplicationLifetimeMonitor#testApplicationLifetimeMonitor fails 
intermittent. 

{code}
[ERROR] 
testApplicationLifetimeMonitor[0](org.apache.hadoop.yarn.server.resourcemanager.rmapp.TestApplicationLifetimeMonitor)
 Time elapsed: 34.75 s <<< FAILURE! java.lang.AssertionError: Application 
killed before lifetime value at org.junit.Assert.fail(Assert.java:88) at 
org.junit.Assert.assertTrue(Assert.java:41) at 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor(TestApplicationLifetimeMonitor.java:209)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498) at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
 at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
java.lang.Thread.run(Thread.java:748)
{code}

As per the testcase logs, submitTime is 1553240813597 and finishTime is 
1553240844372. The testcase computes (finishTime - submitTime) / 1000 = 30775 / 
1000 = 30, and the integer division drops the remaining 775 ms.

{code}
2019-03-22 07:47:24,357 INFO  [Ping Checker] util.AbstractLivelinessMonitor 
(AbstractLivelinessMonitor.java:run(149)) - 
Expired:application_1553240811329_0004_LIFETIME Timed out after 0 secs

2019-03-22 07:47:24,384 INFO  [AsyncDispatcher event handler] 
resourcemanager.RMAppManager$ApplicationSummary 
(RMAppManager.java:logAppSummary(219)) - 
appId=application_1553240811329_0004,name=,user=jenkins,queue=default,state=KILLED,trackingUrl=http://869e1f448cdd:8088/cluster/app/application_1553240811329_0004,appMasterHost=N/A,submitTime=1553240813597,startTime=1553240813604,launchTime=0,finishTime=1553240844372,finalStatus=KILLED,memorySeconds=0,vcoreSeconds=0,preemptedMemorySeconds=0,preemptedVcoreSeconds=0,preemptedAMContainers=0,preemptedNonAMContainers=0,preemptedResources=<memory:0\, vCores:0>,applicationType=YARN,resourceSeconds=0 MB-seconds\, 0 
vcore-seconds,preemptedResourceSeconds=0 MB-seconds\, 0 vcore-seconds
{code}

The testcase succeeds only when the truncated seconds value exceeds 30 (see the 
sketch after the snippet below).

{code}
 long totalTimeRun =
(app4.getFinishTime() - app4.getSubmitTime()) / 1000;
 Assert.assertTrue("Application killed before lifetime value",
totalTimeRun > maxLifetime);
{code}
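
For illustration, a minimal runnable sketch of the truncation using the timestamps 
from the log above (the millisecond-based comparison at the end is one possible 
fix, not necessarily the one taken in the attached patch):

{code}
public class LifetimeTruncation {
  public static void main(String[] args) {
    long submitTime = 1553240813597L;  // from the testcase log above
    long finishTime = 1553240844372L;  // from the testcase log above
    long maxLifetime = 30L;            // lifetime limit in seconds

    // Integer division truncates 30775 ms to 30 s, silently dropping 775 ms.
    long totalTimeRun = (finishTime - submitTime) / 1000;
    System.out.println(totalTimeRun > maxLifetime);       // false: spurious failure

    // Comparing in milliseconds avoids the truncation entirely.
    long elapsedMs = finishTime - submitTime;
    System.out.println(elapsedMs >= maxLifetime * 1000);  // true: full lifetime ran
  }
}
{code}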



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9353) TestNMWebFilter should be renamed

2019-03-22 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799087#comment-16799087
 ] 

Adam Antal commented on YARN-9353:
--

Hi [~smileLee], if this is your first contribution, you have probably not been 
added to the yarn project yet. You can't be assigned this issue until then. 
Could you help us out, [~tangzhankun] or [~sunilg]?

> TestNMWebFilter should be renamed
> -
>
> Key: YARN-9353
> URL: https://issues.apache.org/jira/browse/YARN-9353
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Priority: Trivial
>  Labels: newbie, newbie++
>
> TestNMWebFilter should be renamed to TestNMWebAppFilter, as there is no class 
> named NMWebFilter. The javadoc of the class is also outdated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-9353) TestNMWebFilter should be renamed

2019-03-22 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799087#comment-16799087
 ] 

Adam Antal edited comment on YARN-9353 at 3/22/19 3:12 PM:
---

Hi [~smileLee], if this is your first contribution, you have probably not been 
added to the yarn project yet. You can't be assigned this issue until then. Could 
you help us out, [~tangzhankun] or [~sunilg]?


was (Author: adam.antal):
Hi [~smileLee], if it is your first contribution, you have probably not yet 
added to the yarn project yet. You can't be assigned this project until that. 
Could you help us out [~tangzhankun] or [~sunilg]?

> TestNMWebFilter should be renamed
> -
>
> Key: YARN-9353
> URL: https://issues.apache.org/jira/browse/YARN-9353
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Priority: Trivial
>  Labels: newbie, newbie++
>
> TestNMWebFilter should be renamed to TestNMWebAppFilter, as there is no class 
> named NMWebFilter. The javadoc of the class is also outdated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9268) General improvements in FpgaDevice

2019-03-22 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799070#comment-16799070
 ] 

Peter Bacsko commented on YARN-9268:


[~devaraj.k] thanks for the comments.

1. "aliasDevName is used in hashCode() but not in equals()" --> good catch, 
it's a mistake

2. "There are some fields not used in hashCode() and equals(), don't we need to 
include here?" --> I believe you refer to ipid and aocxHash. Those are mutable 
fields so they should be skipped.

3. Typo fixed

4. "FpgaDevice" reference fixed

> General improvements in FpgaDevice
> --
>
> Key: YARN-9268
> URL: https://issues.apache.org/jira/browse/YARN-9268
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9268-001.patch, YARN-9268-002.patch, 
> YARN-9268-003.patch, YARN-9268-004.patch, YARN-9268-005.patch, 
> YARN-9268-006.patch
>
>
> Need to fix the following in the class {{FpgaDevice}}:
>  * It implements {{Comparable}}, but returns 0 in every case. There is no 
> natural ordering among FPGA devices; perhaps "acl0" comes before "acl1", but 
> this seems too forced and unnecessary. We think this class should not 
> implement {{Comparable}} at all, at least not like that.
>  * Stores unnecessary fields: devName, busNum, temperature, power usage. For 
> one, these are never needed in the code. Secondly, temperature and power 
> usage change constantly, so it is pointless to store them in this POJO.
>  * {{serialVersionUID}} is 1L - let's generate a number for this.
>  * Use {{int}} instead of {{Integer}} - don't allow nulls. If major/minor 
> uniquely identify the card, then let's demand them in the constructor and 
> not store Integers that can be null.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9268) General improvements in FpgaDevice

2019-03-22 Thread Peter Bacsko (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-9268:
---
Attachment: YARN-9268-006.patch

> General improvements in FpgaDevice
> --
>
> Key: YARN-9268
> URL: https://issues.apache.org/jira/browse/YARN-9268
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9268-001.patch, YARN-9268-002.patch, 
> YARN-9268-003.patch, YARN-9268-004.patch, YARN-9268-005.patch, 
> YARN-9268-006.patch
>
>
> Need to fix the following in the class {{FpgaDevice}}:
>  * It implements {{Comparable}}, but returns 0 in every case. There is no 
> natural ordering among FPGA devices; perhaps "acl0" comes before "acl1", but 
> this seems too forced and unnecessary. We think this class should not 
> implement {{Comparable}} at all, at least not like that.
>  * Stores unnecessary fields: devName, busNum, temperature, power usage. For 
> one, these are never needed in the code. Secondly, temperature and power 
> usage change constantly, so it is pointless to store them in this POJO.
>  * {{serialVersionUID}} is 1L - let's generate a number for this.
>  * Use {{int}} instead of {{Integer}} - don't allow nulls. If major/minor 
> uniquely identify the card, then let's demand them in the constructor and 
> not store Integers that can be null.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9403) GET /apps/{appid}/entities/YARN_APPLICATION accesses application table instead of entity table

2019-03-22 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9403:

Description: 
{noformat}"GET /apps/{appid}/entities/YARN_APPLICATION"{noformat} accesses 
application table instead of entity table. As per the doc, With this API, you 
can query generic entities identified by cluster ID, application ID and 
per-framework entity type. But it also provides all the apps when entityType is 
set to YARN_APPLICATION. It should only access Entity Table through 
{{GenericEntityReader}}.

Wrong Output: With YARN_APPLICATION entityType, all applications listed from 
application tables.

{code}
[hbase@yarn-ats-3 centos]$ curl -s 
"http://yarn-ats-3:8198/ws/v2/timeline/apps/application_1553258815132_0002/entities/YARN_APPLICATION?user.name=hbase=hbase=word%20count;
 | jq .
[
  {
"metrics": [],
"events": [],
"createdtime": 1553258922721,
"idprefix": 0,
"isrelatedto": {},
"relatesto": {},
"info": {
  "UID": "ats!application_1553258815132_0002",
  "FROM_ID": "ats!hbase!word 
count!1553258922721!application_1553258815132_0002"
},
"configs": {},
"type": "YARN_APPLICATION",
"id": "application_1553258815132_0002"
  },
  {
"metrics": [],
"events": [],
"createdtime": 1553258825918,
"idprefix": 0,
"isrelatedto": {},
"relatesto": {},
"info": {
  "UID": "ats!application_1553258815132_0001",
  "FROM_ID": "ats!hbase!word 
count!1553258825918!application_1553258815132_0001"
},
"configs": {},
"type": "YARN_APPLICATION",
"id": "application_1553258815132_0001"
  }
]
{code}


Right output: with the correct entity type (MAPREDUCE_JOB), it accesses the 
entity table for the given applicationId and entityType.

{code}
[hbase@yarn-ats-3 centos]$ curl -s 
"http://yarn-ats-3:8198/ws/v2/timeline/apps/application_1553258815132_0002/entities/MAPREDUCE_JOB?user.name=hbase=hbase=word%20count;
 | jq .
[
  {
"metrics": [],
"events": [],
"createdtime": 1553258926667,
"idprefix": 0,
"isrelatedto": {},
"relatesto": {},
"info": {
  "UID": 
"ats!application_1553258815132_0002!MAPREDUCE_JOB!0!job_1553258815132_0002",
  "FROM_ID": "ats!hbase!word 
count!1553258922721!application_1553258815132_0002!MAPREDUCE_JOB!0!job_1553258815132_0002"
},
"configs": {},
"type": "MAPREDUCE_JOB",
"id": "job_1553258815132_0002"
  }
]
{code}

  was:
"GET /apps/{appid}/entities/YARN_APPLICATION" accesses application table 
instead of entity table. As per the doc, With this API, you can query generic 
entities identified by cluster ID, application ID and per-framework entity 
type. But it also provides all the apps when entityType is set to 
YARN_APPLICATION. It should only access Entity Table through 
{{GenericEntityReader}}.

Wrong Output: With YARN_APPLICATION entityType, all applications listed from 
application tables.

{code}
[hbase@yarn-ats-3 centos]$ curl -s 
"http://yarn-ats-3:8198/ws/v2/timeline/apps/application_1553258815132_0002/entities/YARN_APPLICATION?user.name=hbase=hbase=word%20count;
 | jq .
[
  {
"metrics": [],
"events": [],
"createdtime": 1553258922721,
"idprefix": 0,
"isrelatedto": {},
"relatesto": {},
"info": {
  "UID": "ats!application_1553258815132_0002",
  "FROM_ID": "ats!hbase!word 
count!1553258922721!application_1553258815132_0002"
},
"configs": {},
"type": "YARN_APPLICATION",
"id": "application_1553258815132_0002"
  },
  {
"metrics": [],
"events": [],
"createdtime": 1553258825918,
"idprefix": 0,
"isrelatedto": {},
"relatesto": {},
"info": {
  "UID": "ats!application_1553258815132_0001",
  "FROM_ID": "ats!hbase!word 
count!1553258825918!application_1553258815132_0001"
},
"configs": {},
"type": "YARN_APPLICATION",
"id": "application_1553258815132_0001"
  }
]
{code}


Right Output: With correct entity type (MAPREDUCE_JOB) it accesses entity table 
for given applicationId and entityType.

{code}
[hbase@yarn-ats-3 centos]$ curl -s 
"http://yarn-ats-3:8198/ws/v2/timeline/apps/application_1553258815132_0002/entities/MAPREDUCE_JOB?user.name=hbase=hbase=word%20count;
 | jq .
[
  {
"metrics": [],
"events": [],
"createdtime": 1553258926667,
"idprefix": 0,
"isrelatedto": {},
"relatesto": {},
"info": {
  "UID": 
"ats!application_1553258815132_0002!MAPREDUCE_JOB!0!job_1553258815132_0002",
  "FROM_ID": "ats!hbase!word 
count!1553258922721!application_1553258815132_0002!MAPREDUCE_JOB!0!job_1553258815132_0002"
},
"configs": {},
"type": "MAPREDUCE_JOB",
"id": "job_1553258815132_0002"
  }
]
{code}


> GET /apps/{appid}/entities/YARN_APPLICATION accesses application table 
> instead of entity table
> --
>
> Key: YARN-9403
> 

[jira] [Updated] (YARN-9403) GET /apps/{appid}/entities/YARN_APPLICATION accesses application table instead of entity table

2019-03-22 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9403:

Description: 
"GET /apps/{appid}/entities/YARN_APPLICATION" accesses application table 
instead of entity table. As per the doc, With this API, you can query generic 
entities identified by cluster ID, application ID and per-framework entity 
type. But it also provides all the apps when entityType is set to 
YARN_APPLICATION. It should only access Entity Table through 
{{GenericEntityReader}}.

Wrong Output: With YARN_APPLICATION entityType, all applications listed from 
application tables.

{code}
[hbase@yarn-ats-3 centos]$ curl -s 
"http://yarn-ats-3:8198/ws/v2/timeline/apps/application_1553258815132_0002/entities/YARN_APPLICATION?user.name=hbase=hbase=word%20count;
 | jq .
[
  {
"metrics": [],
"events": [],
"createdtime": 1553258922721,
"idprefix": 0,
"isrelatedto": {},
"relatesto": {},
"info": {
  "UID": "ats!application_1553258815132_0002",
  "FROM_ID": "ats!hbase!word 
count!1553258922721!application_1553258815132_0002"
},
"configs": {},
"type": "YARN_APPLICATION",
"id": "application_1553258815132_0002"
  },
  {
"metrics": [],
"events": [],
"createdtime": 1553258825918,
"idprefix": 0,
"isrelatedto": {},
"relatesto": {},
"info": {
  "UID": "ats!application_1553258815132_0001",
  "FROM_ID": "ats!hbase!word 
count!1553258825918!application_1553258815132_0001"
},
"configs": {},
"type": "YARN_APPLICATION",
"id": "application_1553258815132_0001"
  }
]
{code}


Right output: with the correct entity type (MAPREDUCE_JOB), it accesses the 
entity table for the given applicationId and entityType.

{code}
[hbase@yarn-ats-3 centos]$ curl -s 
"http://yarn-ats-3:8198/ws/v2/timeline/apps/application_1553258815132_0002/entities/MAPREDUCE_JOB?user.name=hbase=hbase=word%20count;
 | jq .
[
  {
"metrics": [],
"events": [],
"createdtime": 1553258926667,
"idprefix": 0,
"isrelatedto": {},
"relatesto": {},
"info": {
  "UID": 
"ats!application_1553258815132_0002!MAPREDUCE_JOB!0!job_1553258815132_0002",
  "FROM_ID": "ats!hbase!word 
count!1553258922721!application_1553258815132_0002!MAPREDUCE_JOB!0!job_1553258815132_0002"
},
"configs": {},
"type": "MAPREDUCE_JOB",
"id": "job_1553258815132_0002"
  }
]
{code}

  was:
GET /apps/{appid}/entities/YARN_APPLICATION accesses application table instead 
of entity table. As per the doc, With this API, you can query generic entities 
identified by cluster ID, application ID and per-framework entity type. But it 
also provides all the apps when entityType is set to YARN_APPLICATION. It 
should only access Entity Table through {{GenericEntityReader}}.

Wrong Output: With YARN_APPLICATION entityType, all applications listed from 
application tables.

{code}
[hbase@yarn-ats-3 centos]$ curl -s 
"http://yarn-ats-3:8198/ws/v2/timeline/apps/application_1553258815132_0002/entities/YARN_APPLICATION?user.name=hbase=hbase=word%20count;
 | jq .
[
  {
"metrics": [],
"events": [],
"createdtime": 1553258922721,
"idprefix": 0,
"isrelatedto": {},
"relatesto": {},
"info": {
  "UID": "ats!application_1553258815132_0002",
  "FROM_ID": "ats!hbase!word 
count!1553258922721!application_1553258815132_0002"
},
"configs": {},
"type": "YARN_APPLICATION",
"id": "application_1553258815132_0002"
  },
  {
"metrics": [],
"events": [],
"createdtime": 1553258825918,
"idprefix": 0,
"isrelatedto": {},
"relatesto": {},
"info": {
  "UID": "ats!application_1553258815132_0001",
  "FROM_ID": "ats!hbase!word 
count!1553258825918!application_1553258815132_0001"
},
"configs": {},
"type": "YARN_APPLICATION",
"id": "application_1553258815132_0001"
  }
]
{code}


Right Output: With correct entity type (MAPREDUCE_JOB) it accesses entity table 
for given applicationId and entityType.

{code}
[hbase@yarn-ats-3 centos]$ curl -s 
"http://yarn-ats-3:8198/ws/v2/timeline/apps/application_1553258815132_0002/entities/MAPREDUCE_JOB?user.name=hbase=hbase=word%20count;
 | jq .
[
  {
"metrics": [],
"events": [],
"createdtime": 1553258926667,
"idprefix": 0,
"isrelatedto": {},
"relatesto": {},
"info": {
  "UID": 
"ats!application_1553258815132_0002!MAPREDUCE_JOB!0!job_1553258815132_0002",
  "FROM_ID": "ats!hbase!word 
count!1553258922721!application_1553258815132_0002!MAPREDUCE_JOB!0!job_1553258815132_0002"
},
"configs": {},
"type": "MAPREDUCE_JOB",
"id": "job_1553258815132_0002"
  }
]
{code}


> GET /apps/{appid}/entities/YARN_APPLICATION accesses application table 
> instead of entity table
> --
>
> Key: YARN-9403
> URL: 

[jira] [Created] (YARN-9403) GET /apps/{appid}/entities/YARN_APPLICATION accesses application table instead of entity table

2019-03-22 Thread Prabhu Joseph (JIRA)
Prabhu Joseph created YARN-9403:
---

 Summary: GET /apps/{appid}/entities/YARN_APPLICATION accesses 
application table instead of entity table
 Key: YARN-9403
 URL: https://issues.apache.org/jira/browse/YARN-9403
 Project: Hadoop YARN
  Issue Type: Bug
  Components: ATSv2
Affects Versions: 3.2.0
Reporter: Prabhu Joseph
Assignee: Prabhu Joseph


GET /apps/{appid}/entities/YARN_APPLICATION accesses the application table 
instead of the entity table. As per the doc, this API lets you query generic 
entities identified by cluster ID, application ID, and per-framework entity 
type, but it also returns all the apps when entityType is set to 
YARN_APPLICATION. It should only access the entity table, through 
{{GenericEntityReader}}.

Wrong output: with the YARN_APPLICATION entityType, all applications are 
listed from the application table.

{code}
[hbase@yarn-ats-3 centos]$ curl -s 
"http://yarn-ats-3:8198/ws/v2/timeline/apps/application_1553258815132_0002/entities/YARN_APPLICATION?user.name=hbase=hbase=word%20count;
 | jq .
[
  {
"metrics": [],
"events": [],
"createdtime": 1553258922721,
"idprefix": 0,
"isrelatedto": {},
"relatesto": {},
"info": {
  "UID": "ats!application_1553258815132_0002",
  "FROM_ID": "ats!hbase!word 
count!1553258922721!application_1553258815132_0002"
},
"configs": {},
"type": "YARN_APPLICATION",
"id": "application_1553258815132_0002"
  },
  {
"metrics": [],
"events": [],
"createdtime": 1553258825918,
"idprefix": 0,
"isrelatedto": {},
"relatesto": {},
"info": {
  "UID": "ats!application_1553258815132_0001",
  "FROM_ID": "ats!hbase!word 
count!1553258825918!application_1553258815132_0001"
},
"configs": {},
"type": "YARN_APPLICATION",
"id": "application_1553258815132_0001"
  }
]
{code}


Right output: with the correct entity type (MAPREDUCE_JOB), it accesses the 
entity table for the given applicationId and entityType.

{code}
[hbase@yarn-ats-3 centos]$ curl -s 
"http://yarn-ats-3:8198/ws/v2/timeline/apps/application_1553258815132_0002/entities/MAPREDUCE_JOB?user.name=hbase=hbase=word%20count;
 | jq .
[
  {
"metrics": [],
"events": [],
"createdtime": 1553258926667,
"idprefix": 0,
"isrelatedto": {},
"relatesto": {},
"info": {
  "UID": 
"ats!application_1553258815132_0002!MAPREDUCE_JOB!0!job_1553258815132_0002",
  "FROM_ID": "ats!hbase!word 
count!1553258922721!application_1553258815132_0002!MAPREDUCE_JOB!0!job_1553258815132_0002"
},
"configs": {},
"type": "MAPREDUCE_JOB",
"id": "job_1553258815132_0002"
  }
]
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8967) Change FairScheduler to use PlacementRule interface

2019-03-22 Thread Wilfred Spiegelenburg (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16799020#comment-16799020
 ] 

Wilfred Spiegelenburg commented on YARN-8967:
-

3) I need two pieces of information back from the child node when we have a 
rule, which is what was hampering the simple move. I was also hesitant because 
of the possibility of adding new child nodes, besides the parent rule, 
specifically for introducing filters on some of the rules. I think the use of 
a method for just retrieving the element is simple enough and does not hamper 
the changes I have been looking at.

5) Yes, they should have been private and final.

Updated the patch with the two changes: [^YARN-8967.011.patch]

> Change FairScheduler to use PlacementRule interface
> ---
>
> Key: YARN-8967
> URL: https://issues.apache.org/jira/browse/YARN-8967
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler, fairscheduler
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-8967.001.patch, YARN-8967.002.patch, 
> YARN-8967.003.patch, YARN-8967.004.patch, YARN-8967.005.patch, 
> YARN-8967.006.patch, YARN-8967.007.patch, YARN-8967.008.patch, 
> YARN-8967.009.patch, YARN-8967.010.patch, YARN-8967.011.patch
>
>
> The PlacementRule interface was introduced to be used by all schedulers as 
> per YARN-3635. The CapacityScheduler is using it but the FairScheduler is not 
> and is using its own rule definition.
> YARN-8948 cleans up the implementation and removes the CS references which 
> should allow this change to go through.
> This would be the first step in using one placement rule engine for both 
> schedulers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8967) Change FairScheduler to use PlacementRule interface

2019-03-22 Thread Wilfred Spiegelenburg (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wilfred Spiegelenburg updated YARN-8967:

Attachment: YARN-8967.011.patch

> Change FairScheduler to use PlacementRule interface
> ---
>
> Key: YARN-8967
> URL: https://issues.apache.org/jira/browse/YARN-8967
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler, fairscheduler
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-8967.001.patch, YARN-8967.002.patch, 
> YARN-8967.003.patch, YARN-8967.004.patch, YARN-8967.005.patch, 
> YARN-8967.006.patch, YARN-8967.007.patch, YARN-8967.008.patch, 
> YARN-8967.009.patch, YARN-8967.010.patch, YARN-8967.011.patch
>
>
> The PlacementRule interface was introduced to be used by all schedulers as 
> per YARN-3635. The CapacityScheduler is using it but the FairScheduler is not 
> and is using its own rule definition.
> YARN-8948 cleans up the implementation and removes the CS references which 
> should allow this change to go through.
> This would be the first step in using one placement rule engine for both 
> schedulers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9358) Add javadoc to new methods introduced in FSQueueMetrics with YARN-9322

2019-03-22 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798935#comment-16798935
 ] 

Hudson commented on YARN-9358:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16262 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16262/])
YARN-9358. Add javadoc to new methods introduced in FSQueueMetrics with 
(templedf: rev ce5eb9cb2e04baf2e94fdc7dcdb57d0404cf6e76)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSQueueMetrics.java


> Add javadoc to new methods introduced in FSQueueMetrics with YARN-9322
> --
>
> Key: YARN-9358
> URL: https://issues.apache.org/jira/browse/YARN-9358
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Zoltan Siegl
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9358.001.patch, YARN-9358.002.patch, 
> YARN-9358.003.patch, YARN-9358.004.patch, YARN-9358.005.patch
>
>
> This is a follow-up for YARN-9322, covering javadoc changes as discussed with 
> [~templedf] earlier.
> As discussed with Daniel, we need to add javadoc for the new methods 
> introduced with YARN-9322 and also for the modified methods. 
> The javadoc should refer to the fact that Resource Types are also included 
> in the Resource object for both the getters and the setters.
> The methods are: 
> 1. getFairShare / setFairShare
> 2. getSteadyFairShare / setSteadyFairShare
> 3. getMinShare / setMinShare
> 4. getMaxShare / setMaxShare
> 5. getMaxAMShare / setMaxAMShare
> 6. getAMResourceUsage / setAMResourceUsage
> Moreover, a javadoc could be added to the constructor of FSQueueMetrics as 
> well.
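
As an illustration only, here is a sketch of the kind of javadoc intended for 
these getters; the wording and the method body are assumptions, not the 
committed change:

{code}
/**
 * Returns the fair share of this queue. Note that the returned
 * {@link Resource} also carries any custom Resource Types that are
 * configured, in addition to memory and vcores.
 *
 * @return the fair share of the queue, including all Resource Types
 */
public Resource getFairShare() {
  return fairShare;  // hypothetical backing field
}
{code}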



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8967) Change FairScheduler to use PlacementRule interface

2019-03-22 Thread Yufei Gu (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798814#comment-16798814
 ] 

Yufei Gu commented on YARN-8967:


Hi [~wilfreds], thanks for the patch.
3) Yeah, the XML DOM API looks a little bit silly here. getChildNodes() should 
at least provide an option to return only elements rather than children that 
mix elements and text nodes. I believe some newer libraries solve this issue. 
We could hide the second loop in a method getParentNode() and do something 
like this:
{code}
Element parentNode = getParentNode(node.getChildNodes());
PlacementRule parentRule = getParentRule(parentNode, fs);
{code}
4) That's nice.
5) I do think the current solution is better; let's ignore this checkstyle 
warning. Just one concern: can we make both members of the RuleMap class 
"final", so that no code can change their values except the constructor?
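
For reference, a minimal sketch of such a getParentNode() helper using the 
standard org.w3c.dom API; the helper name follows the snippet above, and the 
body is an assumption rather than the actual patch:

{code}
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public final class DomUtils {
  // Hypothetical helper: return the first Element child, skipping the
  // text and comment nodes that getChildNodes() mixes in.
  static Element getParentNode(NodeList children) {
    for (int i = 0; i < children.getLength(); i++) {
      Node child = children.item(i);
      if (child.getNodeType() == Node.ELEMENT_NODE) {
        return (Element) child;
      }
    }
    return null; // no element child found
  }
}
{code}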

> Change FairScheduler to use PlacementRule interface
> ---
>
> Key: YARN-8967
> URL: https://issues.apache.org/jira/browse/YARN-8967
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler, fairscheduler
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-8967.001.patch, YARN-8967.002.patch, 
> YARN-8967.003.patch, YARN-8967.004.patch, YARN-8967.005.patch, 
> YARN-8967.006.patch, YARN-8967.007.patch, YARN-8967.008.patch, 
> YARN-8967.009.patch, YARN-8967.010.patch
>
>
> The PlacementRule interface was introduced to be used by all schedulers as 
> per YARN-3635. The CapacityScheduler is using it but the FairScheduler is not 
> and is using its own rule definition.
> YARN-8948 cleans up the implementation and removes the CS references which 
> should allow this change to go through.
> This would be the first step in using one placement rule engine for both 
> schedulers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9366) Make logs in TimelineClient implementation specific to application

2019-03-22 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798782#comment-16798782
 ] 

Hadoop QA commented on YARN-9366:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
44s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9366 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961694/YARN-9366.v1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 423cd4f76940 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 90afc9a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23784/testReport/ |
| Max. process+thread count | 447 (vs. ulimit of 1) |
| modules | C: 

[jira] [Resolved] (YARN-6712) Moving logging APIs over to slf4j in hadoop-yarn

2019-03-22 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved YARN-6712.
-
Resolution: Done

All the sub-tasks are resolved. Closing.
Thank you all who contributed to this big task!

> Moving logging APIs over to slf4j in hadoop-yarn
> 
>
> Key: YARN-6712
> URL: https://issues.apache.org/jira/browse/YARN-6712
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Major
> Attachments: YARN-6712.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9382) Publish container killed, paused and resumed events to ATSv2.

2019-03-22 Thread Vrushali C (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798744#comment-16798744
 ] 

Vrushali C commented on YARN-9382:
--

Thanks Abhishek! Patch v1 LGTM.

Could you fix the checkstyle warnings?

{code}
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/timelineservice/NMTimelinePublisher.java:277:
entity.setIdPrefix(TimelineServiceHelper.invertLong(containerStartTime));:
Line is longer than 80 characters (found 81). [LineLength]

./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/timelineservice/TestNMTimelinePublisher.java:27:
import java.util.HashMap;:8: Unused import - java.util.HashMap. [UnusedImports]
{code}

Also, I was wondering if we could call it 
"publishContainerLifeCycleGenericEvent" instead of 
"publishContainerGenericEvent"? Or perhaps Generic Event is fine too.

One other question I had: should the KILLED event carry anything that the 
FINISHED event (publishContainerFinishedEvent) has, like the diagnostics, 
exit info, etc.?



> Publish container killed, paused and resumed events to ATSv2.
> -
>
> Key: YARN-9382
> URL: https://issues.apache.org/jira/browse/YARN-9382
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-9382.001.patch
>
>
> There are some events missing in the container lifecycle. We need to add 
> support for publishing events when a container gets killed, paused, or 
> resumed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9366) Make logs in TimelineClient implementation specific to application

2019-03-22 Thread Vrushali C (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798740#comment-16798740
 ] 

Vrushali C commented on YARN-9366:
--

Thanks [~prabham] , Patch v1 LGTM. 

Will wait for jenkins for overall checks and then commit. 

> Make logs in TimelineClient implementation specific to application 
> ---
>
> Key: YARN-9366
> URL: https://issues.apache.org/jira/browse/YARN-9366
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: ATSv2
>Reporter: Prabha Manepalli
>Assignee: Prabha Manepalli
>Priority: Minor
> Attachments: YARN-9366.v1.patch
>
>
> For every container launched on an NM node, a timeline client is created to 
> publish entities to the corresponding application's timeline collector, so 
> multiple timeline clients run at the same time. The current timeline client 
> logs are insufficient to isolate publishing problems related to one 
> application. Hence, this Jira is to improve the logs in TimelineV2ClientImpl 
> (see the sketch below).
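
As a rough illustration of the idea (not the actual patch; the appId variable 
is assumed to be available in TimelineV2ClientImpl), each log line can carry 
the application id so that messages from concurrent clients can be told apart:

{code}
// Hypothetical sketch: tag every log message with the application id
// so that logs from concurrent timeline clients can be distinguished.
LOG.warn("TimelineClient for application " + appId
    + ": failed to publish entities, will retry", e);
{code}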



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9395) Short Names for repeated Hbase Column names

2019-03-22 Thread Vrushali C (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798738#comment-16798738
 ] 

Vrushali C commented on YARN-9395:
--

Very good jira, Prabhu. I agree that the majority of the counter names are 
going to be repeated over and over again across jobs. Please do give it some 
thought and let's discuss potential solutions (a rough sketch follows below).

With Phoenix, since tables have a predefined schema, mapping a column name to 
a number is an option.
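
As a rough sketch of the direction being discussed (all names here are 
hypothetical, not an existing ATSv2 API), a preconfigured mapping could 
translate long column qualifiers to short ids at write time and back at read 
time, e.g. mapping 
m:REDUCE:org.apache.hadoop.mapreduce.TaskCounter:PHYSICAL_MEMORY_BYTES to m:42:

{code}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a preconfigured column-name mapping, in the
// spirit of Phoenix column encoding. Entries would come from a mapping
// file supplied by the operator upfront.
public class ColumnNameMapping {
  private final Map<String, String> longToShort = new HashMap<>();
  private final Map<String, String> shortToLong = new HashMap<>();

  public void add(String longName, String shortId) {
    longToShort.put(longName, shortId);
    shortToLong.put(shortId, longName);
  }

  // Used at write time; falls back to the long name when unmapped.
  public String encode(String longName) {
    return longToShort.getOrDefault(longName, longName);
  }

  // Used at read time to restore the original qualifier.
  public String decode(String columnQualifier) {
    return shortToLong.getOrDefault(columnQualifier, columnQualifier);
  }
}
{code}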


> Short Names for repeated Hbase Column names
> ---
>
> Key: YARN-9395
> URL: https://issues.apache.org/jira/browse/YARN-9395
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: ATSv2
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>
> Currently the ATS HBase tables store the config name / metric name as column 
> names, which are long. This repeats for all the rows and consumes a lot of 
> storage space. We have seen customers' HBase tables consume more than 1.5 TB 
> within a few days.
> {code}
> Example Configs:
> c:yarn.timeline-service.webapp.rest-csrf.methods-to-ignore
> c:yarn.timeline-service.entity-group-fs-store.active-dir
> c:yarn.scheduler.configuration.zk-store.parent-path
> Example Metrics:
> m:REDUCE:org.apache.hadoop.mapreduce.FileSystemCounter:HDFS_READ_OPS
> m:REDUCE:org.apache.hadoop.mapreduce.TaskCounter:COMBINE_INPUT_RECORDS
> m:REDUCE:org.apache.hadoop.mapreduce.TaskCounter:PHYSICAL_MEMORY_BYTES
> {code}
> We need to use short column names as per HBase best practice - 
> http://moi.vonos.net/bigdata/avro-hbase-colnames/ But the challenge is that 
> ATS does not know the column names until the rows get inserted. We can 
> provide a mapping file that maps the repeated configs / metrics / info from 
> different applications to unique numbers, which customers can configure 
> upfront to save storage space. This is similar to what Phoenix does:
> https://blogs.apache.org/phoenix/entry/column-mapping-and-immutable-data
> https://phoenix.apache.org/columnencoding.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org