[jira] [Commented] (YARN-9268) General improvements in FpgaDevice

2019-03-21 Thread Devaraj K (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798723#comment-16798723
 ] 

Devaraj K commented on YARN-9268:
-

Thanks [~pbacsko] for quickly updating the patch.

* FpgaResourceAllocator.java
** {{aliasDevName}} is used in {{hashCode()}} but not in {{equals()}}.
** Some fields are not used in {{hashCode()}} and {{equals()}}; don't we need 
to include them here? (See the sketch after this list.)
** Can you correct the typo here?
{code}
//key is requetor, aka. container ID
{code}

* TestFpgaResourceHandler.java
** This change seems unnecessary; the same applies to all occurrences in this 
test class.

{code}
-  for (FpgaDevice device : allowedDevices) {
+  for (FpgaResourceAllocator.FpgaDevice device : allowedDevices) {
{code}
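
A minimal sketch of a consistent pair, assuming the field names mentioned in 
the comments above ({{major}}, {{minor}}, {{aliasDevName}}); equal objects must 
produce equal hash codes, so both methods have to look at the same fields:
{code}
import java.util.Objects;

public class FpgaDevice {
  private final int major;
  private final int minor;
  private final String aliasDevName;

  public FpgaDevice(int major, int minor, String aliasDevName) {
    this.major = major;
    this.minor = minor;
    this.aliasDevName = aliasDevName;
  }

  @Override
  public boolean equals(Object obj) {
    if (this == obj) {
      return true;
    }
    if (obj == null || getClass() != obj.getClass()) {
      return false;
    }
    FpgaDevice other = (FpgaDevice) obj;
    // Compare exactly the fields that hashCode() uses, including aliasDevName.
    return major == other.major
        && minor == other.minor
        && Objects.equals(aliasDevName, other.aliasDevName);
  }

  @Override
  public int hashCode() {
    // Same fields as equals(), so equal devices hash identically.
    return Objects.hash(major, minor, aliasDevName);
  }
}
{code}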

> General improvements in FpgaDevice
> --
>
> Key: YARN-9268
> URL: https://issues.apache.org/jira/browse/YARN-9268
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9268-001.patch, YARN-9268-002.patch, 
> YARN-9268-003.patch, YARN-9268-004.patch, YARN-9268-005.patch
>
>
> Need to fix the following in the class {{FpgaDevice}}:
>  * It implements {{Comparable}}, but returns 0 in every case. There is no 
> natural ordering among FPGA devices; perhaps "acl0" comes before "acl1", but 
> this seems too forced and unnecessary. We think this class should not 
> implement {{Comparable}} at all, at least not like that.
>  * Stores unnecessary fields: devName, busNum, temperature, power usage. For 
> one, these are never needed in the code. Secondly, temperature and power usage 
> change constantly. It's pointless to store these in this POJO.
>  * {{serialVersionUID}} is 1L - let's generate a proper number for this.
>  * Use {{int}} instead of {{Integer}} - don't allow nulls. If major/minor 
> uniquely identify the card, then let's demand them in the constructor and 
> not store Integers that can be null.
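
A minimal sketch of the slimmed-down POJO these points suggest; the field and 
constructor names are assumptions for illustration, not the actual patch:
{code}
import java.io.Serializable;

public class FpgaDevice implements Serializable {
  // A generated value instead of the default 1L (this number is illustrative).
  private static final long serialVersionUID = -7270574410431426955L;

  private final String type;          // assumed field
  private final int major;            // primitive int: null is not a legal value
  private final int minor;
  private final String aliasDevName;

  public FpgaDevice(String type, int major, int minor, String aliasDevName) {
    this.type = type;
    this.major = major;
    this.minor = minor;
    this.aliasDevName = aliasDevName;
  }
  // Deliberately does not implement Comparable: there is no natural ordering
  // among FPGA devices.
}
{code}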



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9386) destroying yarn-service is allowed even though running state

2019-03-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798706#comment-16798706
 ] 

Hadoop QA commented on YARN-9386:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 10s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api:
 The patch generated 3 new + 5 unchanged - 0 fixed = 8 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
49s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9386 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12963352/YARN-9386.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 36d353d0cf68 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 90afc9a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/23783/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-api.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23783/testReport/ |
| Max. process+thread count | 559 (vs. ulimit of 1) |
| modules | C: 

[jira] [Commented] (YARN-7129) Application Catalog for YARN applications

2019-03-21 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798680#comment-16798680
 ] 

Eric Yang commented on YARN-7129:
-

[~jeagles] The Solr dependencies run in the application catalog Docker 
container.  No Solr dependencies are added to any Hadoop services, hence no 
interference with customers' applications.

{quote}In addition, I see this patch adds another class named YarnClient. Is 
there a way to name this class differently to remove this conflation?{quote}

This is not the first time that YARN uses the same class name in different 
packages; the two YarnClient classes are already distinguished by their package 
names.  This class runs in an isolated JVM within the YARN framework.  The 
chance of other people getting it wrong is low because the other YarnClient is 
an abstract, private API.  I highly doubt YarnClient is going to cause a 
problem, but I can rename it YarnServiceClient to make the distinction.
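
For illustration only (the catalog class's package below is hypothetical), 
callers can always disambiguate two same-named classes through fully qualified 
names:
{code}
public class Disambiguation {
  public static void main(String[] args) {
    // The existing client API class, referenced through its package:
    org.apache.hadoop.yarn.client.api.YarnClient rmClient =
        org.apache.hadoop.yarn.client.api.YarnClient.createYarnClient();
    System.out.println(rmClient.getClass().getName());
    // The catalog's class would be referenced the same way
    // (this package name is assumed for illustration):
    // org.apache.hadoop.yarn.appcatalog.application.YarnClient catalogClient;
  }
}
{code}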

> Application Catalog for YARN applications
> -
>
> Key: YARN-7129
> URL: https://issues.apache.org/jira/browse/YARN-7129
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: applications
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN Appstore.pdf, YARN-7129.001.patch, 
> YARN-7129.002.patch, YARN-7129.003.patch, YARN-7129.004.patch, 
> YARN-7129.005.patch, YARN-7129.006.patch, YARN-7129.007.patch, 
> YARN-7129.008.patch, YARN-7129.009.patch, YARN-7129.010.patch, 
> YARN-7129.011.patch, YARN-7129.012.patch, YARN-7129.013.patch, 
> YARN-7129.014.patch, YARN-7129.015.patch, YARN-7129.016.patch, 
> YARN-7129.017.patch, YARN-7129.018.patch, YARN-7129.019.patch, 
> YARN-7129.020.patch, YARN-7129.021.patch, YARN-7129.022.patch, 
> YARN-7129.023.patch, YARN-7129.024.patch, YARN-7129.025.patch, 
> YARN-7129.026.patch, YARN-7129.027.patch, YARN-7129.028.patch, 
> YARN-7129.029.patch, YARN-7129.030.patch, YARN-7129.031.patch, 
> YARN-7129.032.patch
>
>
> YARN native services provides a web services API to improve the usability of 
> application deployment on Hadoop using a collection of Docker images.  It would 
> be nice to have an application catalog system which provides an editorial and 
> search interface for YARN applications.  This would improve the usability of 
> YARN for managing the life cycle of applications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9386) destroying yarn-service is allowed even though running state

2019-03-21 Thread kyungwan nam (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798650#comment-16798650
 ] 

kyungwan nam commented on YARN-9386:


Attached a new patch, which fixes the test code.

> destroying yarn-service is allowed even though running state
> 
>
> Key: YARN-9386
> URL: https://issues.apache.org/jira/browse/YARN-9386
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: kyungwan nam
>Assignee: kyungwan nam
>Priority: Major
> Attachments: YARN-9386.001.patch, YARN-9386.002.patch
>
>
> It looks very dangerous to destroy a running app. It should not be allowed.
> {code}
> [yarn-ats@test ~]$ yarn app -list
> 19/03/12 17:48:49 INFO client.RMProxy: Connecting to ResourceManager at 
> test1.com/10.1.1.11:8050
> 19/03/12 17:48:50 INFO client.AHSProxy: Connecting to Application History 
> server at test1.com/10.1.1.101:10200
> Total number of applications (application-types: [], states: [SUBMITTED, 
> ACCEPTED, RUNNING] and tags: []):3
> Application-Id                  Application-Name  Application-Type  User       Queue    State    Final-State  Progress  Tracking-URL
> application_1551250841677_0003  fb                yarn-service      ambari-qa  default  RUNNING  UNDEFINED    100%      N/A
> application_1552379723611_0002  fb1               yarn-service      yarn-ats   default  RUNNING  UNDEFINED    100%      N/A
> application_1550801435420_0001  ats-hbase         yarn-service      yarn-ats   default  RUNNING  UNDEFINED    100%      N/A
> [yarn-ats@test ~]$ yarn app -destroy fb1
> 19/03/12 17:49:02 INFO client.RMProxy: Connecting to ResourceManager at 
> test1.com/10.1.1.11:8050
> 19/03/12 17:49:02 INFO client.AHSProxy: Connecting to Application History 
> server at test1.com/10.1.1.101:10200
> 19/03/12 17:49:02 INFO client.RMProxy: Connecting to ResourceManager at 
> test1.com/10.1.1.11:8050
> 19/03/12 17:49:02 INFO client.AHSProxy: Connecting to Application History 
> server at test1.com/10.1.1.101:10200
> 19/03/12 17:49:02 INFO util.log: Logging initialized @1637ms
> 19/03/12 17:49:07 INFO client.ApiServiceClient: Successfully destroyed 
> service fb1
> {code}
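
A hedged sketch of the kind of guard this fix implies; every name here is 
assumed for illustration, not taken from the patch:
{code}
// Hypothetical guard: refuse to destroy a service that is still running,
// forcing an explicit stop first. All names are illustrative.
public class DestroyGuard {
  enum ServiceState { RUNNING, STOPPED, FAILED }

  static void destroyService(String name, ServiceState state) {
    if (state == ServiceState.RUNNING) {
      throw new IllegalStateException(
          "Service " + name + " is RUNNING; stop it before destroying it.");
    }
    // ... proceed with removing the service definition and its data ...
    System.out.println("Destroyed service " + name);
  }
}
{code}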



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9386) destroying yarn-service is allowed even though running state

2019-03-21 Thread kyungwan nam (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kyungwan nam updated YARN-9386:
---
Attachment: YARN-9386.002.patch

> destroying yarn-service is allowed even though running state
> 
>
> Key: YARN-9386
> URL: https://issues.apache.org/jira/browse/YARN-9386
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: kyungwan nam
>Assignee: kyungwan nam
>Priority: Major
> Attachments: YARN-9386.001.patch, YARN-9386.002.patch
>
>
> It looks very dangerous to destroy a running app. It should not be allowed.
> {code}
> [yarn-ats@test ~]$ yarn app -list
> 19/03/12 17:48:49 INFO client.RMProxy: Connecting to ResourceManager at 
> test1.com/10.1.1.11:8050
> 19/03/12 17:48:50 INFO client.AHSProxy: Connecting to Application History 
> server at test1.com/10.1.1.101:10200
> Total number of applications (application-types: [], states: [SUBMITTED, 
> ACCEPTED, RUNNING] and tags: []):3
> Application-Id                  Application-Name  Application-Type  User       Queue    State    Final-State  Progress  Tracking-URL
> application_1551250841677_0003  fb                yarn-service      ambari-qa  default  RUNNING  UNDEFINED    100%      N/A
> application_1552379723611_0002  fb1               yarn-service      yarn-ats   default  RUNNING  UNDEFINED    100%      N/A
> application_1550801435420_0001  ats-hbase         yarn-service      yarn-ats   default  RUNNING  UNDEFINED    100%      N/A
> [yarn-ats@test ~]$ yarn app -destroy fb1
> 19/03/12 17:49:02 INFO client.RMProxy: Connecting to ResourceManager at 
> test1.com/10.1.1.11:8050
> 19/03/12 17:49:02 INFO client.AHSProxy: Connecting to Application History 
> server at test1.com/10.1.1.101:10200
> 19/03/12 17:49:02 INFO client.RMProxy: Connecting to ResourceManager at 
> test1.com/10.1.1.11:8050
> 19/03/12 17:49:02 INFO client.AHSProxy: Connecting to Application History 
> server at test1.com/10.1.1.101:10200
> 19/03/12 17:49:02 INFO util.log: Logging initialized @1637ms
> 19/03/12 17:49:07 INFO client.ApiServiceClient: Successfully destroyed 
> service fb1
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7129) Application Catalog for YARN applications

2019-03-21 Thread Jonathan Eagles (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798647#comment-16798647
 ] 

Jonathan Eagles commented on YARN-7129:
---

[~eyang], excuse me since I'm not exactly sure, but does this patch add new 
dependencies on Apache Solr and, transitively, Lucene? If so, does this impact 
customers who already have Solr or Lucene dependencies?

In addition, I see this patch adds another class named YarnClient. Is there a 
way to name this class differently to remove this conflation?

My goal isn't to review this code, since that has already been done. But the 
whole community needs to support it, myself included, and to understand which 
parts might impact its users.

> Application Catalog for YARN applications
> -
>
> Key: YARN-7129
> URL: https://issues.apache.org/jira/browse/YARN-7129
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: applications
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN Appstore.pdf, YARN-7129.001.patch, 
> YARN-7129.002.patch, YARN-7129.003.patch, YARN-7129.004.patch, 
> YARN-7129.005.patch, YARN-7129.006.patch, YARN-7129.007.patch, 
> YARN-7129.008.patch, YARN-7129.009.patch, YARN-7129.010.patch, 
> YARN-7129.011.patch, YARN-7129.012.patch, YARN-7129.013.patch, 
> YARN-7129.014.patch, YARN-7129.015.patch, YARN-7129.016.patch, 
> YARN-7129.017.patch, YARN-7129.018.patch, YARN-7129.019.patch, 
> YARN-7129.020.patch, YARN-7129.021.patch, YARN-7129.022.patch, 
> YARN-7129.023.patch, YARN-7129.024.patch, YARN-7129.025.patch, 
> YARN-7129.026.patch, YARN-7129.027.patch, YARN-7129.028.patch, 
> YARN-7129.029.patch, YARN-7129.030.patch, YARN-7129.031.patch, 
> YARN-7129.032.patch
>
>
> YARN native services provides a web services API to improve the usability of 
> application deployment on Hadoop using a collection of Docker images.  It would 
> be nice to have an application catalog system which provides an editorial and 
> search interface for YARN applications.  This would improve the usability of 
> YARN for managing the life cycle of applications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8967) Change FairScheduler to use PlacementRule interface

2019-03-21 Thread Wilfred Spiegelenburg (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798645#comment-16798645
 ] 

Wilfred Spiegelenburg commented on YARN-8967:
-

The JUnit test failure is not related.
The checkstyle issue comes from this patch, but it makes the internal class 
RuleMap so much simpler that I propose we leave it as it is. [~yufeigu]: the 
checkstyle issue was why I introduced the getters etc., which is the basis for 
your earlier comment number 5).

> Change FairScheduler to use PlacementRule interface
> ---
>
> Key: YARN-8967
> URL: https://issues.apache.org/jira/browse/YARN-8967
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler, fairscheduler
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-8967.001.patch, YARN-8967.002.patch, 
> YARN-8967.003.patch, YARN-8967.004.patch, YARN-8967.005.patch, 
> YARN-8967.006.patch, YARN-8967.007.patch, YARN-8967.008.patch, 
> YARN-8967.009.patch, YARN-8967.010.patch
>
>
> The PlacementRule interface was introduced to be used by all schedulers as 
> per YARN-3635. The CapacityScheduler is using it but the FairScheduler is not 
> and is using its own rule definition.
> YARN-8948 cleans up the implementation and removes the CS references which 
> should allow this change to go through.
> This would be the first step in using one placement rule engine for both 
> schedulers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-9401) Fix `yarn version` print the version info is the same as `hadoop version`

2019-03-21 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797946#comment-16797946
 ] 

Wanqiang Ji edited comment on YARN-9401 at 3/22/19 1:56 AM:


Hi, [~eyang]. Can you help to review this?


was (Author: jiwq):
Hi, [~eyang]

Can you help to review this?

> Fix `yarn version` print the version info is the same as `hadoop version`
> -
>
> Key: YARN-9401
> URL: https://issues.apache.org/jira/browse/YARN-9401
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Minor
> Attachments: YARN-9401.001.patch, YARN-9401.002.patch
>
>
> It's caused by the `yarn` shell script using `org.apache.hadoop.util.VersionInfo` 
> instead of `org.apache.hadoop.yarn.util.YarnVersionInfo` as the 
> `HADOOP_CLASSNAME` by mistake.
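
A minimal sketch of why the two classes matter here; both exist in Hadoop, and 
each reports the version of its own build:
{code}
import org.apache.hadoop.util.VersionInfo;
import org.apache.hadoop.yarn.util.YarnVersionInfo;

public class VersionCheck {
  public static void main(String[] args) {
    // `hadoop version` prints the common Hadoop build information:
    System.out.println("Hadoop " + VersionInfo.getVersion());
    // `yarn version` should print the YARN build information instead:
    System.out.println("YARN " + YarnVersionInfo.getVersion());
  }
}
{code}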



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9292) Implement logic to keep docker image consistent in application that uses :latest tag

2019-03-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798627#comment-16798627
 ] 

Hadoop QA commented on YARN-9292:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
37s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 29 unchanged - 3 fixed = 29 total (was 32) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 
58s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
52s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}122m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9292 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12963337/YARN-9292.005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux e3b7efb1e845 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 90afc9a |
| maven | 

[jira] [Comment Edited] (YARN-9272) Backport YARN-7738 for refreshing max allocation for multiple resource types

2019-03-21 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798597#comment-16798597
 ] 

Jonathan Hung edited comment on YARN-9272 at 3/22/19 1:02 AM:
--

* TestNodeLabelContainerAllocation also fails locally pre-patch.
 * TestCapacityOverTimePolicy passes locally.
 * TestOpportunisticContainerAllocatorAMService seems flaky; it passes and 
fails intermittently locally.
 * TestReservationSystemWithRMHA failures look related.


was (Author: jhung):
* TestNodeLabelContainerAllocation also fails locally pre-patch.
 * TestCapacityOverTimePolicy passes locally.
 * TestOpportunisticContainerAllocatorAMService and 
TestReservationSystemWithRMHA failures look related.

> Backport YARN-7738 for refreshing max allocation for multiple resource types
> 
>
> Key: YARN-9272
> URL: https://issues.apache.org/jira/browse/YARN-9272
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-9272-YARN-8200.branch3.001.patch
>
>
> Need to port to YARN-8200.branch3 (for branch-3.0) and YARN-8200 (for 
> branch-2)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9272) Backport YARN-7738 for refreshing max allocation for multiple resource types

2019-03-21 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798597#comment-16798597
 ] 

Jonathan Hung commented on YARN-9272:
-

* TestNodeLabelContainerAllocation also fails locally pre-patch.
 * TestCapacityOverTimePolicy passes locally.
 * TestOpportunisticContainerAllocatorAMService and 
TestReservationSystemWithRMHA failures look related.

> Backport YARN-7738 for refreshing max allocation for multiple resource types
> 
>
> Key: YARN-9272
> URL: https://issues.apache.org/jira/browse/YARN-9272
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-9272-YARN-8200.branch3.001.patch
>
>
> Need to port to YARN-8200.branch3 (for branch-3.0) and YARN-8200 (for 
> branch-2)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7129) Application Catalog for YARN applications

2019-03-21 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798582#comment-16798582
 ] 

Steve Loughran commented on YARN-7129:
--

* Versions of artifacts in the webapp pom should be taken from the Hadoop 
project uber-pom, so they are maintained in sync. Is Mockito in that central 
pom already, for example?
* The same goes for all the Maven plugin versions. If they are new plugins, add 
the property to the hadoop-project pom and then reference it (see the sketch 
below).
* I'm not reviewing the code; trusting you all there.
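
A sketch of the pattern being asked for, using a hypothetical plugin as the 
example; the property lives in the hadoop-project uber-pom and the module pom 
references it instead of hard-coding a version:
{code}
<!-- In hadoop-project/pom.xml (the uber-pom): declare the version once. -->
<properties>
  <exec-maven-plugin.version>1.6.0</exec-maven-plugin.version>
</properties>

<!-- In the webapp module pom: reference the property, never a literal. -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <version>${exec-maven-plugin.version}</version>
</plugin>
{code}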

Is there a way to have an example which doesn't add large amounts of binary 
data? It's going to make our repo even bigger, slow down switching across 
branches, and so on - stuff I do regularly. Git isn't a place to keep binaries.

> Application Catalog for YARN applications
> -
>
> Key: YARN-7129
> URL: https://issues.apache.org/jira/browse/YARN-7129
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: applications
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN Appstore.pdf, YARN-7129.001.patch, 
> YARN-7129.002.patch, YARN-7129.003.patch, YARN-7129.004.patch, 
> YARN-7129.005.patch, YARN-7129.006.patch, YARN-7129.007.patch, 
> YARN-7129.008.patch, YARN-7129.009.patch, YARN-7129.010.patch, 
> YARN-7129.011.patch, YARN-7129.012.patch, YARN-7129.013.patch, 
> YARN-7129.014.patch, YARN-7129.015.patch, YARN-7129.016.patch, 
> YARN-7129.017.patch, YARN-7129.018.patch, YARN-7129.019.patch, 
> YARN-7129.020.patch, YARN-7129.021.patch, YARN-7129.022.patch, 
> YARN-7129.023.patch, YARN-7129.024.patch, YARN-7129.025.patch, 
> YARN-7129.026.patch, YARN-7129.027.patch, YARN-7129.028.patch, 
> YARN-7129.029.patch, YARN-7129.030.patch, YARN-7129.031.patch, 
> YARN-7129.032.patch
>
>
> YARN native services provides a web services API to improve the usability of 
> application deployment on Hadoop using a collection of Docker images.  It would 
> be nice to have an application catalog system which provides an editorial and 
> search interface for YARN applications.  This would improve the usability of 
> YARN for managing the life cycle of applications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9292) Implement logic to keep docker image consistent in application that uses :latest tag

2019-03-21 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798561#comment-16798561
 ] 

Eric Yang commented on YARN-9292:
-

If yarn.nodemanager.runtime.linux.docker.image-update is set to false:

Patch 4 will fail the application if the Docker image cannot be resolved on 
the application master node.
Patch 5 will allow the application to proceed with the :latest tag without 
synchronizing it.

Patch 4's behavior is more restrictive, whereas patch 5 is a little more 
flexible.  Patch 4's behavior seems more correct to me, but I put the patch 5 
behavior out there for feedback.
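
For reference, the toggle being discussed is the configuration property named 
above; a yarn-site.xml sketch of disabling it:
{code}
<property>
  <name>yarn.nodemanager.runtime.linux.docker.image-update</name>
  <value>false</value>
</property>
{code}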

> Implement logic to keep docker image consistent in application that uses 
> :latest tag
> 
>
> Key: YARN-9292
> URL: https://issues.apache.org/jira/browse/YARN-9292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9292.001.patch, YARN-9292.002.patch, 
> YARN-9292.003.patch, YARN-9292.004.patch, YARN-9292.005.patch
>
>
> A Docker image with the latest tag can run in a YARN cluster without any 
> validation in the node managers. If an image with the latest tag is changed 
> during container launch, it might produce inconsistent results between nodes. 
> This surfaced toward the end of development for YARN-9184, which keeps the 
> Docker image consistent within a job. One of the ideas to keep the :latest 
> tag consistent for a job is to use the docker image command to figure out the 
> image ID and propagate that ID to the rest of the container requests. There 
> are some challenges to overcome:
>  # The latest tag does not exist on the node where the first container 
> starts. The first container will need to download the latest image and find 
> its image ID. This can introduce lag time before other containers start.
>  # If the image ID is used to start other containers, container-executor may 
> have problems checking whether the image comes from a trusted source. Both 
> the image name and ID must be supplied through the .cmd file to 
> container-executor. However, an attacker can supply an incorrect image ID 
> and defeat container-executor's security checks.
> If we can overcome those challenges, it may be possible to keep the Docker 
> image consistent within one application.
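
A sketch of the first idea above, resolving the image ID behind a :latest tag 
so it can be pinned for the remaining containers; the class name is 
hypothetical, and the docker invocation is a standard CLI call:
{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class ImageIdResolver {
  // Returns the sha256 image ID currently behind a tag such as "centos:latest".
  public static String resolve(String image) throws Exception {
    Process p = new ProcessBuilder("docker", "inspect", "--format={{.Id}}", image)
        .redirectErrorStream(true)
        .start();
    try (BufferedReader r = new BufferedReader(
        new InputStreamReader(p.getInputStream(), StandardCharsets.UTF_8))) {
      String id = r.readLine();
      if (p.waitFor() != 0 || id == null) {
        throw new IllegalStateException("docker inspect failed for " + image);
      }
      return id.trim();
    }
  }
}
{code}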



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9292) Implement logic to keep docker image consistent in application that uses :latest tag

2019-03-21 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-9292:

Attachment: YARN-9292.005.patch

> Implement logic to keep docker image consistent in application that uses 
> :latest tag
> 
>
> Key: YARN-9292
> URL: https://issues.apache.org/jira/browse/YARN-9292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9292.001.patch, YARN-9292.002.patch, 
> YARN-9292.003.patch, YARN-9292.004.patch, YARN-9292.005.patch
>
>
> A Docker image with the latest tag can run in a YARN cluster without any 
> validation in the node managers. If an image with the latest tag is changed 
> during container launch, it might produce inconsistent results between nodes. 
> This surfaced toward the end of development for YARN-9184, which keeps the 
> Docker image consistent within a job. One of the ideas to keep the :latest 
> tag consistent for a job is to use the docker image command to figure out the 
> image ID and propagate that ID to the rest of the container requests. There 
> are some challenges to overcome:
>  # The latest tag does not exist on the node where the first container 
> starts. The first container will need to download the latest image and find 
> its image ID. This can introduce lag time before other containers start.
>  # If the image ID is used to start other containers, container-executor may 
> have problems checking whether the image comes from a trusted source. Both 
> the image name and ID must be supplied through the .cmd file to 
> container-executor. However, an attacker can supply an incorrect image ID 
> and defeat container-executor's security checks.
> If we can overcome those challenges, it may be possible to keep the Docker 
> image consistent within one application.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9272) Backport YARN-7738 for refreshing max allocation for multiple resource types

2019-03-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798550#comment-16798550
 ] 

Hadoop QA commented on YARN-9272:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-8200.branch3 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
55s{color} | {color:green} YARN-8200.branch3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
5s{color} | {color:green} YARN-8200.branch3 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
23s{color} | {color:green} YARN-8200.branch3 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} YARN-8200.branch3 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
42s{color} | {color:green} YARN-8200.branch3 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} YARN-8200.branch3 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 19s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 17 new + 432 unchanged - 0 fixed = 449 total (was 432) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m  4s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}156m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
|   | 
hadoop.yarn.server.resourcemanager.TestOpportunisticContainerAllocatorAMService 
|
|   | hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA |
|   | hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:e402791 |
| JIRA Issue | YARN-9272 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12963315/YARN-9272-YARN-8200.branch3.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  

[jira] [Commented] (YARN-7129) Application Catalog for YARN applications

2019-03-21 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798529#comment-16798529
 ] 

Eric Yang commented on YARN-7129:
-

The failed unit test is not related to patch 032.  [~ebadger], can you review 
whether this unblocks your development flow?

> Application Catalog for YARN applications
> -
>
> Key: YARN-7129
> URL: https://issues.apache.org/jira/browse/YARN-7129
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: applications
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN Appstore.pdf, YARN-7129.001.patch, 
> YARN-7129.002.patch, YARN-7129.003.patch, YARN-7129.004.patch, 
> YARN-7129.005.patch, YARN-7129.006.patch, YARN-7129.007.patch, 
> YARN-7129.008.patch, YARN-7129.009.patch, YARN-7129.010.patch, 
> YARN-7129.011.patch, YARN-7129.012.patch, YARN-7129.013.patch, 
> YARN-7129.014.patch, YARN-7129.015.patch, YARN-7129.016.patch, 
> YARN-7129.017.patch, YARN-7129.018.patch, YARN-7129.019.patch, 
> YARN-7129.020.patch, YARN-7129.021.patch, YARN-7129.022.patch, 
> YARN-7129.023.patch, YARN-7129.024.patch, YARN-7129.025.patch, 
> YARN-7129.026.patch, YARN-7129.027.patch, YARN-7129.028.patch, 
> YARN-7129.029.patch, YARN-7129.030.patch, YARN-7129.031.patch, 
> YARN-7129.032.patch
>
>
> YARN native services provides a web services API to improve the usability of 
> application deployment on Hadoop using a collection of Docker images.  It would 
> be nice to have an application catalog system which provides an editorial and 
> search interface for YARN applications.  This would improve the usability of 
> YARN for managing the life cycle of applications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7129) Application Catalog for YARN applications

2019-03-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798487#comment-16798487
 ] 

Hadoop QA commented on YARN-7129:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
36s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 50s{color} | {color:orange} root: The patch generated 3 new + 4 unchanged - 
0 fixed = 7 total (was 4) {color} |
| {color:green}+1{color} | {color:green} hadolint {color} | {color:green}  0m  
1s{color} | {color:green} There were no new hadolint issues. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:orange}-0{color} | {color:orange} shelldocs {color} | {color:orange}  
0m 13s{color} | {color:orange} The patch generated 136 new + 104 unchanged - 0 
fixed = 240 total (was 104) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
13s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-docker
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
20s{color} | {color:green} the patch passed {color} |
|| || || || 

[jira] [Commented] (YARN-9268) General improvements in FpgaDevice

2019-03-21 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798485#comment-16798485
 ] 

Peter Bacsko commented on YARN-9268:


Test failure seems to be unrelated.

> General improvements in FpgaDevice
> --
>
> Key: YARN-9268
> URL: https://issues.apache.org/jira/browse/YARN-9268
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9268-001.patch, YARN-9268-002.patch, 
> YARN-9268-003.patch, YARN-9268-004.patch, YARN-9268-005.patch
>
>
> Need to fix the following in the class {{FpgaDevice}}:
>  * It implements {{Comparable}}, but returns 0 in every case. There is no 
> natural ordering among FPGA devices; perhaps "acl0" comes before "acl1", but 
> this seems too forced and unnecessary. We think this class should not 
> implement {{Comparable}} at all, at least not like that.
>  * Stores unnecessary fields: devName, busNum, temperature, power usage. For 
> one, these are never needed in the code. Secondly, temperature and power usage 
> change constantly. It's pointless to store these in this POJO.
>  * {{serialVersionUID}} is 1L - let's generate a proper number for this.
>  * Use {{int}} instead of {{Integer}} - don't allow nulls. If major/minor 
> uniquely identify the card, then let's demand them in the constructor and 
> not store Integers that can be null.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9268) General improvements in FpgaDevice

2019-03-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798451#comment-16798451
 ] 

Hadoop QA commented on YARN-9268:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 0 new + 42 unchanged - 7 fixed = 42 total (was 49) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 44s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9268 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12963312/YARN-9268-005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a4d4136fe7d1 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 548997d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/23780/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23780/testReport/ |
| Max. process+thread count | 332 (vs. ulimit of 1) |
| modules 

[jira] [Comment Edited] (YARN-9272) Backport YARN-7738 for refreshing max allocation for multiple resource types

2019-03-21 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798423#comment-16798423
 ] 

Jonathan Hung edited comment on YARN-9272 at 3/21/19 9:09 PM:
--

Attached branch-3.0 version. A few diffs from YARN-7738:
 * Omit the vcores > Integer.MAX_VALUE check in Resource class (this is already 
handled by castToIntSafely in branch-3.0)
 * Add an extra constructor in ResourceInformation class (this was added as 
part of YARN-7254)
 * Add the {{loadNewConfiguration}} method in AdminService, where the 
resource-types.xml file is added to the conf; it is called when refreshing the 
scheduler. (This method was added as part of YARN-6124.)
 * Add TestCapacitySchedulerWithMultiResourceTypes test class (this was added 
as part of YARN-7237). 

The branch-3.0 patch also applies cleanly on YARN-8200.


was (Author: jhung):
Attached branch-3.0 version. A few diffs from YARN-7738:
 * Omit the vcores > Integer.MAX_VALUE check in Resource class (this is already 
handled by castToIntSafely in branch-3.0)
 * Add an extra constructor in ResourceInformation class (this was added as 
part of YARN-7254)
 * Add the {{loadNewConfiguration}} method in AdminService, where the 
resource-types.xml file is added to the conf; it is called when refreshing the 
scheduler. (This method was added as part of YARN-6124.)
 * Add TestCapacitySchedulerWithMultiResourceTypes test class (this was added 
as part of YARN-7237). 

> Backport YARN-7738 for refreshing max allocation for multiple resource types
> 
>
> Key: YARN-9272
> URL: https://issues.apache.org/jira/browse/YARN-9272
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-9272-YARN-8200.branch3.001.patch
>
>
> Need to port to YARN-8200.branch3 (for branch-3.0) and YARN-8200 (for 
> branch-2)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7848) Force removal of docker containers that do not get removed on first try

2019-03-21 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798427#comment-16798427
 ] 

Eric Yang edited comment on YARN-7848 at 3/21/19 9:07 PM:
--

[~ebadger] {quote}Additionally, how would this work with debug-delay? If I want 
my image to stick around for a while (or indefinitely) so that I can debug it, 
how will that co-exist with this periodic pruning?{quote}

I don't have a good answer for a system admin who wants an image dump to stick 
around forever.  The clean-up thread is optional and configurable.  Its 
scheduling can be based on debug-delay to ensure images are kept for the 
debug-delay window.  It will only delete containers stuck in the Created/Exited 
states after a maximum of 2x the debug-delay window has passed.






was (Author: eyang):
[~ebadger] {quote}Additionally, how would this work with debug-delay? If I want 
my image to stick around for awhile (or indefinitely) so that I can debug them, 
how will that co-exist with this periodic pruning?{quote}

I don't have good answers for system admin allowing image dump to stick around 
forever.  The clean up thread is optional and configurable.  The scheduling can 
be based on debug-delay to ensure image is being kept for debug delay window.  
This will only delete containers stuck in Created/Exited states after passing 
debug-delay window.





> Force removal of docker containers that do not get removed on first try
> ---
>
> Key: YARN-7848
> URL: https://issues.apache.org/jira/browse/YARN-7848
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Zhaohui Xin
>Priority: Major
>  Labels: Docker
>
> After the addition of YARN-5366, containers will get removed after a certain 
> debug delay. However, this is a one-time effort. If the removal fails for 
> whatever reason, the container will persist. We need to add a mechanism for a 
> forced removal of those containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7848) Force removal of docker containers that do not get removed on first try

2019-03-21 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798427#comment-16798427
 ] 

Eric Yang commented on YARN-7848:
-

[~ebadger] {quote}Additionally, how would this work with debug-delay? If I want 
my image to stick around for a while (or indefinitely) so that I can debug it, 
how will that co-exist with this periodic pruning?{quote}

I don't have a good answer for a system admin who wants an image dump to stick 
around forever.  The clean-up thread is optional and configurable.  Its 
scheduling can be based on debug-delay to ensure images are kept for the 
debug-delay window.  It will only delete containers stuck in the Created/Exited 
states after the debug-delay window has passed.
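
To make the scheduling idea concrete, here is a minimal sketch of such an 
optional clean-up thread. The class name is a placeholder and the raw docker 
CLI calls stand in for the node manager's container-executor plumbing, so 
treat this purely as an illustration, not as YARN code:

{code}
import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class StaleContainerReaper {
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  /** Schedules periodic forced removal of created/exited containers. */
  public void start(long debugDelaySeconds) {
    // Run at the debug-delay interval so a container is only removed after
    // it has been available for debugging for at least one full window.
    scheduler.scheduleWithFixedDelay(() -> {
      try {
        // List containers stuck in the created/exited states. A real
        // implementation would parse the IDs, check their age against the
        // debug-delay window, and then force-remove each one.
        new ProcessBuilder("docker", "ps", "-aq",
            "--filter", "status=created", "--filter", "status=exited")
            .inheritIO().start().waitFor();
      } catch (IOException e) {
        // Log and retry on the next tick.
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }, debugDelaySeconds, debugDelaySeconds, TimeUnit.SECONDS);
  }
}
{code}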





> Force removal of docker containers that do not get removed on first try
> ---
>
> Key: YARN-7848
> URL: https://issues.apache.org/jira/browse/YARN-7848
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Zhaohui Xin
>Priority: Major
>  Labels: Docker
>
> After the addition of YARN-5366, containers will get removed after a certain 
> debug delay. However, this is a one-time effort. If the removal fails for 
> whatever reason, the container will persist. We need to add a mechanism for a 
> forced removal of those containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9272) Backport YARN-7738 for refreshing max allocation for multiple resource types

2019-03-21 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798423#comment-16798423
 ] 

Jonathan Hung commented on YARN-9272:
-

Attached branch-3.0 version. A few diffs from YARN-7738:
 * Omit the vcores > Integer.MAX_VALUE check in Resource class (this is already 
handled by castToIntSafely in branch-3.0)
 * Add an extra constructor in ResourceInformation class (this was added as 
part of YARN-7254)
 * Add the {{loadNewConfiguration}} method in AdminService, which adds the 
resource-types.xml file to the conf and is called when refreshing the 
scheduler; a rough sketch follows this list. (This method was added as part of 
YARN-6124)
 * Add TestCapacitySchedulerWithMultiResourceTypes test class (this was added 
as part of YARN-7237). 
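
For illustration, a minimal sketch of the shape of that refresh path. The 
method name comes from the comment above, but the body and signature are 
assumptions made for this sketch, not the actual AdminService code:

{code}
import org.apache.hadoop.conf.Configuration;

public class AdminServiceSketch {
  // Illustrative only: rough shape of loadNewConfiguration as described
  // above; the real implementation lives in AdminService.
  private Configuration loadNewConfiguration(Configuration base) {
    // Start from the existing configuration so unrelated settings survive.
    Configuration conf = new Configuration(base);
    // Pick up resource-types.xml so newly defined resource types (and their
    // maximum allocations) are visible when the scheduler is refreshed.
    conf.addResource("resource-types.xml");
    return conf;
  }
}
{code}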

> Backport YARN-7738 for refreshing max allocation for multiple resource types
> 
>
> Key: YARN-9272
> URL: https://issues.apache.org/jira/browse/YARN-9272
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-9272-YARN-8200.branch3.001.patch
>
>
> Need to port to YARN-8200.branch3 (for branch-3.0) and YARN-8200 (for 
> branch-2)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9272) Backport YARN-7738 for refreshing max allocation for multiple resource types

2019-03-21 Thread Jonathan Hung (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-9272:

Attachment: YARN-9272-YARN-8200.branch3.001.patch

> Backport YARN-7738 for refreshing max allocation for multiple resource types
> 
>
> Key: YARN-9272
> URL: https://issues.apache.org/jira/browse/YARN-9272
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-9272-YARN-8200.branch3.001.patch
>
>
> Need to port to YARN-8200.branch3 (for branch-3.0) and YARN-8200 (for 
> branch-2)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5670) Add support for Docker image clean up

2019-03-21 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798409#comment-16798409
 ] 

Eric Yang commented on YARN-5670:
-

[~ebadger] How do we solve the corner cases in option 1?  I don't see a path 
forward that can separate admin images from node-manager-pulled images in the 
existing system.

> Add support for Docker image clean up
> -
>
> Key: YARN-5670
> URL: https://issues.apache.org/jira/browse/YARN-5670
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Eric Yang
>Priority: Major
>  Labels: Docker
> Attachments: Localization Support For Docker Images_002.pdf
>
>
> Regarding Docker image localization, we also need a way to clean up 
> old/stale Docker images to save storage space. We may extend the deletion 
> service to utilize "docker rmi" to do this.
> This is related to YARN-3854 and may depend on its implementation. Please 
> refer to YARN-3854 for Docker image localization details.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9268) General improvements in FpgaDevice

2019-03-21 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798394#comment-16798394
 ] 

Peter Bacsko commented on YARN-9268:


Test failures + checkstyle are fixed in patch v5.

> General improvements in FpgaDevice
> --
>
> Key: YARN-9268
> URL: https://issues.apache.org/jira/browse/YARN-9268
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9268-001.patch, YARN-9268-002.patch, 
> YARN-9268-003.patch, YARN-9268-004.patch, YARN-9268-005.patch
>
>
> Need to fix the following in the class {{FpgaDevice}}:
>  * It implements {{Comparable}}, but returns 0 in every case. There is no 
> natural ordering among FPGA devices; perhaps "acl0" comes before "acl1", but 
> this seems too forced and unnecessary. We think this class should not 
> implement {{Comparable}} at all, at least not like that.
>  * Stores unnecessary fields: devName, busNum, temperature, power usage. For 
> one, these are never needed in the code. Secondly, temperature and power 
> usage change constantly. It's pointless to store these in this POJO.
>  * {{serialVersionUID}} is 1L - let's generate a number for this
>  * Use {{int}} instead of {{Integer}} - don't allow nulls. If major/minor 
> uniquely identify the card, then let's demand them in the constructor and 
> don't store Integers that can be null.
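
As a concrete illustration of the bullet points above, a minimal sketch of 
what the reworked POJO could look like. The exact field set and the 
serialVersionUID below are example assumptions, not the contents of the 
actual patch:

{code}
import java.io.Serializable;
import java.util.Objects;

public final class FpgaDevice implements Serializable {
  // A generated value rather than 1L, per the review comment; this
  // particular number is just an example.
  private static final long serialVersionUID = -7270974274960207324L;

  private final String type;
  private final int major;  // primitive int: null is not a valid id
  private final int minor;
  private final String aliasDevName;

  public FpgaDevice(String type, int major, int minor, String aliasDevName) {
    this.type = type;
    this.major = major;
    this.minor = minor;
    this.aliasDevName = aliasDevName;
  }

  public String getType() { return type; }
  public int getMajor() { return major; }
  public int getMinor() { return minor; }
  public String getAliasDevName() { return aliasDevName; }

  // major/minor uniquely identify the card, so equals() and hashCode()
  // are both based on exactly these two fields; no Comparable at all.
  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (!(o instanceof FpgaDevice)) {
      return false;
    }
    FpgaDevice other = (FpgaDevice) o;
    return major == other.major && minor == other.minor;
  }

  @Override
  public int hashCode() {
    return Objects.hash(major, minor);
  }
}
{code}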



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9268) General improvements in FpgaDevice

2019-03-21 Thread Peter Bacsko (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-9268:
---
Attachment: YARN-9268-005.patch

> General improvements in FpgaDevice
> --
>
> Key: YARN-9268
> URL: https://issues.apache.org/jira/browse/YARN-9268
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9268-001.patch, YARN-9268-002.patch, 
> YARN-9268-003.patch, YARN-9268-004.patch, YARN-9268-005.patch
>
>
> Need to fix the following in the class {{FpgaDevice}}:
>  * It implements {{Comparable}}, but returns 0 in every case. There is no 
> natural ordering among FPGA devices; perhaps "acl0" comes before "acl1", but 
> this seems too forced and unnecessary. We think this class should not 
> implement {{Comparable}} at all, at least not like that.
>  * Stores unnecessary fields: devName, busNum, temperature, power usage. For 
> one, these are never needed in the code. Secondly, temperature and power 
> usage change constantly. It's pointless to store these in this POJO.
>  * {{serialVersionUID}} is 1L - let's generate a number for this
>  * Use {{int}} instead of {{Integer}} - don't allow nulls. If major/minor 
> uniquely identify the card, then let's demand them in the constructor and 
> don't store Integers that can be null.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5670) Add support for Docker image clean up

2019-03-21 Thread Eric Badger (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798391#comment-16798391
 ] 

Eric Badger commented on YARN-5670:
---

As I said in [this 
comment|https://issues.apache.org/jira/browse/YARN-7848?focusedCommentId=16798387=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16798387]
 on YARN-7848, I don't believe we should be managing docker on the node with 
the assumption that we can completely control it. I am not comfortable with the 
nodemanager removing images that it did not put there. In option 2, we would be 
deleting any image regardless of its origin. 

> Add support for Docker image clean up
> -
>
> Key: YARN-5670
> URL: https://issues.apache.org/jira/browse/YARN-5670
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Eric Yang
>Priority: Major
>  Labels: Docker
> Attachments: Localization Support For Docker Images_002.pdf
>
>
> Regarding Docker image localization, we also need a way to clean up 
> old/stale Docker images to save storage space. We may extend the deletion 
> service to utilize "docker rmi" to do this.
> This is related to YARN-3854 and may depend on its implementation. Please 
> refer to YARN-3854 for Docker image localization details.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7848) Force removal of docker containers that do not get removed on first try

2019-03-21 Thread Eric Badger (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798387#comment-16798387
 ] 

Eric Badger commented on YARN-7848:
---

I don't like this idea. This is the Nodemanager completely taking over docker. 
The NM should use docker, but I don't think it should be assumed that it is the 
only thing that can use docker. To me, this seems like something that should be 
handled at an ops level in a cron job, if they want to make sure all images are 
gone. I'm ok with having a more generalized case of pruning images 
periodically, but I'm not comfortable with the NM pruning containers that it 
didn't start.

Additionally, how would this work with debug-delay? If I want my image to stick 
around for a while (or indefinitely) so that I can debug it, how will that 
co-exist with this periodic pruning? 

cc [~shaneku...@gmail.com]

> Force removal of docker containers that do not get removed on first try
> ---
>
> Key: YARN-7848
> URL: https://issues.apache.org/jira/browse/YARN-7848
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Zhaohui Xin
>Priority: Major
>  Labels: Docker
>
> After the addition of YARN-5366, containers will get removed after a certain 
> debug delay. However, this is a one-time effort. If the removal fails for 
> whatever reason, the container will persist. We need to add a mechanism for a 
> forced removal of those containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9268) General improvements in FpgaDevice

2019-03-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798383#comment-16798383
 ] 

Hadoop QA commented on YARN-9268:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 43 unchanged - 7 fixed = 44 total (was 50) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 31s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.linux.resources.fpga.TestFpgaResourceHandler
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9268 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962959/YARN-9268-004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1972a981cc7f 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a99eb80 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/23779/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 

[jira] [Updated] (YARN-9402) Opportunistic containers should not be scheduled on Decommissioning nodes.

2019-03-21 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-9402:
---
Fix Version/s: 3.3.0

> Opportunistic containers should not be scheduled on Decommissioning nodes.
> --
>
> Key: YARN-9402
> URL: https://issues.apache.org/jira/browse/YARN-9402
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9402.001.patch
>
>
> Right now, opportunistic containers can get scheduled on Decommissioning 
> nodes that we are draining, which can lead to those containers being killed 
> when the node is decommissioned. As part of this jira, we will skip 
> allocation of opportunistic containers on Decommissioning nodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9402) Opportunistic containers should not be scheduled on Decommissioning nodes.

2019-03-21 Thread Abhishek Modi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798355#comment-16798355
 ] 

Abhishek Modi commented on YARN-9402:
-

Thanks [~giovanni.fumarola] for review and committing it.

> Opportunistic containers should not be scheduled on Decommissioning nodes.
> --
>
> Key: YARN-9402
> URL: https://issues.apache.org/jira/browse/YARN-9402
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9402.001.patch
>
>
> Right now, opportunistic containers can get scheduled on Decommissioning 
> nodes that we are draining, which can lead to those containers being killed 
> when the node is decommissioned. As part of this jira, we will skip 
> allocation of opportunistic containers on Decommissioning nodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9292) Implement logic to keep docker image consistent in application that uses :latest tag

2019-03-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798356#comment-16798356
 ] 

Hadoop QA commented on YARN-9292:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
27s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 29 unchanged - 3 fixed = 29 total (was 32) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 
47s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
44s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}115m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9292 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12963300/YARN-9292.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 1c06d96bc04a 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9f1c017 |
| maven | 

[jira] [Updated] (YARN-9378) Create Image Localizer

2019-03-21 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-9378:

Description: 
Refer YARN-3854. 

Add Docker Image Localizer. The image localizer is part of 
{{ResourceLocalizationService}}. It serves the following purposes:

1. All image localization requests will be served by the image localizer.
2. The image localizer initially runs {{DockerImagesCommand}} to find all 
images on the local node.
3. For an image localization request, it executes {{DockerPullCommand}} if the 
image is not present on the local node.
4. It returns the status of image localization by periodically executing 
{{DockerImagesCommand}} on a particular image. 

{{LinuxContainerExecutor}} is for container operations. DockerImagesCommand is 
independent of any container. The image localizer acts as a service that will 
localize docker images and maintain an image cache. Other components can use 
it to query the images on the node.

  was:{{LinuxContainerExecutor}} is for container operations. 
DockerImagesCommand is independent of any container. The image localizer acts 
as a service that will localize docker images and maintain an image cache. 
Other components can use this to query about the images on the node.


> Create Image Localizer
> --
>
> Key: YARN-9378
> URL: https://issues.apache.org/jira/browse/YARN-9378
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-9378.001.patch
>
>
> Refer YARN-3854. 
> Add Docker Image Localizer. The image localizer is part of 
> {{ResourceLocalizationService}}. It serves the following purposes:
> 1. All image localization requests will be served by the image localizer.
> 2. The image localizer initially runs {{DockerImagesCommand}} to find all 
> images on the local node.
> 3. For an image localization request, it executes {{DockerPullCommand}} if 
> the image is not present on the local node.
> 4. It returns the status of image localization by periodically executing 
> {{DockerImagesCommand}} on a particular image. 
> {{LinuxContainerExecutor}} is for container operations. DockerImagesCommand 
> is independent of any container. The image localizer acts as a service that 
> will localize docker images and maintain an image cache. Other components can 
> use it to query the images on the node.
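
A minimal sketch of steps 2 and 3 above, using raw docker CLI calls in place 
of {{DockerImagesCommand}} and {{DockerPullCommand}}; this illustrates the 
flow only and is not the ResourceLocalizationService code:

{code}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.HashSet;
import java.util.Set;

public class ImageLocalizerSketch {
  /** Step 2: find all images already present on the local node. */
  static Set<String> listLocalImages()
      throws IOException, InterruptedException {
    Set<String> images = new HashSet<>();
    Process p = new ProcessBuilder("docker", "images",
        "--format", "{{.Repository}}:{{.Tag}}").start();
    try (BufferedReader r = new BufferedReader(
        new InputStreamReader(p.getInputStream(), StandardCharsets.UTF_8))) {
      String line;
      while ((line = r.readLine()) != null) {
        images.add(line);
      }
    }
    p.waitFor();
    return images;
  }

  /** Step 3: pull the image only if it is not already local. */
  static void localize(String image)
      throws IOException, InterruptedException {
    if (!listLocalImages().contains(image)) {
      new ProcessBuilder("docker", "pull", image)
          .inheritIO().start().waitFor();
    }
  }
}
{code}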



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9402) Opportunistic containers should not be scheduled on Decommissioning nodes.

2019-03-21 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798352#comment-16798352
 ] 

Hudson commented on YARN-9402:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16257 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16257/])
YARN-9402. Opportunistic containers should not be scheduled on (gifuma: rev 
548997d6c9c5a1b9734ee00d065ce48a189458e6)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/distributed/TestNodeQueueLoadMonitor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/distributed/NodeQueueLoadMonitor.java


> Opportunistic containers should not be scheduled on Decommissioning nodes.
> --
>
> Key: YARN-9402
> URL: https://issues.apache.org/jira/browse/YARN-9402
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9402.001.patch
>
>
> Right now, opportunistic containers can get scheduled on Decommissioning 
> nodes that we are draining, which can lead to those containers being killed 
> when the node is decommissioned. As part of this jira, we will skip 
> allocation of opportunistic containers on Decommissioning nodes.
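
For readers who don't want to open the patch, a minimal sketch of the kind of 
check this change adds; the types below are simplified stand-ins for the 
actual NodeQueueLoadMonitor structures edited by the commit:

{code}
import java.util.ArrayList;
import java.util.List;

public class DecommissioningNodeFilter {
  enum NodeState { RUNNING, DECOMMISSIONING, DECOMMISSIONED }

  static class ClusterNode {
    final String id;
    final NodeState state;
    ClusterNode(String id, NodeState state) {
      this.id = id;
      this.state = state;
    }
  }

  /**
   * Illustrative only: skip DECOMMISSIONING nodes when picking candidates
   * for opportunistic containers, since containers placed there would be
   * killed once the node finishes draining.
   */
  static List<ClusterNode> selectCandidates(List<ClusterNode> nodes) {
    List<ClusterNode> candidates = new ArrayList<>();
    for (ClusterNode node : nodes) {
      if (node.state != NodeState.DECOMMISSIONING) {
        candidates.add(node);
      }
    }
    return candidates;
  }
}
{code}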



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9402) Opportunistic containers should not be scheduled on Decommissioning nodes.

2019-03-21 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798350#comment-16798350
 ] 

Giovanni Matteo Fumarola commented on YARN-9402:


Thanks [~abmodi]. +1 
Committed to trunk.

> Opportunistic containers should not be scheduled on Decommissioning nodes.
> --
>
> Key: YARN-9402
> URL: https://issues.apache.org/jira/browse/YARN-9402
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9402.001.patch
>
>
> Right now, opportunistic containers can get scheduled on Decommissioning 
> nodes that we are draining, which can lead to those containers being killed 
> when the node is decommissioned. As part of this jira, we will skip 
> allocation of opportunistic containers on Decommissioning nodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-9249) Add support for docker image localization

2019-03-21 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh resolved YARN-9249.
-
Resolution: Duplicate

Duplicate of https://issues.apache.org/jira/browse/YARN-9378

> Add support for docker image localization
> -
>
> Key: YARN-9249
> URL: https://issues.apache.org/jira/browse/YARN-9249
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>  Labels: Docker
>
> Refer YARN-3854. 
> Add Docker Image Localizer. The image localizer is part of 
> {{ResourceLocalizationService}}. It serves the following purposes:
> 1. All image localization requests will be served by the image localizer.
> 2. The image localizer initially runs {{DockerImagesCommand}} to find all 
> images on the local node.
> 3. For an image localization request, it executes {{DockerPullCommand}} if 
> the image is not present on the local node.
> 4. It returns the status of image localization by periodically executing 
> {{DockerImagesCommand}} on a particular image. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9267) General improvements in FpgaResourceHandlerImpl

2019-03-21 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798323#comment-16798323
 ] 

Hudson commented on YARN-9267:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16256 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16256/])
YARN-9267. General improvements in FpgaResourceHandlerImpl. Contributed 
(devaraj: rev a99eb80659835107f4015c859b3319bf3a70c281)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/fpga/FpgaResourceAllocator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/fpga/FpgaResourceHandlerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/fpga/TestFpgaResourceHandler.java


> General improvements in FpgaResourceHandlerImpl
> ---
>
> Key: YARN-9267
> URL: https://issues.apache.org/jira/browse/YARN-9267
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9267-001.patch, YARN-9267-002.patch, 
> YARN-9267-003.patch, YARN-9267-004.patch, YARN-9267-005.patch, 
> YARN-9267-006.patch, YARN-9267-007.patch, YARN-9267-008.patch, 
> YARN-9267-009.patch, YARN-9267-010.patch
>
>
> Fix some problems in {{FpgaResourceHandlerImpl}}:
>  * {{preStart()}} does not reconfigure the card with the same IP, which we 
> see as a problem. If you recompile the FPGA application, you must rename the 
> aocx file because the card will not be reprogrammed. Suggestion: instead of 
> storing a Node<\->IPID mapping, store a Node<\->IPID hash (like the SHA-256 
> of the localized file).
>  * Switch to slf4j from Apache Commons Logging
>  * Remove some unused imports
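
A minimal sketch of the suggested hash-based check, assuming a hypothetical 
AocxHashTracker helper; the actual bookkeeping in {{FpgaResourceHandlerImpl}} 
differs in detail:

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HashMap;
import java.util.Map;

public class AocxHashTracker {
  // node -> SHA-256 of the aocx file last programmed on it. A plain
  // in-memory map is an assumption made for this sketch.
  private final Map<String, String> programmedHashes = new HashMap<>();

  /** Returns true if the card on this node needs (re)programming. */
  public boolean needsReprogramming(String node, Path aocxFile)
      throws IOException, NoSuchAlgorithmException {
    String hash = sha256Hex(aocxFile);
    // Record the new hash and reprogram whenever the content changed,
    // even if the IP id (and file name) stayed the same after a recompile.
    // put() returns the previous hash, or null on the first call.
    return !hash.equals(programmedHashes.put(node, hash));
  }

  private static String sha256Hex(Path file)
      throws IOException, NoSuchAlgorithmException {
    byte[] digest = MessageDigest.getInstance("SHA-256")
        .digest(Files.readAllBytes(file));
    StringBuilder sb = new StringBuilder();
    for (byte b : digest) {
      sb.append(String.format("%02x", b));
    }
    return sb.toString();
  }
}
{code}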



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9267) General improvements in FpgaResourceHandlerImpl

2019-03-21 Thread Devaraj K (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798312#comment-16798312
 ] 

Devaraj K commented on YARN-9267:
-

+1, the latest patch looks good to me; committing it shortly.

> General improvements in FpgaResourceHandlerImpl
> ---
>
> Key: YARN-9267
> URL: https://issues.apache.org/jira/browse/YARN-9267
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9267-001.patch, YARN-9267-002.patch, 
> YARN-9267-003.patch, YARN-9267-004.patch, YARN-9267-005.patch, 
> YARN-9267-006.patch, YARN-9267-007.patch, YARN-9267-008.patch, 
> YARN-9267-009.patch, YARN-9267-010.patch
>
>
> Fix some problems in {{FpgaResourceHandlerImpl}}:
>  * {{preStart()}} does not reconfigure the card with the same IP, which we 
> see as a problem. If you recompile the FPGA application, you must rename the 
> aocx file because the card will not be reprogrammed. Suggestion: instead of 
> storing a Node<\->IPID mapping, store a Node<\->IPID hash (like the SHA-256 
> of the localized file).
>  * Switch to slf4j from Apache Commons Logging
>  * Remove some unused imports



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9267) General improvements in FpgaResourceHandlerImpl

2019-03-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798298#comment-16798298
 ] 

Hadoop QA commented on YARN-9267:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 0 new + 46 unchanged - 6 fixed = 46 total (was 52) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
21s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9267 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12963297/YARN-9267-010.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 56e620bb54ca 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9f1c017 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23776/testReport/ |
| Max. process+thread count | 329 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/23776/console |
| Powered by | Apache 

[jira] [Updated] (YARN-7129) Application Catalog for YARN applications

2019-03-21 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7129:

Attachment: YARN-7129.032.patch

> Application Catalog for YARN applications
> -
>
> Key: YARN-7129
> URL: https://issues.apache.org/jira/browse/YARN-7129
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: applications
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN Appstore.pdf, YARN-7129.001.patch, 
> YARN-7129.002.patch, YARN-7129.003.patch, YARN-7129.004.patch, 
> YARN-7129.005.patch, YARN-7129.006.patch, YARN-7129.007.patch, 
> YARN-7129.008.patch, YARN-7129.009.patch, YARN-7129.010.patch, 
> YARN-7129.011.patch, YARN-7129.012.patch, YARN-7129.013.patch, 
> YARN-7129.014.patch, YARN-7129.015.patch, YARN-7129.016.patch, 
> YARN-7129.017.patch, YARN-7129.018.patch, YARN-7129.019.patch, 
> YARN-7129.020.patch, YARN-7129.021.patch, YARN-7129.022.patch, 
> YARN-7129.023.patch, YARN-7129.024.patch, YARN-7129.025.patch, 
> YARN-7129.026.patch, YARN-7129.027.patch, YARN-7129.028.patch, 
> YARN-7129.029.patch, YARN-7129.030.patch, YARN-7129.031.patch, 
> YARN-7129.032.patch
>
>
> YARN native services provide a web services API to improve the usability of 
> application deployment on Hadoop using a collection of docker images.  It 
> would be nice to have an application catalog system which provides an 
> editorial and search interface for YARN applications.  This improves the 
> usability of YARN for managing the life cycle of applications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9292) Implement logic to keep docker image consistent in application that uses :latest tag

2019-03-21 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-9292:

Attachment: YARN-9292.004.patch

> Implement logic to keep docker image consistent in application that uses 
> :latest tag
> 
>
> Key: YARN-9292
> URL: https://issues.apache.org/jira/browse/YARN-9292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9292.001.patch, YARN-9292.002.patch, 
> YARN-9292.003.patch, YARN-9292.004.patch
>
>
> A Docker image with the latest tag can run in a YARN cluster without any 
> validation in node managers. If an image with the latest tag is changed 
> during container launch, it might produce inconsistent results between 
> nodes. This surfaced toward the end of development for YARN-9184, which 
> keeps the docker image consistent within a job. One of the ideas to keep the 
> :latest tag consistent for a job is to use the docker image command to 
> figure out the image id and propagate that image id to the rest of the 
> container requests. There are some challenges to overcome:
>  # The latest tag does not exist on the node where the first container 
> starts. The first container will need to download the latest image and find 
> the image ID. This can introduce lag time for other containers to start.
>  # If the image id is used to start other containers, container-executor may 
> have problems checking if the image is coming from a trusted source. Both 
> the image name and ID must be supplied through the .cmd file to 
> container-executor. However, an attacker can supply an incorrect image id 
> and defeat container-executor security checks.
> If we can overcome those challenges, it may be possible to keep the docker 
> image consistent within one application.
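
The first idea above hinges on resolving a mutable tag to an immutable image 
id. A minimal sketch of that single step with a plain CLI call; the actual 
patch goes through YARN's docker command plumbing and container-executor 
checks rather than invoking the CLI directly:

{code}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class ImageIdResolver {
  /**
   * Resolve an image reference (e.g. "centos:latest") to its immutable
   * image id so the same bits can be requested for every container.
   */
  public static String resolveImageId(String image)
      throws IOException, InterruptedException {
    Process p = new ProcessBuilder(
        "docker", "image", "inspect", "--format", "{{.Id}}", image).start();
    try (BufferedReader reader = new BufferedReader(
        new InputStreamReader(p.getInputStream(), StandardCharsets.UTF_8))) {
      String id = reader.readLine();  // e.g. "sha256:123..."
      if (p.waitFor() != 0 || id == null) {
        throw new IOException("Could not resolve image: " + image);
      }
      return id;
    }
  }
}
{code}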



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5670) Add support for Docker image clean up

2019-03-21 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798241#comment-16798241
 ] 

Eric Yang commented on YARN-5670:
-

{quote}The cache can be backed by NMStateStore so even when the NM comes back 
or is restarted, it will know what images it localized.{quote}

The problem is not whether the LRU cache is persisted by NMStateStore.  The 
problem is that docker image tags are moving targets.  Suppose the node manager 
tracks images by name and tag combo: centos:latest has digest id 123 and was 
used yesterday, then the image is updated to digest id 234 today.  When the NM 
deletes centos:latest tomorrow because it has not been in use for 24 hours, the 
image with digest id 123 will not be deleted, because it is no longer 
associated with the name it had two days ago.

Now take the other view, where the NM tracks images by digest id.  A system 
admin tags centos:latest (digest id 123) as private_image:my_version, hoping 
that no one will delete his image.  A job starts with centos:latest and 
resolves to digest id 123.  centos:latest is updated to digest id 234 by 
another job a few hours later.  24 hours after that, private_image:my_version 
is deleted by the digest-id-123 clean-up job, because private_image:my_version 
is the only tag still referencing digest id 123.

Hadoop 3.1.x and 3.2.x don't have a clean-up ability, so dangling images are 
already accumulating in production systems.  There is no way to tell images 
pulled by the system apart from images placed by an admin, which makes option 1 
less attractive to implement: it cannot reach the desired clean state without 
the undesired side effects above.

Option 2 is safer to implement in Hadoop because it gives system admins an 
option to turn on.  They can prepare their internal infrastructure so that the 
clean-up is less of a one-off and reaches the same definition of a clean state 
that [Docker swarm uses with system prune|https://github.com/moby/moby/issues/31254].
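
To observe the drift described above on a live node, the name/tag-to-digest 
mapping can be dumped directly. A small illustrative helper, not part of any 
patch here, assuming the docker CLI is on the PATH:

{code}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class ImageDigestDump {
  public static void main(String[] args)
      throws IOException, InterruptedException {
    // Prints one "name:tag digest" line per image, making it visible
    // when a tag like centos:latest has moved to a new digest.
    Process p = new ProcessBuilder("docker", "images", "--digests",
        "--format", "{{.Repository}}:{{.Tag}} {{.Digest}}").start();
    try (BufferedReader reader = new BufferedReader(
        new InputStreamReader(p.getInputStream(), StandardCharsets.UTF_8))) {
      String line;
      while ((line = reader.readLine()) != null) {
        System.out.println(line);
      }
    }
    p.waitFor();
  }
}
{code}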

> Add support for Docker image clean up
> -
>
> Key: YARN-5670
> URL: https://issues.apache.org/jira/browse/YARN-5670
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Eric Yang
>Priority: Major
>  Labels: Docker
> Attachments: Localization Support For Docker Images_002.pdf
>
>
> Regarding Docker image localization, we also need a way to clean up 
> old/stale Docker images to save storage space. We may extend the deletion 
> service to utilize "docker rmi" to do this.
> This is related to YARN-3854 and may depend on its implementation. Please 
> refer to YARN-3854 for Docker image localization details.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9267) General improvements in FpgaResourceHandlerImpl

2019-03-21 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798235#comment-16798235
 ] 

Peter Bacsko commented on YARN-9267:


Ah sure, didn't see that. Will fix it in a minute.

> General improvements in FpgaResourceHandlerImpl
> ---
>
> Key: YARN-9267
> URL: https://issues.apache.org/jira/browse/YARN-9267
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9267-001.patch, YARN-9267-002.patch, 
> YARN-9267-003.patch, YARN-9267-004.patch, YARN-9267-005.patch, 
> YARN-9267-006.patch, YARN-9267-007.patch, YARN-9267-008.patch, 
> YARN-9267-009.patch
>
>
> Fix some problems in {{FpgaResourceHandlerImpl}}:
>  * {{preStart()}} does not reconfigure the card with the same IP, which we 
> see as a problem. If you recompile the FPGA application, you must rename the 
> aocx file because the card will not be reprogrammed. Suggestion: instead of 
> storing a Node<\->IPID mapping, store a Node<\->IPID hash (like the SHA-256 
> of the localized file).
>  * Switch to slf4j from Apache Commons Logging
>  * Remove some unused imports



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9267) General improvements in FpgaResourceHandlerImpl

2019-03-21 Thread Peter Bacsko (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-9267:
---
Attachment: YARN-9267-010.patch

> General improvements in FpgaResourceHandlerImpl
> ---
>
> Key: YARN-9267
> URL: https://issues.apache.org/jira/browse/YARN-9267
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9267-001.patch, YARN-9267-002.patch, 
> YARN-9267-003.patch, YARN-9267-004.patch, YARN-9267-005.patch, 
> YARN-9267-006.patch, YARN-9267-007.patch, YARN-9267-008.patch, 
> YARN-9267-009.patch, YARN-9267-010.patch
>
>
> Fix some problems in {{FpgaResourceHandlerImpl}}:
>  * {{preStart()}} does not reconfigure the card with the same IP, which we 
> see as a problem. If you recompile the FPGA application, you must rename the 
> aocx file because the card will not be reprogrammed. Suggestion: instead of 
> storing a Node<\->IPID mapping, store a Node<\->IPID hash (like the SHA-256 
> of the localized file).
>  * Switch to slf4j from Apache Commons Logging
>  * Remove some unused imports



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9267) General improvements in FpgaResourceHandlerImpl

2019-03-21 Thread Devaraj K (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798229#comment-16798229
 ] 

Devaraj K commented on YARN-9267:
-

Thanks [~pbacsko] for updating the patch. Can you also take care of this 
checkstyle issue?
{code}
-0  checkstyle  0m 23s  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 46 unchanged - 6 fixed = 47 total (was 52)
{code}

{code}
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/fpga/TestFpgaResourceHandler.java:322:
  throws ResourceHandlerException, PrivilegedOperationException, 
IOException {: Line is longer than 80 characters (found 82). [LineLength]
{code}
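
(For reference, the fix is a simple line wrap. The method name and exception 
types below are placeholders that only illustrate the 80-column formatting:)

{code}
public class LineWrapExample {
  static class ResourceHandlerException extends Exception { }
  static class PrivilegedOperationException extends Exception { }

  // Break the throws clause so that no line exceeds 80 characters.
  private void testWithWrappedThrows()
      throws ResourceHandlerException, PrivilegedOperationException,
      java.io.IOException {
    // test body elided
  }
}
{code}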

> General improvements in FpgaResourceHandlerImpl
> ---
>
> Key: YARN-9267
> URL: https://issues.apache.org/jira/browse/YARN-9267
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9267-001.patch, YARN-9267-002.patch, 
> YARN-9267-003.patch, YARN-9267-004.patch, YARN-9267-005.patch, 
> YARN-9267-006.patch, YARN-9267-007.patch, YARN-9267-008.patch, 
> YARN-9267-009.patch
>
>
> Fix some problems in {{FpgaResourceHandlerImpl}}:
>  * {{preStart()}} does not reconfigure the card with the same IP, which we 
> see as a problem. If you recompile the FPGA application, you must rename the 
> aocx file because the card will not be reprogrammed. Suggestion: instead of 
> storing a Node<\->IPID mapping, store a Node<\->IPID hash (like the SHA-256 
> of the localized file).
>  * Switch to slf4j from Apache Commons Logging
>  * Remove some unused imports



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9267) General improvements in FpgaResourceHandlerImpl

2019-03-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798016#comment-16798016
 ] 

Hadoop QA commented on YARN-9267:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 46 unchanged - 6 fixed = 47 total (was 52) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 
43s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9267 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12963251/YARN-9267-009.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 762f75a87a2a 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 60cdd4c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/23775/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23775/testReport/ |
| Max. process+thread count | 305 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 

[jira] [Updated] (YARN-9267) General improvements in FpgaResourceHandlerImpl

2019-03-21 Thread Peter Bacsko (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-9267:
---
Attachment: YARN-9267-009.patch

> General improvements in FpgaResourceHandlerImpl
> ---
>
> Key: YARN-9267
> URL: https://issues.apache.org/jira/browse/YARN-9267
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9267-001.patch, YARN-9267-002.patch, 
> YARN-9267-003.patch, YARN-9267-004.patch, YARN-9267-005.patch, 
> YARN-9267-006.patch, YARN-9267-007.patch, YARN-9267-008.patch, 
> YARN-9267-009.patch
>
>
> Fix some problems in {{FpgaResourceHandlerImpl}}:
>  * {{preStart()}} does not reconfigure the card when the IP is the same - we 
> see this as a problem. If you recompile the FPGA application, you must 
> rename the aocx file, otherwise the card will not be reprogrammed. 
> Suggestion: instead of storing the Node<->IPID mapping, store a Node<->IPID 
> hash (like the SHA-256 of the localized file).
>  * Switch to slf4j from Apache Commons Logging
>  * Some unused imports






[jira] [Commented] (YARN-9401) Fix `yarn version` printing the same version info as `hadoop version`

2019-03-21 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797946#comment-16797946
 ] 

Wanqiang Ji commented on YARN-9401:
---

Hi, [~eyang]

Can you help to review this?

> Fix `yarn version` printing the same version info as `hadoop version`
> -
>
> Key: YARN-9401
> URL: https://issues.apache.org/jira/browse/YARN-9401
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Minor
> Attachments: YARN-9401.001.patch, YARN-9401.002.patch
>
>
> It's caused by the `yarn` shell mistakenly using 
> `org.apache.hadoop.util.VersionInfo` instead of 
> `org.apache.hadoop.yarn.util.YarnVersionInfo` as the `HADOOP_CLASSNAME`.






[jira] [Commented] (YARN-8967) Change FairScheduler to use PlacementRule interface

2019-03-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797926#comment-16797926
 ] 

Hadoop QA commented on YARN-8967:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 17 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 16 unchanged - 1 fixed = 16 total (was 17) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 339 unchanged - 67 fixed = 341 total (was 406) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m  5s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
57s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-8967 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12963223/YARN-8967.010.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d3f891b00030 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 60cdd4c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Commented] (YARN-8967) Change FairScheduler to use PlacementRule interface

2019-03-21 Thread Wilfred Spiegelenburg (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797873#comment-16797873
 ] 

Wilfred Spiegelenburg commented on YARN-8967:
-

1) I missed that one too, fixed now
3) The two for loops run over different lists. Take this example:
{code}
<queuePlacementPolicy>
  <rule name="specified"/>
  <rule name="nestedUserQueue">
    <rule name="primaryGroup"/>
  </rule>
</queuePlacementPolicy>
{code}
The first for loop runs over the top-level list of nodes (entries: specified 
and nestedUserQueue). The second loop runs over the children of each entry in 
that list. You cannot see the children of a top-level node until you call 
{{getChildren()}} on it, and for that you need to cast the Node to an Element, 
so I cannot collapse the two loops into one. The list also does not have an 
iterator, so it cannot be changed to a for-each construct. The XML files also 
return child Nodes that are not of the Element type even for a correct 
configuration, which means we have to filter while traversing the list (see 
the sketch after the numbered points below).
4) I added the same test case. We now correctly handle that case, as well as 
the case of having no parent rule for the nestedUserQueue.
I have great difficulty removing the create and init for the first rule: at 
the point where I find the first rule, I do not yet know whether I am going 
to find a second one. I would need to wait until after the loop to 
create/init, which makes the code even more complex.
5) I had that to start with and changed it because the IDE kept complaining. 
I am not sure why, but it now works without complaints and without the getter 
methods; I might have had slightly different access modifiers. It looks far 
more like a wrapper class now.
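
As mentioned under point 3, here is a minimal sketch of the nested traversal 
against the standard {{org.w3c.dom}} API ({{PlacementRuleTraversal}} and the 
method name are illustrative placeholders, not the actual patch code):

{code}
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

class PlacementRuleTraversal {
  static void traverse(Element queuePlacementPolicy) {
    // First loop: the top-level rule nodes. NodeList has no Iterator,
    // hence the indexed loop instead of a for-each construct.
    NodeList topLevel = queuePlacementPolicy.getChildNodes();
    for (int i = 0; i < topLevel.getLength(); i++) {
      Node node = topLevel.item(i);
      if (!(node instanceof Element)) {
        continue; // whitespace/text nodes appear even in a correct config
      }
      Element rule = (Element) node;
      // Second loop: the children of this rule, e.g. the parent rule
      // nested under nestedUserQueue.
      NodeList children = rule.getChildNodes();
      for (int j = 0; j < children.getLength(); j++) {
        Node child = children.item(j);
        if (child instanceof Element) {
          String childName = ((Element) child).getAttribute("name");
          System.out.println("nested rule: " + childName);
        }
      }
    }
  }
}
{code}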

I also found that we did not correctly test and handle the cases which have 
entries that are not _rules_. I updated the test cases and found that we had 
a possible NPE due to the way we process the policy. These cases are covered 
in {{testBrokenRules()}} and the updated tests in 
{{testNestedUserQueueParsingErrors()}}.

> Change FairScheduler to use PlacementRule interface
> ---
>
> Key: YARN-8967
> URL: https://issues.apache.org/jira/browse/YARN-8967
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler, fairscheduler
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-8967.001.patch, YARN-8967.002.patch, 
> YARN-8967.003.patch, YARN-8967.004.patch, YARN-8967.005.patch, 
> YARN-8967.006.patch, YARN-8967.007.patch, YARN-8967.008.patch, 
> YARN-8967.009.patch, YARN-8967.010.patch
>
>
> The PlacementRule interface was introduced to be used by all schedulers as 
> per YARN-3635. The CapacityScheduler is using it but the FairScheduler is not 
> and is using its own rule definition.
> YARN-8948 cleans up the implementation and removes the CS references which 
> should allow this change to go through.
> This would be the first step in using one placement rule engine for both 
> schedulers.






[jira] [Updated] (YARN-8967) Change FairScheduler to use PlacementRule interface

2019-03-21 Thread Wilfred Spiegelenburg (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wilfred Spiegelenburg updated YARN-8967:

Attachment: YARN-8967.010.patch

> Change FairScheduler to use PlacementRule interface
> ---
>
> Key: YARN-8967
> URL: https://issues.apache.org/jira/browse/YARN-8967
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler, fairscheduler
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-8967.001.patch, YARN-8967.002.patch, 
> YARN-8967.003.patch, YARN-8967.004.patch, YARN-8967.005.patch, 
> YARN-8967.006.patch, YARN-8967.007.patch, YARN-8967.008.patch, 
> YARN-8967.009.patch, YARN-8967.010.patch
>
>
> The PlacementRule interface was introduced to be used by all schedulers as 
> per YARN-3635. The CapacityScheduler is using it but the FairScheduler is not 
> and is using its own rule definition.
> YARN-8948 cleans up the implementation and removes the CS references which 
> should allow this change to go through.
> This would be the first step in using one placement rule engine for both 
> schedulers.






[jira] [Commented] (YARN-5670) Add support for Docker image clean up

2019-03-21 Thread Chandni Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16797855#comment-16797855
 ] 

Chandni Singh commented on YARN-5670:
-

{quote}
it is possible that some images tracking are lost from LRU and result in 
dangling images over time. 
{quote}
The cache can be backed by the NMStateStore, so even when the NM comes back 
or is restarted, it will know which images it localized (see the sketch 
below).

The reason I am against using {{docker image prune}} is that there can be 
multiple images on that node which an admin may have pulled explicitly or 
which some other process may have downloaded. Even if those images are not 
used within the last {{24 h}} or whatever time we have configured for the NM, 
the NM should not be the one deciding to remove them. It is surprising for 
the admin/other process when an image they pulled is mysteriously deleted.
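
A minimal sketch of the restart-safe LRU tracking described above 
({{ImageLruTracker}} is hypothetical; persisting to the NM state store is 
only indicated in a comment, since that API is not shown here):

{code}
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical illustration: an access-ordered map of image IDs the NM
// itself localized, evicting the least-recently-used entry first. Each
// mutation would also be persisted (e.g. to the NM state store) so the
// tracked set survives an NM restart.
class ImageLruTracker extends LinkedHashMap<String, Long> {
  private final int maxImages;

  ImageLruTracker(int maxImages) {
    super(16, 0.75f, true); // accessOrder=true gives LRU iteration order
    this.maxImages = maxImages;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<String, Long> eldest) {
    // The evicted image ID becomes a candidate for removal; images the
    // NM did not localize are never tracked, so never removed.
    return size() > maxImages;
  }
}
{code}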

> Add support for Docker image clean up
> -
>
> Key: YARN-5670
> URL: https://issues.apache.org/jira/browse/YARN-5670
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Eric Yang
>Priority: Major
>  Labels: Docker
> Attachments: Localization Support For Docker Images_002.pdf
>
>
> Regarding Docker image localization, we also need a way to clean up 
> old/stale Docker images to save storage space. We may extend the deletion 
> service to utilize "docker rm" to do this.
> This is related to YARN-3854 and may depend on its implementation. Please 
> refer to YARN-3854 for Docker image localization details.


