[jira] [Commented] (YARN-9283) Javadoc of LinuxContainerExecutor#addSchedPriorityCommand has a wrong property name as reference

2019-02-14 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769043#comment-16769043
 ] 

Akira Ajisaka commented on YARN-9283:
-

Would you try
{code}
{@link YarnConfiguration#NM_CONTAINER_EXECUTOR_SCHED_PRIORITY}
{code}
instead of writing the property directly?
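For illustration, a minimal sketch of what that suggestion looks like. The class and method below are stand-ins, not the real LinuxContainerExecutor; the constant is redeclared locally (mirroring the value quoted in the issue description) only so the example compiles on its own.

```java
// Sketch: reference the configuration constant with {@link} instead of
// hard-coding the property string in the Javadoc.
public class SchedPriorityJavadocExample {

    /** Stand-in for YarnConfiguration#NM_CONTAINER_EXECUTOR_SCHED_PRIORITY. */
    public static final String NM_CONTAINER_EXECUTOR_SCHED_PRIORITY =
        "yarn.nodemanager.container-executor.os.sched.priority.adjustment";

    /**
     * Adds a scheduling priority argument to the container launch command.
     * The adjustment is configured via
     * {@link #NM_CONTAINER_EXECUTOR_SCHED_PRIORITY}; linking the constant
     * keeps the Javadoc correct even if the property name changes later.
     */
    public static String addSchedPriorityCommand(int adjustment) {
        return "nice -n " + adjustment;
    }

    public static void main(String[] args) {
        System.out.println(addSchedPriorityCommand(0));
    }
}
```

With {@link}, the javadoc tool resolves the constant at build time, so a typo like the one this issue reports would fail the build instead of silently shipping wrong documentation.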

> Javadoc of LinuxContainerExecutor#addSchedPriorityCommand has a wrong 
> property name as reference
> 
>
> Key: YARN-9283
> URL: https://issues.apache.org/jira/browse/YARN-9283
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.2.0
>Reporter: Szilard Nemeth
>Assignee: Adam Antal
>Priority: Minor
>  Labels: newbie, newbie++
> Attachments: YARN-9283.000.patch
>
>
> The javadoc of LinuxContainerExecutor#addSchedPriorityCommand tries to refer 
> to the property 
> org.apache.hadoop.yarn.conf.YarnConfiguration#NM_CONTAINER_EXECUTOR_SCHED_PRIORITY
> which has the value: 
> "yarn.nodemanager.container-executor.os.sched.priority.adjustment" but the 
> javadoc contains the value: 
> "yarn.nodemanager.container-executer.os.sched.prioity" which is incorrect.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-9213) RM Web UI v1 does not show custom resource allocations for containers page

2019-02-14 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768987#comment-16768987
 ] 

Szilard Nemeth edited comment on YARN-9213 at 2/15/19 7:19 AM:
---

Hi [~sunilg] ! 
This kind of result I received only after I uploaded the screenshots. Will 
check soon if this patch still applies to trunk well.


was (Author: snemeth):
Hi [~sunilg] ! 
Rhis kind of result I received only after I uploaded the screenshots. Will 
check soo  if this patch still applies to trunk well

> RM Web UI v1 does not show custom resource allocations for containers page
> --
>
> Key: YARN-9213
> URL: https://issues.apache.org/jira/browse/YARN-9213
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: Screen Shot 2019-02-08 at 21.16.37-before.png, Screen 
> Shot 2019-02-09 at 9.55.16-after.png, YARN-9213.001.patch
>
>







[jira] [Updated] (YARN-6735) Have a way to turn off container metrics from NMs

2019-02-14 Thread Abhishek Modi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated YARN-6735:

Labels: atsv2  (was: )

> Have a way to turn off container metrics from NMs
> -
>
> Key: YARN-6735
> URL: https://issues.apache.org/jira/browse/YARN-6735
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Abhishek Modi
>Priority: Major
>  Labels: atsv2
> Fix For: 3.3.0
>
> Attachments: YARN-6735.001.patch, YARN-6735.002.patch, 
> YARN-6735.003.patch
>
>
> Have a way to turn off emitting system metrics from NMs






[jira] [Commented] (YARN-9213) RM Web UI v1 does not show custom resource allocations for containers page

2019-02-14 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769025#comment-16769025
 ] 

Szilard Nemeth commented on YARN-9213:
--

Hi [~sunilg]!
Sure, I uploaded the same patch as patch002.
I did not know the build is so dumb that it checks the latest files (regardless 
of extension). :S 

> RM Web UI v1 does not show custom resource allocations for containers page
> --
>
> Key: YARN-9213
> URL: https://issues.apache.org/jira/browse/YARN-9213
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: Screen Shot 2019-02-08 at 21.16.37-before.png, Screen 
> Shot 2019-02-09 at 9.55.16-after.png, YARN-9213.001.patch, YARN-9213.002.patch
>
>







[jira] [Updated] (YARN-9213) RM Web UI v1 does not show custom resource allocations for containers page

2019-02-14 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-9213:
-
Attachment: YARN-9213.002.patch

> RM Web UI v1 does not show custom resource allocations for containers page
> --
>
> Key: YARN-9213
> URL: https://issues.apache.org/jira/browse/YARN-9213
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: Screen Shot 2019-02-08 at 21.16.37-before.png, Screen 
> Shot 2019-02-09 at 9.55.16-after.png, YARN-9213.001.patch, YARN-9213.002.patch
>
>







[jira] [Commented] (YARN-9213) RM Web UI v1 does not show custom resource allocations for containers page

2019-02-14 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769020#comment-16769020
 ] 

Sunil Govindan commented on YARN-9213:
--

It seems Jenkins is picking up the images because they have the latest 
timestamp. Reattaching the same patch will solve the problem. [~snemeth], could 
you please help with that?

> RM Web UI v1 does not show custom resource allocations for containers page
> --
>
> Key: YARN-9213
> URL: https://issues.apache.org/jira/browse/YARN-9213
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: Screen Shot 2019-02-08 at 21.16.37-before.png, Screen 
> Shot 2019-02-09 at 9.55.16-after.png, YARN-9213.001.patch
>
>







[jira] [Commented] (YARN-8295) [UI2] Improve "Resource Usage" tab error message when there are no data available.

2019-02-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769018#comment-16769018
 ] 

Hudson commented on YARN-8295:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #15968 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15968/])
YARN-8295. [UI2] Improve Resource Usage tab error message when there are 
(sunilg: rev 5b55f3538cb27baf8ac08568a5f752423e7b29a4)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app/charts.hbs


> [UI2] Improve "Resource Usage" tab error message when there are no data 
> available.
> --
>
> Key: YARN-8295
> URL: https://issues.apache.org/jira/browse/YARN-8295
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-ui-v2
>Reporter: Gergely Novák
>Assignee: Charan Hebri
>Priority: Minor
> Attachments: YARN-8295.001.patch
>
>
> If the user goes to Applications -> app -> Resource Usage for a finished 
> application, they get this message: "No resource usage data is available for 
> this application!". 
> I think it would be better to hide this tab for finished applications, or at 
> least add something like "this application is not using any resources because 
> it is finished" to the message.






[jira] [Updated] (YARN-8295) [UI2] Improve "Resource Usage" tab error message when there are no data available.

2019-02-14 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8295:
-
Summary: [UI2] Improve "Resource Usage" tab error message when there are no 
data available.  (was: [UI2] The "Resource Usage" tab is pointless for finished 
applications)

> [UI2] Improve "Resource Usage" tab error message when there are no data 
> available.
> --
>
> Key: YARN-8295
> URL: https://issues.apache.org/jira/browse/YARN-8295
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-ui-v2
>Reporter: Gergely Novák
>Assignee: Charan Hebri
>Priority: Minor
> Attachments: YARN-8295.001.patch
>
>
> If the user goes to Applications -> app -> Resource Usage for a finished 
> application, they get this message: "No resource usage data is available for 
> this application!". 
> I think it would be better to hide this tab for finished applications, or at 
> least add something like "this application is not using any resources because 
> it is finished" to the message.






[jira] [Commented] (YARN-9284) Fix the unit of yarn.service.am-resource.memory in the document

2019-02-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769007#comment-16769007
 ] 

Hudson commented on YARN-9284:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15966 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15966/])
YARN-9284. Fix the unit of yarn.service.am-resource.memory in the (aajisaka: 
rev 3a39d9a2d28943316dd32196bba10ef326218d6e)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/Configurations.md


> Fix the unit of yarn.service.am-resource.memory in the document
> ---
>
> Key: YARN-9284
> URL: https://issues.apache.org/jira/browse/YARN-9284
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation, yarn-native-services
>Reporter: Masahiro Tanaka
>Assignee: Masahiro Tanaka
>Priority: Minor
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: YARN-9284.001.patch
>
>
> In the [YARN Services configuration 
> document|https://hadoop.apache.org/docs/r3.2.0/hadoop-yarn/hadoop-yarn-site/yarn-service/Configurations.html],
>  the description of {{yarn.service.am-resource.memory}} says, 
>  bq. Memory size in GB for the service AM (default 1024).
> which should be {{MB}} as this is used 
> [here|https://github.com/apache/hadoop/blob/49ddd8a6ed5b40d12defb0771b4c8b53d4ffde3f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java#L1008]
>  and {{Resource.newInstance}} creates {{LightWeightResource}}. The unit for 
> memory is megabytes as described in 
> [javadocs|https://github.com/apache/hadoop/blob/49ddd8a6ed5b40d12defb0771b4c8b53d4ffde3f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/LightWeightResource.java#L44]
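To make the unit mismatch concrete, here is a minimal sketch with a stand-in Resource type (not the real org.apache.hadoop.yarn.api.records.Resource): the configured value flows unchanged into a resource whose memory field is in megabytes, so the documented default of 1024 means 1024 MB, not 1024 GB.

```java
// Stand-in types only; in Hadoop the configured value is passed to
// Resource.newInstance, whose memory parameter is in megabytes.
public class AmResourceUnitExample {

    /** Minimal stand-in for org.apache.hadoop.yarn.api.records.Resource. */
    record Resource(long memoryMB, int vcores) {}

    /** The configured value is used as-is, so 1024 means 1024 MB (1 GB). */
    static Resource newAmResource(long configuredMemory) {
        return new Resource(configuredMemory, 1);
    }

    public static void main(String[] args) {
        Resource am = newAmResource(1024); // the documented default
        System.out.println(am.memoryMB() + " MB = "
            + (am.memoryMB() / 1024) + " GB");
    }
}
```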






[jira] [Updated] (YARN-9284) Fix the unit of yarn.service.am-resource.memory in the document

2019-02-14 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-9284:

   Priority: Minor  (was: Major)
Component/s: documentation

> Fix the unit of yarn.service.am-resource.memory in the document
> ---
>
> Key: YARN-9284
> URL: https://issues.apache.org/jira/browse/YARN-9284
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Reporter: Masahiro Tanaka
>Assignee: Masahiro Tanaka
>Priority: Minor
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: YARN-9284.001.patch
>
>
> In the [YARN Services configuration 
> document|https://hadoop.apache.org/docs/r3.2.0/hadoop-yarn/hadoop-yarn-site/yarn-service/Configurations.html],
>  the description of {{yarn.service.am-resource.memory}} says, 
>  bq. Memory size in GB for the service AM (default 1024).
> which should be {{MB}} as this is used 
> [here|https://github.com/apache/hadoop/blob/49ddd8a6ed5b40d12defb0771b4c8b53d4ffde3f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java#L1008]
>  and {{Resource.newInstance}} creates {{LightWeightResource}}. The unit for 
> memory is megabytes as described in 
> [javadocs|https://github.com/apache/hadoop/blob/49ddd8a6ed5b40d12defb0771b4c8b53d4ffde3f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/LightWeightResource.java#L44]






[jira] [Commented] (YARN-9284) Fix the unit of yarn.service.am-resource.memory in the document

2019-02-14 Thread Masahiro Tanaka (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769001#comment-16769001
 ] 

Masahiro Tanaka commented on YARN-9284:
---

Thanks [~ajisakaa] for reviewing & committing this!

> Fix the unit of yarn.service.am-resource.memory in the document
> ---
>
> Key: YARN-9284
> URL: https://issues.apache.org/jira/browse/YARN-9284
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation, yarn-native-services
>Reporter: Masahiro Tanaka
>Assignee: Masahiro Tanaka
>Priority: Minor
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: YARN-9284.001.patch
>
>
> In the [YARN Services configuration 
> document|https://hadoop.apache.org/docs/r3.2.0/hadoop-yarn/hadoop-yarn-site/yarn-service/Configurations.html],
>  the description of {{yarn.service.am-resource.memory}} says, 
>  bq. Memory size in GB for the service AM (default 1024).
> which should be {{MB}} as this is used 
> [here|https://github.com/apache/hadoop/blob/49ddd8a6ed5b40d12defb0771b4c8b53d4ffde3f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java#L1008]
>  and {{Resource.newInstance}} creates {{LightWeightResource}}. The unit for 
> memory is megabytes as described in 
> [javadocs|https://github.com/apache/hadoop/blob/49ddd8a6ed5b40d12defb0771b4c8b53d4ffde3f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/LightWeightResource.java#L44]






[jira] [Updated] (YARN-9284) Fix the unit of yarn.service.am-resource.memory in the document

2019-02-14 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-9284:

Component/s: yarn-native-services

> Fix the unit of yarn.service.am-resource.memory in the document
> ---
>
> Key: YARN-9284
> URL: https://issues.apache.org/jira/browse/YARN-9284
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation, yarn-native-services
>Reporter: Masahiro Tanaka
>Assignee: Masahiro Tanaka
>Priority: Minor
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: YARN-9284.001.patch
>
>
> In the [YARN Services configuration 
> document|https://hadoop.apache.org/docs/r3.2.0/hadoop-yarn/hadoop-yarn-site/yarn-service/Configurations.html],
>  the description of {{yarn.service.am-resource.memory}} says, 
>  bq. Memory size in GB for the service AM (default 1024).
> which should be {{MB}} as this is used 
> [here|https://github.com/apache/hadoop/blob/49ddd8a6ed5b40d12defb0771b4c8b53d4ffde3f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java#L1008]
>  and {{Resource.newInstance}} creates {{LightWeightResource}}. The unit for 
> memory is megabytes as described in 
> [javadocs|https://github.com/apache/hadoop/blob/49ddd8a6ed5b40d12defb0771b4c8b53d4ffde3f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/LightWeightResource.java#L44]






[jira] [Updated] (YARN-9284) Fix the unit of yarn.service.am-resource.memory in the document

2019-02-14 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-9284:

Summary: Fix the unit of yarn.service.am-resource.memory in the document  
(was: Fix yarn.service.am-resource.memory default memory size in the document)

> Fix the unit of yarn.service.am-resource.memory in the document
> ---
>
> Key: YARN-9284
> URL: https://issues.apache.org/jira/browse/YARN-9284
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Masahiro Tanaka
>Assignee: Masahiro Tanaka
>Priority: Major
> Attachments: YARN-9284.001.patch
>
>
> In the [YARN Services configuration 
> document|https://hadoop.apache.org/docs/r3.2.0/hadoop-yarn/hadoop-yarn-site/yarn-service/Configurations.html],
>  the description of {{yarn.service.am-resource.memory}} says, 
>  bq. Memory size in GB for the service AM (default 1024).
> which should be {{MB}} as this is used 
> [here|https://github.com/apache/hadoop/blob/49ddd8a6ed5b40d12defb0771b4c8b53d4ffde3f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java#L1008]
>  and {{Resource.newInstance}} creates {{LightWeightResource}}. The unit for 
> memory is megabytes as described in 
> [javadocs|https://github.com/apache/hadoop/blob/49ddd8a6ed5b40d12defb0771b4c8b53d4ffde3f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/LightWeightResource.java#L44]






[jira] [Commented] (YARN-9213) RM Web UI v1 does not show custom resource allocations for containers page

2019-02-14 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768987#comment-16768987
 ] 

Szilard Nemeth commented on YARN-9213:
--

Hi [~sunilg] ! 
This kind of result I received only after I uploaded the screenshots. Will 
check soon if this patch still applies to trunk well.

> RM Web UI v1 does not show custom resource allocations for containers page
> --
>
> Key: YARN-9213
> URL: https://issues.apache.org/jira/browse/YARN-9213
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: Screen Shot 2019-02-08 at 21.16.37-before.png, Screen 
> Shot 2019-02-09 at 9.55.16-after.png, YARN-9213.001.patch
>
>







[jira] [Commented] (YARN-9284) Fix yarn.service.am-resource.memory default memory size in the document

2019-02-14 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768986#comment-16768986
 ] 

Akira Ajisaka commented on YARN-9284:
-

+1

> Fix yarn.service.am-resource.memory default memory size in the document
> ---
>
> Key: YARN-9284
> URL: https://issues.apache.org/jira/browse/YARN-9284
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Masahiro Tanaka
>Assignee: Masahiro Tanaka
>Priority: Major
> Attachments: YARN-9284.001.patch
>
>
> In the [YARN Services configuration 
> document|https://hadoop.apache.org/docs/r3.2.0/hadoop-yarn/hadoop-yarn-site/yarn-service/Configurations.html],
>  the description of {{yarn.service.am-resource.memory}} says, 
>  bq. Memory size in GB for the service AM (default 1024).
> which should be {{MB}} as this is used 
> [here|https://github.com/apache/hadoop/blob/49ddd8a6ed5b40d12defb0771b4c8b53d4ffde3f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java#L1008]
>  and {{Resource.newInstance}} creates {{LightWeightResource}}. The unit for 
> memory is megabytes as described in 
> [javadocs|https://github.com/apache/hadoop/blob/49ddd8a6ed5b40d12defb0771b4c8b53d4ffde3f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/LightWeightResource.java#L44]






[jira] [Updated] (YARN-8404) Timeline event publish need to be async to avoid Dispatcher thread leak in case ATS is down

2019-02-14 Thread Rohith Sharma K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-8404:

Labels: atsv2  (was: )

> Timeline event publish need to be async to avoid Dispatcher thread leak in 
> case ATS is down
> ---
>
> Key: YARN-8404
> URL: https://issues.apache.org/jira/browse/YARN-8404
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.0.2
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Blocker
>  Labels: atsv2
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: YARN-8404.01.patch
>
>
> It is observed that if the ATS1/1.5 daemon is not running, RM recovery is 
> delayed until the timeline client times out for each application. By default, 
> the timeout takes around 5 minutes. If there are many completed applications, 
> the amount of time the RM will wait is *(number of completed applications in 
> the cluster * 5 minutes)*, which effectively looks like a hang. 
> The primary reason for this behavior is YARN-3044 and YARN-4129, which 
> refactored the existing system metrics publisher. This refactoring made the 
> appFinished event synchronous, whereas it was previously asynchronous. 
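The asynchronous pattern the title asks for can be sketched as follows (hypothetical names, not the actual Hadoop patch): events are submitted to a background executor, so when ATS is down the timeout happens off the recovery path instead of blocking it. With 100 completed applications and a 5-minute timeout, the synchronous path would stall recovery for roughly 500 minutes.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of async timeline publishing: the caller (e.g. RM
// recovery) returns immediately; any timeout happens on the publisher
// thread instead of blocking recovery.
public class AsyncPublishExample {

    private static final ExecutorService PUBLISHER =
        Executors.newSingleThreadExecutor();

    /** Fire-and-forget publish; the real timeline client call would go here. */
    static Future<String> publishAppFinished(String appId) {
        return PUBLISHER.submit(() -> "published appFinished for " + appId);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(publishAppFinished("application_1_0001").get());
        PUBLISHER.shutdown();
    }
}
```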






[jira] [Updated] (YARN-8455) Add basic ACL check for all ATS v2 REST APIs

2019-02-14 Thread Rohith Sharma K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-8455:

Labels: atsv2  (was: )

> Add basic ACL check for all ATS v2 REST APIs
> 
>
> Key: YARN-8455
> URL: https://issues.apache.org/jira/browse/YARN-8455
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
>  Labels: atsv2
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8455.001.patch, YARN-8455.002.patch, 
> YARN-8455.003.patch, YARN-8455.004.patch
>
>
> YARN-8319 added a filter check for the flows pages. The same behavior needs 
> to be added for all other REST APIs as long as ATS provides support for ACLs.






[jira] [Created] (YARN-9309) Improvise graphs in SLS as values displayed in graph are overlapping

2019-02-14 Thread Bilwa S T (JIRA)
Bilwa S T created YARN-9309:
---

 Summary: Improvise graphs in SLS as values displayed in graph are 
overlapping
 Key: YARN-9309
 URL: https://issues.apache.org/jira/browse/YARN-9309
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Bilwa S T
Assignee: Bilwa S T









[jira] [Updated] (YARN-8512) ATSv2 entities are not published to HBase from second attempt onwards

2019-02-14 Thread Rohith Sharma K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-8512:

Labels: atsv2  (was: )

> ATSv2 entities are not published to HBase from second attempt onwards
> -
>
> Key: YARN-8512
> URL: https://issues.apache.org/jira/browse/YARN-8512
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 2.10.0, 3.2.0, 3.0.3
>Reporter: Yesha Vora
>Assignee: Rohith Sharma K S
>Priority: Major
>  Labels: atsv2
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8512.01.patch, YARN-8512.02.patch, 
> YARN-8512.03.patch
>
>
> It is observed that the 1st attempt's master container can die and the 2nd 
> attempt's master container can be launched on an NM where old containers are 
> running but not the master container. 
> ||Attempt||NM1||NM2||Action||
> |attempt-1|master container i.e container-1-1|container-1-2|master container 
> died|
> |attempt-2|NA|container-1-2 and master container container-2-1|NA|
> In the above scenario, the NM doesn't identify the flowContext and will log 
> the warning below:
> {noformat}
> 2018-07-10 00:44:38,285 WARN  storage.HBaseTimelineWriterImpl 
> (HBaseTimelineWriterImpl.java:write(170)) - Found null for one of: 
> flowName=null appId=application_1531175172425_0001 userId=hbase 
> clusterId=yarn-cluster . Not proceeding with writing to hbase
> 2018-07-10 00:44:38,560 WARN  storage.HBaseTimelineWriterImpl 
> (HBaseTimelineWriterImpl.java:write(170)) - Found null for one of: 
> flowName=null appId=application_1531175172425_0001 userId=hbase 
> clusterId=yarn-cluster . Not proceeding with writing to hbase
> {noformat}






[jira] [Updated] (YARN-8630) ATSv2 REST APIs should honor filter-entity-list-by-user in non-secure cluster when ACls are enabled

2019-02-14 Thread Rohith Sharma K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-8630:

Labels: atsv2  (was: )

> ATSv2 REST APIs should honor filter-entity-list-by-user in non-secure cluster 
> when ACls are enabled
> ---
>
> Key: YARN-8630
> URL: https://issues.apache.org/jira/browse/YARN-8630
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
>  Labels: atsv2
> Fix For: 3.2.0, 3.1.2
>
> Attachments: YARN-8630.01.patch
>
>
> It is observed that ATSv2 REST endpoints are not honoring 
> *yarn.webapp.filter-entity-list-by-user* in a non-secure cluster when ACLs 
> are enabled. 
> The issue can be seen if the static web app filter is not configured in a 
> non-secure cluster.  






[jira] [Commented] (YARN-9308) fairscheduler-statedump.log gets generated regardless of service again after the merge of HDFS-7240

2019-02-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768973#comment-16768973
 ] 

Hudson commented on YARN-9308:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15963 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15963/])
YARN-9308. fairscheduler-statedump.log gets generated regardless of (aajisaka: 
rev dabfeab7854aab9b1eacf05bca954f2cf4e5ab89)
* (edit) hadoop-common-project/hadoop-common/src/main/conf/log4j.properties


> fairscheduler-statedump.log gets generated regardless of service again after 
> the merge of HDFS-7240
> ---
>
> Key: YARN-9308
> URL: https://issues.apache.org/jira/browse/YARN-9308
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, scheduler
>Affects Versions: 3.2.0
>Reporter: Akira Ajisaka
>Assignee: Wilfred Spiegelenburg
>Priority: Blocker
> Fix For: 3.3.0, 3.2.1
>
> Attachments: YARN-9308.001.patch
>
>
> After the merge of HDFS-7240, YARN-6453 occurred again.






[jira] [Commented] (YARN-9213) RM Web UI v1 does not show custom resource allocations for containers page

2019-02-14 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768971#comment-16768971
 ] 

Sunil Govindan commented on YARN-9213:
--

Jenkins result seems a bit old. Kicking jenkins again.

> RM Web UI v1 does not show custom resource allocations for containers page
> --
>
> Key: YARN-9213
> URL: https://issues.apache.org/jira/browse/YARN-9213
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: Screen Shot 2019-02-08 at 21.16.37-before.png, Screen 
> Shot 2019-02-09 at 9.55.16-after.png, YARN-9213.001.patch
>
>







[jira] [Commented] (YARN-9213) RM Web UI v1 does not show custom resource allocations for containers page

2019-02-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768970#comment-16768970
 ] 

Hadoop QA commented on YARN-9213:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} YARN-9213 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-9213 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/23412/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RM Web UI v1 does not show custom resource allocations for containers page
> --
>
> Key: YARN-9213
> URL: https://issues.apache.org/jira/browse/YARN-9213
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: Screen Shot 2019-02-08 at 21.16.37-before.png, Screen 
> Shot 2019-02-09 at 9.55.16-after.png, YARN-9213.001.patch
>
>







[jira] [Updated] (YARN-8486) yarn.webapp.filter-entity-list-by-user should honor limit filter for TS reader flows api

2019-02-14 Thread Rohith Sharma K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-8486:

Labels: atsv2  (was: )

> yarn.webapp.filter-entity-list-by-user should honor limit filter for TS 
> reader flows api
> 
>
> Key: YARN-8486
> URL: https://issues.apache.org/jira/browse/YARN-8486
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Charan Hebri
>Assignee: Rohith Sharma K S
>Priority: Major
>  Labels: atsv2
>
> Post YARN-8319, flows restrict entities per user. If a limit is applied to the 
> flows API, the returned values are inconsistent: if the back end returns 10 
> rows and none of them belong to user1, the flows API returns an empty result. 
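
The inconsistency described above can be sketched outside of Hadoop. This is an illustrative model only — the row shapes and helper names below are made up for the sketch and are not the actual TimelineReader code:

```python
# Hypothetical sketch of the YARN-8486 inconsistency: applying the row limit
# in the back end *before* the per-user filter can drop all of a user's flows.

def flows_limit_then_filter(rows, user, limit):
    """Back end returns at most `limit` rows, then the reader filters by user."""
    returned = rows[:limit]
    return [r for r in returned if r["user"] == user]

def flows_filter_then_limit(rows, user, limit):
    """Filter by user first, then apply the limit -- the consistent behaviour."""
    matching = [r for r in rows if r["user"] == user]
    return matching[:limit]

# Ten rows for user2, and user1's single flow sitting just past the limit.
rows = [{"id": i, "user": "user2"} for i in range(10)] + [{"id": 10, "user": "user1"}]

print(len(flows_limit_then_filter(rows, "user1", 10)))   # 0 -> empty result
print(len(flows_filter_then_limit(rows, "user1", 10)))   # 1 -> user1's flow found
```

With limit-then-filter, user1's flow never reaches the filter, which matches the empty response described in the issue.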






[jira] [Updated] (YARN-8492) ATSv2 HBase tests are failing with ClassNotFoundException

2019-02-14 Thread Rohith Sharma K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-8492:

Labels: atsv2 test  (was: test)

> ATSv2 HBase tests are failing with ClassNotFoundException
> -
>
> Key: YARN-8492
> URL: https://issues.apache.org/jira/browse/YARN-8492
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
>  Labels: atsv2, test
> Fix For: 3.2.0
>
> Attachments: YARN-8492.01.patch, YARN-8492.02.patch
>
>
> It is seen in a recent QA report that the ATSv2 HBase tests are failing with 
> ClassNotFoundException.
> This looks to be a regression from a hadoop-common patch or some other patch. We 
> need to figure out which JIRA broke this and fix the test failures.
>  hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun
>       
> hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageSchema
>       
> hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageEntities
>       hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps
>       hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown
>       
> hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
>       
> hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageDomain
>       
> hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
>       
> hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity
>  
> {noformat}
> ERROR] 
> org.apache.hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps
>   Time elapsed: 0.102 s  <<< ERROR!
> java.lang.NoClassDefFoundError: 
> org/apache/hadoop/crypto/key/KeyProviderTokenIssuer
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps.setupBeforeClass(TestHBaseTimelineStorageApps.java:97)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.crypto.key.KeyProviderTokenIssuer
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> {noformat}






[jira] [Updated] (YARN-8591) [ATSv2] NPE while checking for entity acl in non-secure cluster

2019-02-14 Thread Rohith Sharma K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-8591:

Labels: atsv2  (was: )

> [ATSv2] NPE while checking for entity acl in non-secure cluster
> ---
>
> Key: YARN-8591
> URL: https://issues.apache.org/jira/browse/YARN-8591
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelinereader, timelineserver
>Reporter: Akhil PB
>Assignee: Rohith Sharma K S
>Priority: Major
>  Labels: atsv2
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8591.01.patch
>
>
> {code:java}
> GET 
> http://ctr-e138-1518143905142-417433-01-04.hwx.site:8198/ws/v2/timeline/apps/application_1532578985272_0002/entities/YARN_CONTAINER?fields=ALL&_=1532670071899{code}
> {code:java}
> 2018-07-27 05:32:03,468 WARN  webapp.GenericExceptionHandler 
> (GenericExceptionHandler.java:toResponse(98)) - INTERNAL_SERVER_ERROR
> javax.ws.rs.WebApplicationException: java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderWebServices.handleException(TimelineReaderWebServices.java:196)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderWebServices.getEntities(TimelineReaderWebServices.java:624)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderWebServices.getEntities(TimelineReaderWebServices.java:474)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.security.TimelineReaderWhitelistAuthorizationFilter.doFilter(TimelineReaderWhitelistAuthorizationFilter.java:85)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.security.http.CrossOriginFilter.doFilter(CrossOriginFilter.java:98)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1604)
> at 
> 

[jira] [Updated] (YARN-8950) Compilation fails with dependency convergence error for hbase.profile=2.0

2019-02-14 Thread Rohith Sharma K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-8950:

Labels: atsv2  (was: )

> Compilation fails with dependency convergence error for hbase.profile=2.0
> -
>
> Key: YARN-8950
> URL: https://issues.apache.org/jira/browse/YARN-8950
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.2.0, 3.3.0
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Blocker
>  Labels: atsv2
> Fix For: 3.2.0, 3.1.2, 3.3.0
>
> Attachments: YARN-8950.01.patch, YARN-8950.01.patch, 
> YARN-8950.01.patch, with-patch-compile-pass.out, 
> without-patch-compile-fail.out
>
>
> The dependency check for the hbase-client package fails when the source code is 
> compiled with *-Dhbase.profile=2.0*:
> {noformat}
> [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (depcheck) @ 
> hadoop-yarn-server-timelineservice-hbase-client ---
> [WARNING]
> Dependency convergence error for 
> org.eclipse.jetty:jetty-http:9.3.24.v20180605 paths to dependency are:
> +-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-client:3.3.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.3.0-SNAPSHOT
> +-org.eclipse.jetty:jetty-server:9.3.24.v20180605
>   +-org.eclipse.jetty:jetty-http:9.3.24.v20180605
> and
> +-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-client:3.3.0-SNAPSHOT
>   +-org.apache.hbase:hbase-server:2.0.0-beta-1
> +-org.apache.hbase:hbase-http:2.0.0-beta-1
>   +-org.eclipse.jetty:jetty-http:9.3.19.v20170502
> [WARNING]
> Dependency convergence error for 
> org.eclipse.jetty:jetty-security:9.3.24.v20180605 paths to dependency are:
> +-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-client:3.3.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.3.0-SNAPSHOT
> +-org.eclipse.jetty:jetty-servlet:9.3.24.v20180605
>   +-org.eclipse.jetty:jetty-security:9.3.24.v20180605
> and
> +-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-client:3.3.0-SNAPSHOT
>   +-org.apache.hbase:hbase-server:2.0.0-beta-1
> +-org.apache.hbase:hbase-http:2.0.0-beta-1
>   +-org.eclipse.jetty:jetty-security:9.3.19.v20170502
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
> failed with message:
> Failed while enforcing releasability. See above detailed error message.
> {noformat}






[jira] [Updated] (YARN-9034) ApplicationCLI should have option to take clusterId

2019-02-14 Thread Rohith Sharma K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-9034:

Labels: atsv2  (was: )

> ApplicationCLI should have option to take clusterId
> ---
>
> Key: YARN-9034
> URL: https://issues.apache.org/jira/browse/YARN-9034
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
>  Labels: atsv2
> Fix For: 3.3.0
>
> Attachments: YARN-9034.01.patch, YARN-9034.02.patch, 
> YARN-9034.03.patch, YARN-9034.04.patch
>
>
> Post YARN-8303, LogsCLI provides an option to input a cluster id, which can be 
> used for fetching data from ATSv2. ApplicationCLI should also have this 
> option.






[jira] [Updated] (YARN-9251) Build failure for -Dhbase.profile=2.0

2019-02-14 Thread Rohith Sharma K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-9251:

Labels: atsv2  (was: )

> Build failure for -Dhbase.profile=2.0
> -
>
> Key: YARN-9251
> URL: https://issues.apache.org/jira/browse/YARN-9251
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Blocker
>  Labels: atsv2
> Fix For: 3.3.0
>
> Attachments: HADOOP-16088.01.patch
>
>
> Post HADOOP-14178, the Hadoop build fails due to an incorrect pom.xml. 
> {noformat}
> HW12723:hadoop rsharmaks$ mvn clean install -DskipTests -DskipShade 
> -Dhbase.profile=2.0
> [INFO] Scanning for projects...
> [ERROR] [ERROR] Some problems were encountered while processing the POMs:
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar is 
> missing. @ line 485, column 21
>  @
> [ERROR] The build could not read 1 project -> [Help 1]
> [ERROR]
> [ERROR]   The project 
> org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-tests:3.3.0-SNAPSHOT
>  
> (/Users/rsharmaks/Repos/Apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/pom.xml)
>  has 1 error
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar 
> is missing. @ line 485, column 21
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
> {noformat}
> cc:/ [~ajisakaa]






[jira] [Updated] (YARN-9044) LogsCLI should contact ATSv2 for "-am" option

2019-02-14 Thread Rohith Sharma K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-9044:

Labels: atsv2  (was: )

> LogsCLI should contact ATSv2 for "-am" option
> -
>
> Key: YARN-9044
> URL: https://issues.apache.org/jira/browse/YARN-9044
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
>  Labels: atsv2
> Fix For: 3.3.0
>
> Attachments: YARN-9044.01.patch, YARN-9044.01.patch, 
> YARN-9044.02.patch
>
>
> *yarn logs -applicationId appId -am 1* contacts ATS 1.5 even though it is not 
> configured. Instead, LogsCLI should contact ATSv2 for the AM container info. 
> Alternatively, one can use *yarn logs -containerId * 
> to fetch logs, but the -am option should also work along with ATSv2.






[jira] [Updated] (YARN-9242) Revert YARN-8270 from branch-3.1 and branch-3.1.2

2019-02-14 Thread Rohith Sharma K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-9242:

Labels: atsv2  (was: )

> Revert YARN-8270 from branch-3.1 and branch-3.1.2
> -
>
> Key: YARN-9242
> URL: https://issues.apache.org/jira/browse/YARN-9242
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Blocker
>  Labels: atsv2
>
> It is observed that in hadoop-3.1-RC0, NodeManagers are unable to initialize 
> TimelineCollectorWebService! 
> The primary reason is that HADOOP-15657 is not present in the hadoop-3.1 branch. 
> The following error is seen in the NM logs:
> {noformat}
> Caused by: org.apache.hadoop.metrics2.MetricsException: Unsupported metric 
> field putEntitiesFailureLatency of type 
> org.apache.hadoop.metrics2.lib.MutableQuantiles
>   at 
> org.apache.hadoop.metrics2.lib.MutableMetricsFactory.newForField(MutableMetricsFactory.java:87)
> {noformat}
> We need to revert YARN-8270 from branch-3.1!






[jira] [Updated] (YARN-8834) Provide Java client for fetching Yarn specific entities from TimelineReader

2019-02-14 Thread Vrushali C (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-8834:
-
Labels: atsv2  (was: )

> Provide Java client for fetching Yarn specific entities from TimelineReader
> ---
>
> Key: YARN-8834
> URL: https://issues.apache.org/jira/browse/YARN-8834
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Abhishek Modi
>Priority: Critical
>  Labels: atsv2
> Fix For: 3.1.2, 3.3.0, 3.2.1
>
> Attachments: YARN-8834.001.patch, YARN-8834.002.patch, 
> YARN-8834.003.patch, YARN-8834.004.patch, YARN-8834.005.patch, 
> YARN-8834.006.patch
>
>
> While reviewing YARN-8303, we felt that it is necessary to provide a 
> TimelineReaderClient which wraps all the REST calls, so that users can 
> just provide application or container ids along with filters. Currently, 
> fetching entities from TimelineReader is possible only via REST calls, or 
> somebody needs to write a Java client to get entities.
> It would be good to provide a TimelineReaderClient which fetches entities from 
> TimelineReaderServer. This would be more useful.
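
The REST call such a client would wrap can be sketched as follows. The endpoint shape follows the /ws/v2/timeline path that appears elsewhere in this thread (YARN-8591); the host, port, and function name here are placeholders for illustration, not a real Hadoop API:

```python
# Sketch of the TimelineReader v2 REST URL that a Java TimelineReaderClient
# would construct internally. Host/port are hypothetical placeholders.

def container_entities_url(host, port, app_id, fields="ALL"):
    """Build the TimelineReader v2 URL for YARN_CONTAINER entities of an app."""
    return (f"http://{host}:{port}/ws/v2/timeline/apps/{app_id}"
            f"/entities/YARN_CONTAINER?fields={fields}")

url = container_entities_url("reader.example.com", 8198,
                             "application_1532578985272_0002")
print(url)
```

A wrapper client saves every caller from assembling and parsing such URLs by hand.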






[jira] [Commented] (YARN-9308) fairscheduler-statedump.log gets generated regardless of service again after the merge of HDFS-7240

2019-02-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768915#comment-16768915
 ] 

Hadoop QA commented on YARN-9308:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
35m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
5s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9308 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958811/YARN-9308.001.patch |
| Optional Tests |  dupname  asflicense  mvnsite  unit  |
| uname | Linux 70567ea30cd9 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6c8ffdb |
| maven | version: Apache Maven 3.3.9 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23411/testReport/ |
| Max. process+thread count | 339 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/23411/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> fairscheduler-statedump.log gets generated regardless of service again after 
> the merge of HDFS-7240
> ---
>
> Key: YARN-9308
> URL: https://issues.apache.org/jira/browse/YARN-9308
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, scheduler
>Affects Versions: 3.2.0
>Reporter: Akira Ajisaka
>Assignee: Wilfred Spiegelenburg
>Priority: Blocker
> Attachments: YARN-9308.001.patch
>
>
> After the merge of HDFS-7240, YARN-6453 occurred again.






[jira] [Commented] (YARN-9308) fairscheduler-statedump.log gets generated regardless of service again after the merge of HDFS-7240

2019-02-14 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768901#comment-16768901
 ] 

Akira Ajisaka commented on YARN-9308:
-

LGTM, +1 pending Jenkins.

> fairscheduler-statedump.log gets generated regardless of service again after 
> the merge of HDFS-7240
> ---
>
> Key: YARN-9308
> URL: https://issues.apache.org/jira/browse/YARN-9308
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, scheduler
>Affects Versions: 3.2.0
>Reporter: Akira Ajisaka
>Assignee: Wilfred Spiegelenburg
>Priority: Blocker
> Attachments: YARN-9308.001.patch
>
>
> After the merge of HDFS-7240, YARN-6453 occurred again.






[jira] [Commented] (YARN-9298) Implement FS placement rules using PlacementRule interface

2019-02-14 Thread Wilfred Spiegelenburg (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768900#comment-16768900
 ] 

Wilfred Spiegelenburg commented on YARN-9298:
-

[~cheersyang] Can you please check this?

> Implement FS placement rules using PlacementRule interface
> --
>
> Key: YARN-9298
> URL: https://issues.apache.org/jira/browse/YARN-9298
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-9298.001.patch
>
>
> Implement existing placement rules of the FS using the PlacementRule 
> interface.
> Preparation for YARN-8967






[jira] [Updated] (YARN-9308) fairscheduler-statedump.log gets generated regardless of service again after the merge of HDFS-7240

2019-02-14 Thread Wilfred Spiegelenburg (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wilfred Spiegelenburg updated YARN-9308:

Attachment: YARN-9308.001.patch

> fairscheduler-statedump.log gets generated regardless of service again after 
> the merge of HDFS-7240
> ---
>
> Key: YARN-9308
> URL: https://issues.apache.org/jira/browse/YARN-9308
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, scheduler
>Affects Versions: 3.2.0
>Reporter: Akira Ajisaka
>Assignee: Wilfred Spiegelenburg
>Priority: Blocker
> Attachments: YARN-9308.001.patch
>
>
> After the merge of HDFS-7240, YARN-6453 occurred again.






[jira] [Commented] (YARN-9308) fairscheduler-statedump.log gets generated regardless of service again after the merge of HDFS-7240

2019-02-14 Thread Wilfred Spiegelenburg (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768891#comment-16768891
 ] 

Wilfred Spiegelenburg commented on YARN-9308:
-

The changes from [HDFS-7240 git commit 
fixup|https://github.com/apache/hadoop/commit/2adda92de1535c0472c0df33a145fa1814703f4f]
 added the log config lines back without the comment marks.
 I will upload a patch to fix it up again.

> fairscheduler-statedump.log gets generated regardless of service again after 
> the merge of HDFS-7240
> ---
>
> Key: YARN-9308
> URL: https://issues.apache.org/jira/browse/YARN-9308
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, scheduler
>Affects Versions: 3.2.0
>Reporter: Akira Ajisaka
>Assignee: Wilfred Spiegelenburg
>Priority: Blocker
>
> After the merge of HDFS-7240, YARN-6453 occurred again.






[jira] [Assigned] (YARN-9308) fairscheduler-statedump.log gets generated regardless of service again after the merge of HDFS-7240

2019-02-14 Thread Wilfred Spiegelenburg (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wilfred Spiegelenburg reassigned YARN-9308:
---

Assignee: Wilfred Spiegelenburg

> fairscheduler-statedump.log gets generated regardless of service again after 
> the merge of HDFS-7240
> ---
>
> Key: YARN-9308
> URL: https://issues.apache.org/jira/browse/YARN-9308
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, scheduler
>Affects Versions: 3.2.0
>Reporter: Akira Ajisaka
>Assignee: Wilfred Spiegelenburg
>Priority: Blocker
>
> After the merge of HDFS-7240, YARN-6453 occurred again.






[jira] [Commented] (YARN-8927) Support trust top-level image like "centos" when "library" is configured in "docker.trusted.registries"

2019-02-14 Thread Zhankun Tang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768880#comment-16768880
 ] 

Zhankun Tang commented on YARN-8927:


[~eyang], sorry for the late reply. I just went through the above discussions. It 
makes sense to me to check local image existence in the Java layer, since that 
layer controls whether to pull or not.

One thing I'm not sure about is whether this differs by Docker version: I cannot 
retag an image with "/" in its tag in my Ubuntu VM. The Docker version is 
"18.06.1-ce". [~ebadger], can you do this in your environment?
{code:java}
root@master0-VirtualBox:/opt/code/hadoop# docker tag 
tangzhankun/repo1/sub1/tensorflow tensorflow:repo1/sub1
Error parsing reference: "tensorflow:repo1/sub1" is not a valid repository/tag: 
invalid reference format{code}
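
The error above is likely not a version difference but a consequence of Docker's reference grammar: the tag portion (after ':') may not contain '/', while the repository portion may. A minimal sketch of that rule, assuming Docker's documented tag format (first character alphanumeric or underscore, then up to 127 characters from [A-Za-z0-9._-]):

```python
import re

# Assumed approximation of Docker's tag grammar; not taken from Docker's source.
TAG_RE = re.compile(r"^[A-Za-z0-9_][A-Za-z0-9._-]{0,127}$")

def is_valid_tag(tag):
    """Return True if `tag` is a syntactically valid Docker image tag."""
    return bool(TAG_RE.match(tag))

print(is_valid_tag("latest"))       # accepted
print(is_valid_tag("repo1/sub1"))   # rejected: '/' is not allowed in a tag,
                                    # matching the "invalid reference format"
                                    # error shown above
```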
 

> Support trust top-level image like "centos" when "library" is configured in 
> "docker.trusted.registries"
> ---
>
> Key: YARN-8927
> URL: https://issues.apache.org/jira/browse/YARN-8927
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8927-trunk.001.patch, YARN-8927-trunk.002.patch
>
>
> There are some missing cases that we need to catch when handling 
> "docker.trusted.registries".
> The container-executor.cfg configuration is as follows:
> {code:java}
> docker.trusted.registries=tangzhankun,ubuntu,centos{code}
> It works if we run DistributedShell with "tangzhankun/tensorflow":
> {code:java}
> "yarn ... -shell_env YARN_CONTAINER_RUNTIME_TYPE=docker -shell_env 
> YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=tangzhankun/tensorflow
> {code}
> But running a DistributedShell job with "centos", "centos[:tagName]", "ubuntu", 
> and "ubuntu[:tagName]" fails.
> The error message is like:
> {code:java}
> "image: centos is not trusted"
> {code}
> We need better handling the above cases.






[jira] [Created] (YARN-9308) fairscheduler-statedump.log gets generated regardless of service again after the merge of HDFS-7240

2019-02-14 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created YARN-9308:
---

 Summary: fairscheduler-statedump.log gets generated regardless of 
service again after the merge of HDFS-7240
 Key: YARN-9308
 URL: https://issues.apache.org/jira/browse/YARN-9308
 Project: Hadoop YARN
  Issue Type: Bug
  Components: fairscheduler, scheduler
Affects Versions: 3.2.0
Reporter: Akira Ajisaka


After the merge of HDFS-7240, YARN-6453 occurred again.






[jira] [Commented] (YARN-9307) node_partitions constraint does not work

2019-02-14 Thread kyungwan nam (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768856#comment-16768856
 ] 

kyungwan nam commented on YARN-9307:


I attached a patch for hadoop-3.1.
It seems this was fixed in hadoop-3.2 by YARN-7863,
but it still needs to be fixed in the hadoop-3.1 line.

> node_partitions constraint does not work
> 
>
> Key: YARN-9307
> URL: https://issues.apache.org/jira/browse/YARN-9307
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: kyungwan nam
>Priority: Major
> Attachments: YARN-9307.branch-3.1.001.patch
>
>
> when a yarn-service app is submitted with the configuration below, the 
> node_partitions constraint does not work.
> {code}
> …
>  "placement_policy": {
>"constraints": [
>  {
>"type": "ANTI_AFFINITY",
>"scope": "NODE",
>"target_tags": [
>  "ws"
>],
>"node_partitions": [
>  ""
>]
>  }
>]
>  }
> {code}






[jira] [Updated] (YARN-9307) node_partitions constraint does not work

2019-02-14 Thread kyungwan nam (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kyungwan nam updated YARN-9307:
---
Attachment: YARN-9307.branch-3.1.001.patch

> node_partitions constraint does not work
> 
>
> Key: YARN-9307
> URL: https://issues.apache.org/jira/browse/YARN-9307
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: kyungwan nam
>Priority: Major
> Attachments: YARN-9307.branch-3.1.001.patch
>
>
> when a yarn-service app is submitted with the configuration below, the 
> node_partitions constraint does not work.
> {code}
> …
>  "placement_policy": {
>"constraints": [
>  {
>"type": "ANTI_AFFINITY",
>"scope": "NODE",
>"target_tags": [
>  "ws"
>],
>"node_partitions": [
>  ""
>]
>  }
>]
>  }
> {code}






[jira] [Created] (YARN-9307) node_partitions constraint does not work

2019-02-14 Thread kyungwan nam (JIRA)
kyungwan nam created YARN-9307:
--

 Summary: node_partitions constraint does not work
 Key: YARN-9307
 URL: https://issues.apache.org/jira/browse/YARN-9307
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.1.1
Reporter: kyungwan nam


when a yarn-service app is submitted with the configuration below, the 
node_partitions constraint does not work.

{code}
…
 "placement_policy": {
   "constraints": [
 {
   "type": "ANTI_AFFINITY",
   "scope": "NODE",
   "target_tags": [
 "ws"
   ],
   "node_partitions": [
 ""
   ]
 }
   ]
 }
{code}







[jira] [Commented] (YARN-8927) Support trust top-level image like "centos" when "library" is configured in "docker.trusted.registries"

2019-02-14 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768795#comment-16768795
 ] 

Eric Yang commented on YARN-8927:
-

[~ebadger] Sure, I will remove the comment.

> Support trust top-level image like "centos" when "library" is configured in 
> "docker.trusted.registries"
> ---
>
> Key: YARN-8927
> URL: https://issues.apache.org/jira/browse/YARN-8927
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8927-trunk.001.patch, YARN-8927-trunk.002.patch
>
>
> There are some missing cases that we need to catch when handling 
> "docker.trusted.registries".
> The container-executor.cfg configuration is as follows:
> {code:java}
> docker.trusted.registries=tangzhankun,ubuntu,centos{code}
> It works if we run DistributedShell with "tangzhankun/tensorflow"
> {code:java}
> "yarn ... -shell_env YARN_CONTAINER_RUNTIME_TYPE=docker -shell_env 
> YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=tangzhankun/tensorflow
> {code}
> But running a DistributedShell job with "centos", "centos[:tagName]", "ubuntu" 
> and "ubuntu[:tagName]" fails:
> The error message is like:
> {code:java}
> "image: centos is not trusted"
> {code}
> We need to handle the above cases better.






[jira] [Commented] (YARN-8927) Support trust top-level image like "centos" when "library" is configured in "docker.trusted.registries"

2019-02-14 Thread Eric Badger (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768778#comment-16768778
 ] 

Eric Badger commented on YARN-8927:
---

I'm ok with this going in as is given the addition of YARN-9306.

{noformat}
+// image name doens't contains "/"
{noformat}
[~eyang], could you fix up this nit to {{doesn't contain}} before the 
commit? 

> Support trust top-level image like "centos" when "library" is configured in 
> "docker.trusted.registries"
> ---
>
> Key: YARN-8927
> URL: https://issues.apache.org/jira/browse/YARN-8927
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8927-trunk.001.patch, YARN-8927-trunk.002.patch
>
>
> There are some missing cases that we need to catch when handling 
> "docker.trusted.registries".
> The container-executor.cfg configuration is as follows:
> {code:java}
> docker.trusted.registries=tangzhankun,ubuntu,centos{code}
> It works if we run DistributedShell with "tangzhankun/tensorflow"
> {code:java}
> "yarn ... -shell_env YARN_CONTAINER_RUNTIME_TYPE=docker -shell_env 
> YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=tangzhankun/tensorflow
> {code}
> But running a DistributedShell job with "centos", "centos[:tagName]", "ubuntu" 
> and "ubuntu[:tagName]" fails:
> The error message is like:
> {code:java}
> "image: centos is not trusted"
> {code}
> We need to handle the above cases better.






[jira] [Commented] (YARN-6686) Support for adding and removing queue mappings

2019-02-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768766#comment-16768766
 ] 

Hadoop QA commented on YARN-6686:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  9s{color} 
| {color:red} YARN-6686 does not apply to YARN-5734. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6686 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875164/YARN-6686-YARN-5734.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/23410/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Support for adding and removing queue mappings
> --
>
> Key: YARN-6686
> URL: https://issues.apache.org/jira/browse/YARN-6686
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-6686-YARN-5734.001.patch
>
>
> Right now capacity scheduler uses UserGroupMappingPlacementRule to determine 
> queue mappings. This rule stores mappings in 
> {{yarn.scheduler.capacity.queue-mappings}}. For users with a large number of 
> mappings, adding or removing queue mappings becomes infeasible.
> Need to come up with a way to add/remove individual mappings, for any/all 
> different configured placement rules.






[jira] [Commented] (YARN-6686) Support for adding and removing queue mappings

2019-02-14 Thread Anthony Hsu (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768763#comment-16768763
 ] 

Anthony Hsu commented on YARN-6686:
---

Hi [~csexz],

I like your proposal of using  and . I have one suggestion. Instead 
of having separate  and , I think everything should be 
an  ( to me sounds like it's a read-only request, not 
an update request).
 * If  just contains text, then it's a complete replacement.
 * If  has  and  children, then it's an addition, removal, or 
update

||Operation|| Description/Notes||
|Addition|(empty)|u::,u::,...|Adds the mappings 
u::,u::,... |
|Removal|u::,u::,...|(empty)|Removes the mappings 
u::,u::,...|
|Update|u::,u::,...|u::,u::,...|Updates
 the mappings for users , , ... from , , ... to 
, , ...
 
The  and  should contain the same number of items.|

> Support for adding and removing queue mappings
> --
>
> Key: YARN-6686
> URL: https://issues.apache.org/jira/browse/YARN-6686
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-6686-YARN-5734.001.patch
>
>
> Right now capacity scheduler uses UserGroupMappingPlacementRule to determine 
> queue mappings. This rule stores mappings in 
> {{yarn.scheduler.capacity.queue-mappings}}. For users with a large number of 
> mappings, adding or removing queue mappings becomes infeasible.
> Need to come up with a way to add/remove individual mappings, for any/all 
> different configured placement rules.






[jira] [Comment Edited] (YARN-8927) Support trust top-level image like "centos" when "library" is configured in "docker.trusted.registries"

2019-02-14 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767790#comment-16767790
 ] 

Eric Yang edited comment on YARN-8927 at 2/14/19 10:11 PM:
---

[~ebadger] I think it's still an admin mistake, because the repository name can be 
preconfigured to a host in the local domain, which would have no chance to contact 
Docker Hub even if a repository is later set up to try to impersonate it.  YARN's 
trusted registry ACL can avoid untrusted Docker Hub repositories.  The discussion 
is digressing.  I agree that adding a local image white list can tighten 
security further for images without '/' characters.  This jira can't 
solve docker run pulling a remote image when the image is absent locally or the 
remote image name is identical to the local image name.  [~csingh] is solving the 
docker image localization issues in YARN-3854.  It may be better to handle the 
precheck of image existence in her story instead.


was (Author: eyang):
[~ebadger] I think it's still admin mistake because the repository name can be 
preconfigured to a host in local domain which would have no chance to contact 
docker hub even if a repository is later setup to try to impersonate.  YARN's 
trusted registry acl can avoid untrusted docker hub repository.  The discussion 
is digressing.  I agree that adding the local image white list can tighten 
security further for images without '/' characters or used.  This jira can't 
solve docker run pulling remote image when image is absent or remote image name 
is identical to local image name.  [~csingh] is solving the docker image 
localization issues in YARN-9228.  It may help to solve precheck of image 
existence in her story instead.

> Support trust top-level image like "centos" when "library" is configured in 
> "docker.trusted.registries"
> ---
>
> Key: YARN-8927
> URL: https://issues.apache.org/jira/browse/YARN-8927
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8927-trunk.001.patch, YARN-8927-trunk.002.patch
>
>
> There are some missing cases that we need to catch when handling 
> "docker.trusted.registries".
> The container-executor.cfg configuration is as follows:
> {code:java}
> docker.trusted.registries=tangzhankun,ubuntu,centos{code}
> It works if we run DistributedShell with "tangzhankun/tensorflow"
> {code:java}
> "yarn ... -shell_env YARN_CONTAINER_RUNTIME_TYPE=docker -shell_env 
> YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=tangzhankun/tensorflow
> {code}
> But running a DistributedShell job with "centos", "centos[:tagName]", "ubuntu" 
> and "ubuntu[:tagName]" fails:
> The error message is like:
> {code:java}
> "image: centos is not trusted"
> {code}
> We need to handle the above cases better.






[jira] [Commented] (YARN-8927) Support trust top-level image like "centos" when "library" is configured in "docker.trusted.registries"

2019-02-14 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768760#comment-16768760
 ] 

Eric Yang commented on YARN-8927:
-

Patch 2 uses '/' to determine whether the image is a top-level image.  It does not 
use the '/' character to detect a local image.  If the admin wants to authorize a 
local image, he/she can tag the local image with a trusted registry prefix.  As 
long as the trusted registry prefix does not have the same name as a Docker Hub 
registry name, authorized local images are safe to use.  If a local image is named 
without a '/' character, it is also allowed for now, until YARN-9306 is addressed.  
It would take admin rights to tag a local image without a '/' character.  Using 
the library keyword to trigger an unauthorized image to run would be hard to 
accomplish.  Patch 2 is good enough for me.  +1 for patch 2.  I will 
commit patch 2 if there is no objection.
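The trust rule described above can be sketched as follows (a hypothetical Python analogue of the container-executor check; the function and variable names are illustrative, not the actual C code):

```python
def is_trusted_image(image, trusted_registries):
    """Sketch of the trust check: a top-level image (no '/') is trusted
    only when the special 'library' entry is configured; otherwise the
    registry prefix before the first '/' must be in the trusted list."""
    if "/" not in image:
        # top-level image such as "centos" or "centos:7"
        return "library" in trusted_registries
    registry = image.split("/", 1)[0]
    return registry in trusted_registries

trusted = ["library", "tangzhankun"]
print(is_trusted_image("centos:7", trusted))                # True
print(is_trusted_image("tangzhankun/tensorflow", trusted))  # True
print(is_trusted_image("evil.example.com/tf", trusted))     # False
```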

> Support trust top-level image like "centos" when "library" is configured in 
> "docker.trusted.registries"
> ---
>
> Key: YARN-8927
> URL: https://issues.apache.org/jira/browse/YARN-8927
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8927-trunk.001.patch, YARN-8927-trunk.002.patch
>
>
> There are some missing cases that we need to catch when handling 
> "docker.trusted.registries".
> The container-executor.cfg configuration is as follows:
> {code:java}
> docker.trusted.registries=tangzhankun,ubuntu,centos{code}
> It works if run DistrubutedShell with "tangzhankun/tensorflow"
> {code:java}
> "yarn ... -shell_env YARN_CONTAINER_RUNTIME_TYPE=docker -shell_env 
> YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=tangzhankun/tensorflow
> {code}
> But run a DistrubutedShell job with "centos", "centos[:tagName]", "ubuntu" 
> and "ubuntu[:tagName]" fails:
> The error message is like:
> {code:java}
> "image: centos is not trusted"
> {code}
> We need better handling the above cases.






[jira] [Created] (YARN-9306) Detect docker image existence during container launch

2019-02-14 Thread Eric Yang (JIRA)
Eric Yang created YARN-9306:
---

 Summary: Detect docker image existence during container launch
 Key: YARN-9306
 URL: https://issues.apache.org/jira/browse/YARN-9306
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Eric Yang


It would be good to check the yarn.nodemanager.runtime.linux.docker.image-update 
flag.  When the flag is false and the docker image doesn't exist in the docker 
cache, the container launch should abort and retry on another node.
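The proposed behavior can be sketched like this (hypothetical Python; the flag comes from the description above, everything else is illustrative):

```python
def should_launch(image_update, image_in_local_cache):
    """Proposed decision: when automatic image updates are disabled and
    the image is absent from the local Docker cache, abort the launch so
    the scheduler can retry the container on another node."""
    if not image_update and not image_in_local_cache:
        return False  # abort; the caller reschedules elsewhere
    return True

print(should_launch(image_update=False, image_in_local_cache=False))  # False
print(should_launch(image_update=True, image_in_local_cache=False))   # True
```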






[jira] [Commented] (YARN-9266) Various fixes are needed in IntelFpgaOpenclPlugin

2019-02-14 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768648#comment-16768648
 ] 

Peter Bacsko commented on YARN-9266:


"Some of the refactors that you performed are not included in the description 
of the jira. Could you update it? I'm thinking of for example moving the parser 
to separate file and fixing checkstyles."
I added checkstyle. Moving the parser is mentioned, albeit not explicitly 
("parseDiagnoseInfo() is too heavyweight – it should be in its own class for 
better testability")

"Actually fixing checkstyles in general should be avoided as it's making the 
backports harder and making the git history dirtier. If you could static import 
the assertTrue/False/Equals functions, replace the 
assert.AssertTrue/False/Equals ones and ALSO fixing checkstyles, I can get away 
with that."
This is generally true, however, FPGA is a bit different IMO.
1. It's still considered beta, not really used by anyone
2. The code is relatively new
3. Changes are very isolated
4. It will likely be deprecated by the pluggable device framework (not sure 
when)

Having said that, I can revert the checkstyle changes. I'd rather keep it and 
focus on static importing the asserts though.

"Can we move the comments in function preStart to a javadoc?"
Let's do this in YARN-9267.

"Wildcard imports"
This might be another personal preference thing, but I just don't like them. I 
try to eliminate them every time I see one :D

AoclDiagnosticOutputParser.java --> mostly valid comments (I didn't touch the 
parsing logic on purpose, but these are small changes)

IntelFpgaOpenclPlugin.java
1. Another "*" to eliminate :)
2. "Do we need the conf Configuration object of AbstractFpgaVendorPlugin at 
all?" - It's needed in the implementation of {{initPlugin()}}. But 
{{setConf()}} / {{getConf()}} can go.
3. Exception object -> yep, let's log into the console
4. msg variable -> agree, unnecessary

I'll do these changes tomorrow.



> Various fixes are needed in IntelFpgaOpenclPlugin
> -
>
> Key: YARN-9266
> URL: https://issues.apache.org/jira/browse/YARN-9266
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9266-001.patch, YARN-9266-002.patch, 
> YARN-9266-003.patch, YARN-9266-004.patch
>
>
> Problems identified in this class:
>  * {{InnerShellExecutor}} ignores the timeout parameter
>  * {{configureIP()}} uses printStackTrace() instead of logging
>  * {{configureIP()}} does not log the output of aocl if the exit code != 0
>  * {{parseDiagnoseInfo()}} is too heavyweight – it should be in its own class 
> for better testability
>  * {{downloadIP()}} uses {{contains()}} for file name check – this can really 
> surprise users in some cases (eg. you want to use hello.aocx but hello2.aocx 
> also matches)
>  * method name {{downloadIP()}} is misleading – it actually tries to finds 
> the file. Everything is downloaded (localized) at this point.
>  * {{@VisibleForTesting}} methods should be package private
>  * {{aliasMap}} is not needed - store the acl number in the {{FpgaDevice}} 
> class
>  * checkstyle fixes
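The contains() pitfall from the list above is easy to demonstrate (a Python analogue for illustration; the real code is Java, and the function names here are hypothetical):

```python
def find_aocx_contains(files, wanted):
    # Substring matching, analogous to the downloadIP() check:
    # one name being a prefix of another yields surprise matches.
    return [f for f in files if wanted in f]

def find_aocx_exact(files, wanted):
    # Safer lookup: match the localized file name exactly.
    return [f for f in files if f == wanted]

files = ["hello.aocx", "hello2.aocx"]
print(find_aocx_contains(files, "hello"))    # ['hello.aocx', 'hello2.aocx']
print(find_aocx_exact(files, "hello.aocx"))  # ['hello.aocx']
```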






[jira] [Commented] (YARN-8295) [UI2] The "Resource Usage" tab is pointless for finished applications

2019-02-14 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768589#comment-16768589
 ] 

Sunil Govindan commented on YARN-8295:
--

+1

I will commit tomo if there are no objections.

> [UI2] The "Resource Usage" tab is pointless for finished applications
> -
>
> Key: YARN-8295
> URL: https://issues.apache.org/jira/browse/YARN-8295
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-ui-v2
>Reporter: Gergely Novák
>Assignee: Charan Hebri
>Priority: Minor
> Attachments: YARN-8295.001.patch
>
>
> If the user goes to Applications -> app -> Resource Usage for a finished 
> application, they get this message: "No resource usage data is available for 
> this application!". 
> I think it would be better to hide this tab for finished applications, or at 
> least add something like "this application is not using any resources because 
> it is finished" to the message.






[jira] [Commented] (YARN-9098) Separate mtab file reader code and cgroups file system hierarchy parser code from CGroupsHandlerImpl and ResourceHandlerModule

2019-02-14 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768590#comment-16768590
 ] 

Peter Bacsko commented on YARN-9098:


After some discussion with Szilard, now I can +1 this (non-binding).

> Separate mtab file reader code and cgroups file system hierarchy parser code 
> from CGroupsHandlerImpl and ResourceHandlerModule
> --
>
> Key: YARN-9098
> URL: https://issues.apache.org/jira/browse/YARN-9098
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-9098.002.patch, YARN-9098.003.patch, 
> YARN-9098.004.patch, YARN-9098.005.patch, YARN-9098.006.patch, 
> YARN-9098.007.patch
>
>
> Separate mtab file reader code and cgroups file system hierarchy parser code 
> from CGroupsHandlerImpl and ResourceHandlerModule
> CGroupsHandlerImpl has a method parseMtab that parses an mtab file and stores 
> cgroups data.
> CGroupsLCEResourcesHandler also has a method with the same name, with 
> identical code.
> The parser code should be extracted from these places and be added in a new 
> class as this is a separate responsibility.
> As the output of the file parser is a Map>, it's better 
> to encapsulate it in a domain object, named 'CGroupsMountConfig' for instance.
> ResourceHandlerModule has a method named parseConfiguredCGroupPath, that is 
> responsible for producing the same results (Map>) to 
> store cgroups data, it does not operate on mtab file, but looking at the 
> filesystem for cgroup settings. As the output is the same, CGroupsMountConfig 
> should be used here, too.
> Again, this code should not be part of ResourceHandlerModule, as it is a 
> different responsibility.
> One more thing which is strongly related to the methods above is 
> CGroupsHandlerImpl.initializeFromMountConfig: This method processes the 
> result of a parsed mtab file or a parsed cgroups filesystem data and stores 
> file system paths for all available controllers. This method invokes 
> findControllerPathInMountConfig, which is a duplicated in CGroupsHandlerImpl 
> and CGroupsLCEResourcesHandler, so it should be moved to a single place. To 
> store filesystem path and controller mappings, a new domain object could be 
> introduced.
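The shared parser the description calls for can be sketched roughly as follows (a Python sketch, assuming /proc/mounts-style mtab lines; the controller list and names are illustrative, not the Java implementation):

```python
def parse_cgroup_mtab(mtab_text):
    """Collect cgroup mount points keyed by controller name from
    '<device> <mountpoint> <fstype> <options> ...' lines, roughly
    what a parseMtab-style method produces."""
    mounts = {}
    for line in mtab_text.splitlines():
        fields = line.split()
        if len(fields) < 4 or fields[2] != "cgroup":
            continue
        path, options = fields[1], fields[3].split(",")
        for controller in ("cpu", "cpuacct", "memory", "blkio", "devices"):
            if controller in options:
                mounts.setdefault(controller, set()).add(path)
    return mounts

sample = (
    "sysfs /sys sysfs rw 0 0\n"
    "cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,cpu,cpuacct 0 0\n"
    "cgroup /sys/fs/cgroup/memory cgroup rw,memory 0 0\n"
)
print(sorted(parse_cgroup_mtab(sample)))  # ['cpu', 'cpuacct', 'memory']
```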






[jira] [Commented] (YARN-9295) [UI2] Fix label typo in Cluster Overview page

2019-02-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768577#comment-16768577
 ] 

Hudson commented on YARN-9295:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15959 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15959/])
YARN-9295. [UI2] Fix label typo in Cluster Overview page. Contributed by 
(bibinchundatt: rev b66d5ae9e26447113b146be0ffd81ed7d663a778)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/cluster-metric.js


> [UI2] Fix label typo in Cluster Overview page
> -
>
> Key: YARN-9295
> URL: https://issues.apache.org/jira/browse/YARN-9295
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Charan Hebri
>Assignee: Charan Hebri
>Priority: Trivial
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: Decommissioned-typo.png, YARN-9295.001.patch
>
>
> Change label text from 'Decomissioned' to 'Decommissioned' in Node Managers 
> section of the Cluster Overview page.






[jira] [Commented] (YARN-9293) Optimize MockAMLauncher event handling

2019-02-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768559#comment-16768559
 ] 

Hudson commented on YARN-9293:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15958 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15958/])
YARN-9293. Optimize MockAMLauncher event handling. Contributed by Bibin 
(bibinchundatt: rev 134ae8fc8045e2ae1ed7ca54df95f14ffc863d09)
* (edit) 
hadoop-tools/hadoop-sls/src/test/java/org/apache/hadoop/yarn/sls/appmaster/TestAMSimulator.java
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/appmaster/MRAMSimulator.java
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/appmaster/AMSimulator.java
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/appmaster/StreamAMSimulator.java
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/resourcemanager/MockAMLauncher.java
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java


> Optimize MockAMLauncher event handling
> --
>
> Key: YARN-9293
> URL: https://issues.apache.org/jira/browse/YARN-9293
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
>  Labels: simulator
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: YARN-9293-branch-3.1.003.patch, YARN-9293.001.patch, 
> YARN-9293.002.patch, YARN-9293.003.patch
>
>







[jira] [Updated] (YARN-9295) [UI2] Fix label typo in Cluster Overview page

2019-02-14 Thread Bibin A Chundatt (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-9295:
---
Summary: [UI2] Fix label typo in Cluster Overview page  (was: [UI2] Fix 
'Decomissioned' label typo in Cluster Overview page)

> [UI2] Fix label typo in Cluster Overview page
> -
>
> Key: YARN-9295
> URL: https://issues.apache.org/jira/browse/YARN-9295
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Charan Hebri
>Assignee: Charan Hebri
>Priority: Trivial
> Attachments: Decommissioned-typo.png, YARN-9295.001.patch
>
>
> Change label text from 'Decomissioned' to 'Decommissioned' in Node Managers 
> section of the Cluster Overview page.






[jira] [Commented] (YARN-9118) Handle issues with parsing user defined GPU devices in GpuDiscoverer

2019-02-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768548#comment-16768548
 ] 

Hadoop QA commented on YARN-9118:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 3 new + 10 unchanged - 5 fixed = 13 total (was 15) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 
43s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9118 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958741/YARN-9118.009.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9743ea1aeee9 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 080a421 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/23409/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23409/testReport/ |
| Max. process+thread count | 306 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 

[jira] [Commented] (YARN-9098) Separate mtab file reader code and cgroups file system hierarchy parser code from CGroupsHandlerImpl and ResourceHandlerModule

2019-02-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768515#comment-16768515
 ] 

Hadoop QA commented on YARN-9098:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
20s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9098 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958739/YARN-9098.007.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 30b0d747527d 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7a57974 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23408/testReport/ |
| Max. process+thread count | 445 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/23408/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Separate mtab file reader code and cgroups file system 

[jira] [Commented] (YARN-9213) RM Web UI v1 does not show custom resource allocations for containers page

2019-02-14 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768503#comment-16768503
 ] 

Sunil Govindan commented on YARN-9213:
--

Thanks [~snemeth]

This looks good to go now. Will commit tomorrow if no other objections.

> RM Web UI v1 does not show custom resource allocations for containers page
> --
>
> Key: YARN-9213
> URL: https://issues.apache.org/jira/browse/YARN-9213
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: Screen Shot 2019-02-08 at 21.16.37-before.png, Screen 
> Shot 2019-02-09 at 9.55.16-after.png, YARN-9213.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9123) Clean up and split testcases in TestNMWebServices for GPU support

2019-02-14 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768506#comment-16768506
 ] 

Adam Antal commented on YARN-9123:
--

I missed the TestNMWebServices failure reported by Jenkins, but with the int 
cast it's OK. Alternatively, you can use {{json.getInt("a")}} instead of 
{{json.get("a")}}.
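To illustrate the difference outside the test code (a minimal sketch using a plain {{Map}} rather than the JSON library actually used by TestNMWebServices, which is an assumption here): a raw {{get}} returns an untyped {{Object}}, so a blind {{(Integer)}} cast fails when the parser stored the number as a {{Long}}, while a {{getInt}}-style accessor converts through {{Number}}:

```java
import java.util.HashMap;
import java.util.Map;

public class Main {
    // getInt-style accessor: converts any numeric value instead of casting it
    static int getInt(Map<String, Object> json, String key) {
        return ((Number) json.get(key)).intValue();
    }

    public static void main(String[] args) {
        Map<String, Object> json = new HashMap<>();
        json.put("a", 3L); // parsers often store integer values as Long

        try {
            int bad = (Integer) json.get("a"); // raw get + cast
            System.out.println("cast ok: " + bad);
        } catch (ClassCastException e) {
            System.out.println("cast failed: Long is not Integer");
        }

        // the accessor converts via Number.intValue() and succeeds
        System.out.println("getInt: " + getInt(json, "a"));
    }
}
```

This is why a {{getInt}}-style call is the safer pattern in the assertions.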

> Clean up and split testcases in TestNMWebServices for GPU support
> -
>
> Key: YARN-9123
> URL: https://issues.apache.org/jira/browse/YARN-9123
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-9123.001.patch, YARN-9123.002.patch, 
> YARN-9123.003.patch, YARN-9123.004.patch, YARN-9123.005.patch, 
> YARN-9123.006.patch
>
>
> The following testcases can be cleaned up a bit: 
> TestNMWebServices#testGetNMResourceInfo - Can be split up to 3 different cases
> TestNMWebServices#testGetYarnGpuResourceInfo






[jira] [Commented] (YARN-9139) Simplify initializer code of GpuDiscoverer

2019-02-14 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768504#comment-16768504
 ] 

Sunil Govindan commented on YARN-9139:
--

cc [~tangzhankun] could you please take a look

> Simplify initializer code of GpuDiscoverer
> --
>
> Key: YARN-9139
> URL: https://issues.apache.org/jira/browse/YARN-9139
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-9139.001.patch, YARN-9139.002.patch, 
> YARN-9139.003.patch, YARN-9139.004.patch
>
>







[jira] [Commented] (YARN-8927) Support trust top-level image like "centos" when "library" is configured in "docker.trusted.registries"

2019-02-14 Thread Eric Badger (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768482#comment-16768482
 ] 

Eric Badger commented on YARN-8927:
---

{quote}
Chandni Singh is solving the docker image localization issues in YARN-9228. It 
may help to solve precheck of image existence in her story instead.
{quote}
I'm fine with moving this to another JIRA. I just don't want to preclude an 
environment where only local images are allowed. And doing the determination of 
whether an image is local or not based on the existence of a "/" character 
doesn't do that, since local images are perfectly allowed to contain the "/" 
character in their tag. 

I don't want to hold up this feature either, however. So maybe it's best to 
diverge into 2 different paths here. Keep this JIRA alive to deal only with the 
library keyword and have the library keyword only associated with dockerhub 
images. Then in another JIRA add a different keyword for local images. Because 
using the library keyword for local images in this state would not work out. I 
really don't like the idea of another keyword, since I hate making arbitrary 
special keywords, but I don't see another way around the issue here. 
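The ambiguity can be sketched with a hypothetical slash-based check (illustrative only, not the actual container-executor logic): an image without "/" has no registry prefix to validate, which reproduces the "centos is not trusted" failure, and a purely local image whose name contains "/" looks exactly like a registry/repository pair:

```java
import java.util.Arrays;
import java.util.List;

public class Main {
    // Hypothetical check: an image without "/" has no registry prefix, so it
    // cannot be matched against the trusted list; an image with "/" is checked
    // by its prefix, even if it is actually a local image.
    static boolean isTrusted(String image, List<String> trustedRegistries) {
        int slash = image.indexOf('/');
        if (slash < 0) {
            return false; // top-level image like "centos": nothing to match
        }
        return trustedRegistries.contains(image.substring(0, slash));
    }

    public static void main(String[] args) {
        List<String> trusted = Arrays.asList("tangzhankun", "ubuntu", "centos");
        System.out.println("tangzhankun/tensorflow -> "
            + isTrusted("tangzhankun/tensorflow", trusted));
        // the reported bug: the top-level image is rejected
        System.out.println("centos -> " + isTrusted("centos", trusted));
        // a local image containing "/" is indistinguishable from a remote one
        System.out.println("mylocal/image -> "
            + isTrusted("mylocal/image", trusted));
    }
}
```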

> Support trust top-level image like "centos" when "library" is configured in 
> "docker.trusted.registries"
> ---
>
> Key: YARN-8927
> URL: https://issues.apache.org/jira/browse/YARN-8927
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8927-trunk.001.patch, YARN-8927-trunk.002.patch
>
>
> There are some missing cases that we need to catch when handling 
> "docker.trusted.registries".
> The container-executor.cfg configuration is as follows:
> {code:java}
> docker.trusted.registries=tangzhankun,ubuntu,centos{code}
> It works when running DistributedShell with "tangzhankun/tensorflow":
> {code:java}
> "yarn ... -shell_env YARN_CONTAINER_RUNTIME_TYPE=docker -shell_env 
> YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=tangzhankun/tensorflow
> {code}
> But running a DistributedShell job with "centos", "centos[:tagName]", "ubuntu" 
> and "ubuntu[:tagName]" fails:
> The error message is like:
> {code:java}
> "image: centos is not trusted"
> {code}
> We need to handle the above cases better.






[jira] [Comment Edited] (YARN-9098) Separate mtab file reader code and cgroups file system hierarchy parser code from CGroupsHandlerImpl and ResourceHandlerModule

2019-02-14 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768462#comment-16768462
 ] 

Szilard Nemeth edited comment on YARN-9098 at 2/14/19 4:12 PM:
---

Hi [~pbacsko]!
As the mappings contain the path as key and the controllers as values, if 
cpu,cpuacct is a directory under /sys/fs/cgroup, the mapping will contain the 
following:
1. cpu --> /sys/fs/cgroup/cpu,cpuacct
2. cpuacct --> /sys/fs/cgroup/cpu,cpuacct
So essentially, cpu and cpuacct will point to the same path. The code that you 
pasted just checks whether the controller is contained in the value list. For 
the above example, if the method is invoked with either 'cpu' or 'cpuacct', 
contains will return the same path, so I think the code is correct.
I added a more elaborate javadoc comment to the file.


was (Author: snemeth):
As the mappings contain the path as key and the controllers as values, if 
cpu,cpuacct is a directory under /sys/fs/cgroup, the mapping will contain the 
following:
1. cpu --> /sys/fs/cgroup/cpu,cpuacct
2. cpuacct --> /sys/fs/cgroup/cpu,cpuacct
So essentially, cpu and cpuacct will point to the same path. The code that you 
pasted just checks whether the controller is contained in the value list. For 
the above example, if the method is invoked with either 'cpu' or 'cpuacct', 
contains will return the same path, so I think the code is correct.
I added a more elaborate javadoc comment to the file.

> Separate mtab file reader code and cgroups file system hierarchy parser code 
> from CGroupsHandlerImpl and ResourceHandlerModule
> --
>
> Key: YARN-9098
> URL: https://issues.apache.org/jira/browse/YARN-9098
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-9098.002.patch, YARN-9098.003.patch, 
> YARN-9098.004.patch, YARN-9098.005.patch, YARN-9098.006.patch, 
> YARN-9098.007.patch
>
>
> Separate mtab file reader code and cgroups file system hierarchy parser code 
> from CGroupsHandlerImpl and ResourceHandlerModule
> CGroupsHandlerImpl has a method parseMtab that parses an mtab file and stores 
> cgroups data.
> CGroupsLCEResourcesHandler also has a method with the same name, with 
> identical code.
> The parser code should be extracted from these places and be added in a new 
> class as this is a separate responsibility.
> As the output of the file parser is a Map>, it's better 
> to encapsulate it in a domain object, named 'CGroupsMountConfig' for instance.
> ResourceHandlerModule has a method named parseConfiguredCGroupPath, that is 
> responsible for producing the same results (Map>) to 
> store cgroups data, it does not operate on mtab file, but looking at the 
> filesystem for cgroup settings. As the output is the same, CGroupsMountConfig 
> should be used here, too.
> Again, this code should not be part of ResourceHandlerModule as it is a 
> different responsibility.
> One more thing which is strongly related to the methods above is 
> CGroupsHandlerImpl.initializeFromMountConfig: This method processes the 
> result of a parsed mtab file or a parsed cgroups filesystem data and stores 
> file system paths for all available controllers. This method invokes 
> findControllerPathInMountConfig, which is duplicated in CGroupsHandlerImpl 
> and CGroupsLCEResourcesHandler, so it should be moved to a single place. To 
> store filesystem path and controller mappings, a new domain object could be 
> introduced.






[jira] [Commented] (YARN-9118) Handle issues with parsing user defined GPU devices in GpuDiscoverer

2019-02-14 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768468#comment-16768468
 ] 

Szilard Nemeth commented on YARN-9118:
--

Hi [~sunilg]!
Added the package info to the latest patch.

> Handle issues with parsing user defined GPU devices in GpuDiscoverer
> 
>
> Key: YARN-9118
> URL: https://issues.apache.org/jira/browse/YARN-9118
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-9118.001.patch, YARN-9118.002.patch, 
> YARN-9118.003.patch, YARN-9118.004.patch, YARN-9118.005.patch, 
> YARN-9118.006.patch, YARN-9118.007.patch, YARN-9118.008.patch, 
> YARN-9118.009.patch
>
>
> getGpusUsableByYarn has the following issues: 
> - Duplicate GPU device definitions are not denied: This seems to be the 
> biggest issue as it could increase the number of devices on the node if the 
> device ID is defined 2 or more times.
> - An empty string is accepted; it behaves as if the user did not want to use 
> auto-discovery and had not defined any GPU devices. This will result in an 
> empty device list, but the empty-string check is never explicit in the code, 
> so this behavior is just coincidental.
> - Number validation is not performed on GPU device IDs (separated by commas).
> Many testcases are added as the coverage was already very low.
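A validating parser covering the three cases above could look like this (an illustrative sketch using plain integer device IDs; the actual GpuDiscoverer configuration format and method names differ):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class Main {
    // Illustrative parser enforcing the three rules from the description:
    // explicit empty-string check, number validation, and duplicate denial.
    static List<Integer> parseGpuDevices(String value) {
        List<Integer> devices = new ArrayList<>();
        if (value == null || value.trim().isEmpty()) {
            return devices; // explicit empty-string check: no devices defined
        }
        Set<Integer> seen = new LinkedHashSet<>();
        for (String part : value.split(",")) {
            int id;
            try {
                id = Integer.parseInt(part.trim()); // number validation
            } catch (NumberFormatException e) {
                throw new IllegalArgumentException("Not a number: " + part);
            }
            if (!seen.add(id)) { // duplicate definitions are denied
                throw new IllegalArgumentException("Duplicate device: " + id);
            }
        }
        devices.addAll(seen);
        return devices;
    }

    public static void main(String[] args) {
        System.out.println("empty -> " + parseGpuDevices(""));
        System.out.println("ok -> " + parseGpuDevices("0,1,2"));
        try {
            parseGpuDevices("0,0");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```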






[jira] [Updated] (YARN-9118) Handle issues with parsing user defined GPU devices in GpuDiscoverer

2019-02-14 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-9118:
-
Attachment: YARN-9118.009.patch

> Handle issues with parsing user defined GPU devices in GpuDiscoverer
> 
>
> Key: YARN-9118
> URL: https://issues.apache.org/jira/browse/YARN-9118
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-9118.001.patch, YARN-9118.002.patch, 
> YARN-9118.003.patch, YARN-9118.004.patch, YARN-9118.005.patch, 
> YARN-9118.006.patch, YARN-9118.007.patch, YARN-9118.008.patch, 
> YARN-9118.009.patch
>
>
> getGpusUsableByYarn has the following issues: 
> - Duplicate GPU device definitions are not denied: This seems to be the 
> biggest issue as it could increase the number of devices on the node if the 
> device ID is defined 2 or more times.
> - An empty string is accepted; it behaves as if the user did not want to use 
> auto-discovery and had not defined any GPU devices. This will result in an 
> empty device list, but the empty-string check is never explicit in the code, 
> so this behavior is just coincidental.
> - Number validation is not performed on GPU device IDs (separated by commas).
> Many testcases are added as the coverage was already very low.






[jira] [Commented] (YARN-9235) If linux container executor is not set for a GPU cluster GpuResourceHandlerImpl is not initialized and NPE is thrown

2019-02-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768463#comment-16768463
 ] 

Hadoop QA commented on YARN-9235:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 0 new + 2 unchanged - 2 fixed = 2 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 
47s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9235 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958728/YARN-9235.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2ff208401cac 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7a57974 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23407/testReport/ |
| Max. process+thread count | 306 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 

[jira] [Commented] (YARN-9123) Clean up and split testcases in TestNMWebServices for GPU support

2019-02-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768459#comment-16768459
 ] 

Hadoop QA commented on YARN-9123:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 5 unchanged - 1 fixed = 6 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 
53s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9123 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958729/YARN-9123.006.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9fba4d5955e4 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7a57974 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/23406/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23406/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 

[jira] [Commented] (YARN-9098) Separate mtab file reader code and cgroups file system hierarchy parser code from CGroupsHandlerImpl and ResourceHandlerModule

2019-02-14 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768462#comment-16768462
 ] 

Szilard Nemeth commented on YARN-9098:
--

As the mappings contain the path as key and the controllers as values, if 
cpu,cpuacct is a directory under /sys/fs/cgroup, the mapping will contain the 
following:
1. cpu --> /sys/fs/cgroup/cpu,cpuacct
2. cpuacct --> /sys/fs/cgroup/cpu,cpuacct
So essentially, cpu and cpuacct will point to the same path. The code that you 
pasted just checks whether the controller is contained in the value list. For 
the above example, if the method is invoked with either 'cpu' or 'cpuacct', 
contains will return the same path, so I think the code is correct.
I added a more elaborate javadoc comment to the file.
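The lookup described above can be sketched as follows (an illustrative snippet; the method and variable names are assumptions, not the actual YARN code). The map key is the mount path and the value is the set of controllers co-mounted there, so both 'cpu' and 'cpuacct' resolve to the same path:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class Main {
    // Return the mount path whose co-mounted controller set contains the
    // requested controller, or null if no mount provides it.
    static String findControllerPath(Map<String, Set<String>> mounts,
                                     String controller) {
        for (Map.Entry<String, Set<String>> e : mounts.entrySet()) {
            if (e.getValue().contains(controller)) {
                return e.getKey();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> mounts = new HashMap<>();
        // cpu and cpuacct are co-mounted under a single directory
        mounts.put("/sys/fs/cgroup/cpu,cpuacct",
            new HashSet<>(Arrays.asList("cpu", "cpuacct")));

        System.out.println("cpu -> " + findControllerPath(mounts, "cpu"));
        System.out.println("cpuacct -> "
            + findControllerPath(mounts, "cpuacct"));
    }
}
```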

> Separate mtab file reader code and cgroups file system hierarchy parser code 
> from CGroupsHandlerImpl and ResourceHandlerModule
> --
>
> Key: YARN-9098
> URL: https://issues.apache.org/jira/browse/YARN-9098
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-9098.002.patch, YARN-9098.003.patch, 
> YARN-9098.004.patch, YARN-9098.005.patch, YARN-9098.006.patch, 
> YARN-9098.007.patch
>
>
> Separate mtab file reader code and cgroups file system hierarchy parser code 
> from CGroupsHandlerImpl and ResourceHandlerModule
> CGroupsHandlerImpl has a method parseMtab that parses an mtab file and stores 
> cgroups data.
> CGroupsLCEResourcesHandler also has a method with the same name, with 
> identical code.
> The parser code should be extracted from these places and be added in a new 
> class as this is a separate responsibility.
> As the output of the file parser is a Map<String, Set<String>>, it's better 
> to encapsulate it in a domain object, named 'CGroupsMountConfig' for instance.
> ResourceHandlerModule has a method named parseConfiguredCGroupPath, that is 
> responsible for producing the same results (Map<String, Set<String>>) to 
> store cgroups data, it does not operate on mtab file, but looking at the 
> filesystem for cgroup settings. As the output is the same, CGroupsMountConfig 
> should be used here, too.
> Again, this code should not be part of ResourceHandlerModule as it is a 
> different responsibility.
> One more thing which is strongly related to the methods above is 
> CGroupsHandlerImpl.initializeFromMountConfig: This method processes the 
> result of a parsed mtab file or a parsed cgroups filesystem data and stores 
> file system paths for all available controllers. This method invokes 
> findControllerPathInMountConfig, which is duplicated in CGroupsHandlerImpl 
> and CGroupsLCEResourcesHandler, so it should be moved to a single place. To 
> store filesystem path and controller mappings, a new domain object could be 
> introduced.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9098) Separate mtab file reader code and cgroups file system hierarchy parser code from CGroupsHandlerImpl and ResourceHandlerModule

2019-02-14 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-9098:
-
Attachment: YARN-9098.007.patch

> Separate mtab file reader code and cgroups file system hierarchy parser code 
> from CGroupsHandlerImpl and ResourceHandlerModule
> --
>
> Key: YARN-9098
> URL: https://issues.apache.org/jira/browse/YARN-9098
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-9098.002.patch, YARN-9098.003.patch, 
> YARN-9098.004.patch, YARN-9098.005.patch, YARN-9098.006.patch, 
> YARN-9098.007.patch
>
>
> Separate mtab file reader code and cgroups file system hierarchy parser code 
> from CGroupsHandlerImpl and ResourceHandlerModule
> CGroupsHandlerImpl has a method parseMtab that parses an mtab file and stores 
> cgroups data.
> CGroupsLCEResourcesHandler also has a method with the same name, with 
> identical code.
> The parser code should be extracted from these places and be added in a new 
> class as this is a separate responsibility.
> As the output of the file parser is a Map<String, Set<String>>, it's better 
> to encapsulate it in a domain object, named 'CGroupsMountConfig' for instance.
> ResourceHandlerModule has a method named parseConfiguredCGroupPath, that is 
> responsible for producing the same results (Map<String, Set<String>>) to 
> store cgroups data, it does not operate on mtab file, but looking at the 
> filesystem for cgroup settings. As the output is the same, CGroupsMountConfig 
> should be used here, too.
> Again, this code should not be part of ResourceHandlerModule as it is a 
> different responsibility.
> One more thing which is strongly related to the methods above is 
> CGroupsHandlerImpl.initializeFromMountConfig: This method processes the 
> result of a parsed mtab file or a parsed cgroups filesystem data and stores 
> file system paths for all available controllers. This method invokes 
> findControllerPathInMountConfig, which is duplicated in CGroupsHandlerImpl 
> and CGroupsLCEResourcesHandler, so it should be moved to a single place. To 
> store filesystem path and controller mappings, a new domain object could be 
> introduced.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7824) [UI2] Yarn Component Instance page should include link to container logs

2019-02-14 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768428#comment-16768428
 ] 

Sunil Govindan commented on YARN-7824:
--

Thanks [~akhilpb], please share branch-3.2/3.1/3.0 patches as the current one is not applying.

> [UI2] Yarn Component Instance page should include link to container logs
> 
>
> Key: YARN-7824
> URL: https://issues.apache.org/jira/browse/YARN-7824
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-ui-v2
>Affects Versions: 3.0.0
>Reporter: Yesha Vora
>Assignee: Akhil PB
>Priority: Major
> Attachments: YARN-7824.001.patch
>
>
> Steps:
> 1) Launch Httpd example
> 2) Visit component Instance page for httpd-proxy-0
> This page has information regarding httpd-proxy-0 component.
> This page should also include a link to container logs for this component



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7761) [UI2] Clicking 'master container log' or 'Link' next to 'log' under application's appAttempt goes to Old UI's Log link

2019-02-14 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-7761:
-
Fix Version/s: 3.1.3
   3.2.1

> [UI2] Clicking 'master container log' or 'Link' next to 'log' under 
> application's appAttempt goes to Old UI's Log link
> --
>
> Key: YARN-7761
> URL: https://issues.apache.org/jira/browse/YARN-7761
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sumana Sathish
>Assignee: Akhil PB
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: YARN-7761-branch-3.2.001.patch, YARN-7761.001.patch, 
> YARN-7761.002.patch, YARN-7761.003.patch
>
>
> Clicking 'master container log' or 'Link' next to 'Log' under application's 
> appAttempt goes to Old UI's Log link



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9118) Handle issues with parsing user defined GPU devices in GpuDiscoverer

2019-02-14 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768443#comment-16768443
 ] 

Sunil Govindan commented on YARN-9118:
--

Thanks [~snemeth]

Could you please add the missing package-info.java file.

Also, please check if the other checkstyle issues can be fixed as well. I am not a 
big fan of SuppressWarnings, so if this length can't be fixed, you can leave it 
like that.

 

 

> Handle issues with parsing user defined GPU devices in GpuDiscoverer
> 
>
> Key: YARN-9118
> URL: https://issues.apache.org/jira/browse/YARN-9118
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-9118.001.patch, YARN-9118.002.patch, 
> YARN-9118.003.patch, YARN-9118.004.patch, YARN-9118.005.patch, 
> YARN-9118.006.patch, YARN-9118.007.patch, YARN-9118.008.patch
>
>
> getGpusUsableByYarn has the following issues: 
> - Duplicate GPU device definitions are not denied: This seems to be the 
> biggest issue as it could increase the number of devices on the node if the 
> device ID is defined 2 or more times.
> - An empty string is accepted; it behaves as if the user did not want to use 
> auto-discovery and had not defined any GPU devices: this will result in an 
> empty device list, but the empty-string check is never explicitly there in 
> the code, so this behavior is just coincidental.
> - Number validation does not happen on GPU device IDs (separated by commas)
> Many testcases are added as the coverage was already very low.
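A strict parser handling the three cases listed above (duplicate IDs, empty string, non-numeric IDs) could be sketched like this; the class name, method name, and exception choices are hypothetical, not the actual patch:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class GpuDeviceListParserDemo {
    // Hypothetical strict parser for a comma-separated GPU device ID list
    static List<Integer> parseAllowedDevices(String value) {
        List<Integer> result = new ArrayList<>();
        if (value == null || value.trim().isEmpty()) {
            // Empty string: treated explicitly as "no manually defined devices"
            return result;
        }
        Set<Integer> seen = new LinkedHashSet<>();
        for (String part : value.split(",")) {
            final int id;
            try {
                id = Integer.parseInt(part.trim()); // reject non-numeric IDs
            } catch (NumberFormatException e) {
                throw new IllegalArgumentException("Not a number: " + part);
            }
            if (!seen.add(id)) { // deny duplicate device definitions
                throw new IllegalArgumentException("Duplicate device ID: " + id);
            }
        }
        result.addAll(seen);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(parseAllowedDevices("0,1,2"));
    }
}
```

The key point is that each validation is explicit rather than coincidental, so a duplicate ID can no longer inflate the device count on the node.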



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9235) If linux container executor is not set for a GPU cluster GpuResourceHandlerImpl is not initialized and NPE is thrown

2019-02-14 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768438#comment-16768438
 ] 

Sunil Govindan commented on YARN-9235:
--

Yes. I agree to [~pbacsko]

Please help to add a test case here. Other than that, the approach seems fine.

> If linux container executor is not set for a GPU cluster 
> GpuResourceHandlerImpl is not initialized and NPE is thrown
> 
>
> Key: YARN-9235
> URL: https://issues.apache.org/jira/browse/YARN-9235
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Antal Bálint Steinbach
>Assignee: Antal Bálint Steinbach
>Priority: Major
> Attachments: YARN-9235.001.patch
>
>
> If GPU plugin is enabled for the NodeManager, it is possible to run jobs with 
> GPU.
> However, if LinuxContainerExecutor is not configured, an NPE is thrown when 
> calling 
> {code:java}
> GpuResourcePlugin.getNMResourceInfo{code}
> Also, there are no warns in the log if GPU is misconfigured like this. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9266) Various fixes are needed in IntelFpgaOpenclPlugin

2019-02-14 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768437#comment-16768437
 ] 

Peter Bacsko commented on YARN-9266:


Thanks [~adam.antal] I'll go through your suggestions one-by-one and I'll make 
the modifications if necessary.

> Various fixes are needed in IntelFpgaOpenclPlugin
> -
>
> Key: YARN-9266
> URL: https://issues.apache.org/jira/browse/YARN-9266
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9266-001.patch, YARN-9266-002.patch, 
> YARN-9266-003.patch, YARN-9266-004.patch
>
>
> Problems identified in this class:
>  * {{InnerShellExecutor}} ignores the timeout parameter
>  * {{configureIP()}} uses printStackTrace() instead of logging
>  * {{configureIP()}} does not log the output of aocl if the exit code != 0
>  * {{parseDiagnoseInfo()}} is too heavyweight – it should be in its own class 
> for better testability
>  * {{downloadIP()}} uses {{contains()}} for file name check – this can really 
> surprise users in some cases (eg. you want to use hello.aocx but hello2.aocx 
> also matches)
>  * method name {{downloadIP()}} is misleading – it actually tries to find 
> the file. Everything is downloaded (localized) at this point.
>  * {{@VisibleForTesting}} methods should be package private
>  * {{aliasMap}} is not needed - store the acl number in the {{FpgaDevice}} 
> class
>  * checkstyle fixes



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9270) Minor cleanup in TestFpgaDiscoverer

2019-02-14 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768391#comment-16768391
 ] 

Adam Antal commented on YARN-9270:
--

Apart from that one nasty remaining checkstyle error, it's +1 from me (non-binding).

> Minor cleanup in TestFpgaDiscoverer
> ---
>
> Key: YARN-9270
> URL: https://issues.apache.org/jira/browse/YARN-9270
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9270-001.patch, YARN-9270-002.patch, 
> YARN-9270-003.patch
>
>
> Let's do some cleanup in this class.
> * {{testLinuxFpgaResourceDiscoverPluginConfig}} - this test should be split 
> up to 5 different tests, because it tests 5 different scenarios.
> * remove {{setNewEnvironmentHack()}} - too complicated. We can introduce a 
> {{Function}} in the plugin class like {{Function<String, String> envProvider 
> = System::getenv}} plus a setter method which allows the test to modify 
> {{envProvider}}. Much simpler and straightforward.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9266) Various fixes are needed in IntelFpgaOpenclPlugin

2019-02-14 Thread Peter Bacsko (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-9266:
---
Description: 
Problems identified in this class:
 * {{InnerShellExecutor}} ignores the timeout parameter
 * {{configureIP()}} uses printStackTrace() instead of logging
 * {{configureIP()}} does not log the output of aocl if the exit code != 0
 * {{parseDiagnoseInfo()}} is too heavyweight – it should be in its own class 
for better testability
 * {{downloadIP()}} uses {{contains()}} for file name check – this can really 
surprise users in some cases (eg. you want to use hello.aocx but hello2.aocx 
also matches)
 * method name {{downloadIP()}} is misleading – it actually tries to find the 
file. Everything is downloaded (localized) at this point.
 * {{@VisibleForTesting}} methods should be package private
 * {{aliasMap}} is not needed - store the acl number in the {{FpgaDevice}} class
 * checkstyle fixes

  was:
Problems identified in this class:
 * {{InnerShellExecutor}} ignores the timeout parameter
 * {{configureIP()}} uses printStackTrace() instead of logging
 * {{configureIP()}} does not log the output of aocl if the exit code != 0
 * {{parseDiagnoseInfo()}} is too heavyweight – it should be in its own class 
for better testability
 * {{downloadIP()}} uses {{contains()}} for file name check – this can really 
surprise users in some cases (eg. you want to use hello.aocx but hello2.aocx 
also matches)
 * method name {{downloadIP()}} is misleading – it actually tries to find the 
file. Everything is downloaded (localized) at this point.
 * {{@VisibleForTesting}} methods should be package private
 * {{aliasMap}} is not needed - store the acl number in the {{FpgaDevice}} class


> Various fixes are needed in IntelFpgaOpenclPlugin
> -
>
> Key: YARN-9266
> URL: https://issues.apache.org/jira/browse/YARN-9266
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9266-001.patch, YARN-9266-002.patch, 
> YARN-9266-003.patch, YARN-9266-004.patch
>
>
> Problems identified in this class:
>  * {{InnerShellExecutor}} ignores the timeout parameter
>  * {{configureIP()}} uses printStackTrace() instead of logging
>  * {{configureIP()}} does not log the output of aocl if the exit code != 0
>  * {{parseDiagnoseInfo()}} is too heavyweight – it should be in its own class 
> for better testability
>  * {{downloadIP()}} uses {{contains()}} for file name check – this can really 
> surprise users in some cases (eg. you want to use hello.aocx but hello2.aocx 
> also matches)
>  * method name {{downloadIP()}} is misleading – it actually tries to find 
> the file. Everything is downloaded (localized) at this point.
>  * {{@VisibleForTesting}} methods should be package private
>  * {{aliasMap}} is not needed - store the acl number in the {{FpgaDevice}} 
> class
>  * checkstyle fixes



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9304) Improve various tests in YARN

2019-02-14 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768382#comment-16768382
 ] 

Adam Antal commented on YARN-9304:
--

Some logging will probably be needed in the not-yet-created 
AoclDiagnosticOutputParser.java after YARN-9266 has been committed.

> Improve various tests in YARN
> -
>
> Key: YARN-9304
> URL: https://issues.apache.org/jira/browse/YARN-9304
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.2.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
>
> There are some recent refactors in yarn, especially in the GPU/FPGA field 
> (YARN-9087, YARN-9134, YARN-9133, YARN-9123, YARN-9100). The code 
> quality and the lack of logging make it harder to fix tests. This issue aims 
> to add further logging to the testcases and do some minor cleanups, similarly 
> to those example jiras. 
> (Further subtasks can be added later to separate the test classes.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9266) Various fixes are needed in IntelFpgaOpenclPlugin

2019-02-14 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768375#comment-16768375
 ] 

Adam Antal commented on YARN-9266:
--

Thanks for the patch, [~pbacsko]! It looks good overall, and I hope this can be 
committed soon as other issues are depending on it.

{panel:title=General comments}
- Some of the refactors that you performed are not included in the description 
of the jira. Could you update it? I'm thinking of, for example, moving the parser 
to a separate file and fixing checkstyle issues.
- Actually, fixing checkstyle issues in general should be avoided as it makes 
backports harder and the git history dirtier. But if you static import 
the assertTrue/False/Equals functions, replace the 
Assert.assertTrue/False/Equals ones and ALSO fix the checkstyle issues, I can 
live with that.
{panel}

{panel:title=FpgaResourceHandlerImpl.java}
- Can we move the comments in function preStart to a javadoc?
{panel}

{panel:title=TestFpgaResourceHandler.java}
- Wildcard imports:
-- {{java.util.*}} -> only used List, ArrayList, Map, HashMap.
-- From {{org.mockito.Mockito}} 10 different members are used: mock, when, verify, 
times, anyString, anyMap, anyList, never, eq, atLeastOnce. As I see it in other 
files, it is acceptable to use the wildcard character here.
-- From {{org.apache.hadoop.yarn.api.records}} we use 6 classes. It is 
acceptable to have a wildcard character here as well, but separate imports 
would look nicer. (Classes: ContainerId, ResourceInformation, Resource, 
ContainerLaunchContext, ApplicationAttemptId, ApplicationId)
- There's an extra space before import {{java.io.IOException}}
- In this file we could also static import assert.
{panel}

{panel:title=AoclDiagnosticOutputParser.java}
- In line 105, can we swap {{null}} and {{section}} in the following condition:
{code:java}
if (section == null)
{code}
- Can we make parseDiagnosticOutput function static? Thus we can avoid 
constructing an empty, stateless parser object each time we want to parse the 
output of the diagnostic.
- For sake of completeness, I suggest adding some logging there. In case 
something fails, we can get some partial look into the parser by the logging 
(I'm thinking of situations like we could parse 2 parts of the diagnostic 
output and failing at the 3rd). It can make debugging and fixing the parser 
easier if the diagnostic output changes, which is a potential problem. I'm not 
forcing you to do that in this jira; maybe we can add this to YARN-9304 after the 
issue has been committed.
{panel}

{panel:title=IntelFpgaOpenclPlugin.java}
- {{java.util.*}}
- Do we need the conf Configuration object of {{AbstractFpgaVendorPlugin}} at 
all? It is not used anywhere, and we're using Configuration object only in 
initPlugin where we use the parameter, and the class variable.
- In line 171: as I see we do not use the variable e in {{catch (IOException 
e)}}? If it is important, let's log it, if not, then replace "e" with "ignore" 
indicating we do not intend to use it.
- In line 172: no need to keep the String msg variable, we just have to log it.
{panel}

> Various fixes are needed in IntelFpgaOpenclPlugin
> -
>
> Key: YARN-9266
> URL: https://issues.apache.org/jira/browse/YARN-9266
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9266-001.patch, YARN-9266-002.patch, 
> YARN-9266-003.patch, YARN-9266-004.patch
>
>
> Problems identified in this class:
>  * {{InnerShellExecutor}} ignores the timeout parameter
>  * {{configureIP()}} uses printStackTrace() instead of logging
>  * {{configureIP()}} does not log the output of aocl if the exit code != 0
>  * {{parseDiagnoseInfo()}} is too heavyweight – it should be in its own class 
> for better testability
>  * {{downloadIP()}} uses {{contains()}} for file name check – this can really 
> surprise users in some cases (eg. you want to use hello.aocx but hello2.aocx 
> also matches)
>  * method name {{downloadIP()}} is misleading – it actually tries to find 
> the file. Everything is downloaded (localized) at this point.
>  * {{@VisibleForTesting}} methods should be package private
>  * {{aliasMap}} is not needed - store the acl number in the {{FpgaDevice}} 
> class



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9135) NM State store ResourceMappings serialization are tested with Strings instead of real Device objects

2019-02-14 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768360#comment-16768360
 ] 

Peter Bacsko commented on YARN-9135:


{quote}Using the immutable type in the definition of the methods informs 
clients that they can't modify the collection{quote}

What I found is that people have different preference over this: 
[https://stackoverflow.com/questions/38087900/is-it-better-to-return-an-immutablemap-or-a-map]

I'm still leaning towards returning a more generic Map. BTW if you use 
{{Collections.unmodifiableMap()}} like this, it behaves the same way:

{{Map copy = Collections.unmodifiableMap(new HashMap(original));  // returns Map}}

Anyway I'm ok to +1 it if you want to keep it.
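To illustrate the {{Collections.unmodifiableMap()}} behavior discussed above, here is a small sketch (the map contents are made up for the example):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class UnmodifiableMapDemo {
    public static void main(String[] args) {
        Map<String, String> original = new HashMap<>();
        original.put("cpu", "/sys/fs/cgroup/cpu");

        // Declared as a plain Map, yet the copy rejects mutation at runtime
        Map<String, String> copy =
            Collections.unmodifiableMap(new HashMap<>(original));

        try {
            copy.put("memory", "/sys/fs/cgroup/memory");
        } catch (UnsupportedOperationException e) {
            System.out.println("mutation rejected");
        }

        // Since we wrapped a fresh HashMap, the copy is a snapshot:
        // later changes to the original are not visible through it
        original.put("memory", "/sys/fs/cgroup/memory");
        System.out.println(copy.size()); // still 1
    }
}
```

This is the point of the comment: the static return type stays the generic {{Map}}, while immutability is enforced at runtime rather than advertised in the signature.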

> NM State store ResourceMappings serialization are tested with Strings instead 
> of real Device objects
> 
>
> Key: YARN-9135
> URL: https://issues.apache.org/jira/browse/YARN-9135
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-9135.001.patch, YARN-9135.003.patch, 
> YARN-9135.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9138) Test error handling of nvidia-smi binary execution of GpuDiscoverer

2019-02-14 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768340#comment-16768340
 ] 

Peter Bacsko commented on YARN-9138:


+1 non-binding

> Test error handling of nvidia-smi binary execution of GpuDiscoverer
> ---
>
> Key: YARN-9138
> URL: https://issues.apache.org/jira/browse/YARN-9138
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-9138.001.patch, YARN-9138.002.patch, 
> YARN-9138.003.patch
>
>
> The code that executes nvidia-smi (doing GPU device auto-discovery) doesn't 
> have much test coverage.
> This patch adds tests to this part of the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9139) Simplify initializer code of GpuDiscoverer

2019-02-14 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768333#comment-16768333
 ] 

Peter Bacsko commented on YARN-9139:


+1 non-binding

> Simplify initializer code of GpuDiscoverer
> --
>
> Key: YARN-9139
> URL: https://issues.apache.org/jira/browse/YARN-9139
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-9139.001.patch, YARN-9139.002.patch, 
> YARN-9139.003.patch, YARN-9139.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9235) If linux container executor is not set for a GPU cluster GpuResourceHandlerImpl is not initialized and NPE is thrown

2019-02-14 Thread JIRA


[ 
https://issues.apache.org/jira/browse/YARN-9235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768325#comment-16768325
 ] 

Antal Bálint Steinbach commented on YARN-9235:
--

Hi [~sunilg] ,

I uploaded a very simple patch.

> If linux container executor is not set for a GPU cluster 
> GpuResourceHandlerImpl is not initialized and NPE is thrown
> 
>
> Key: YARN-9235
> URL: https://issues.apache.org/jira/browse/YARN-9235
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Antal Bálint Steinbach
>Assignee: Antal Bálint Steinbach
>Priority: Major
> Attachments: YARN-9235.001.patch
>
>
> If GPU plugin is enabled for the NodeManager, it is possible to run jobs with 
> GPU.
> However, if LinuxContainerExecutor is not configured, an NPE is thrown when 
> calling 
> {code:java}
> GpuResourcePlugin.getNMResourceInfo{code}
> Also, there are no warns in the log if GPU is misconfigured like this. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9235) If linux container executor is not set for a GPU cluster GpuResourceHandlerImpl is not initialized and NPE is thrown

2019-02-14 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768331#comment-16768331
 ] 

Peter Bacsko commented on YARN-9235:


[~bsteinbach] can you add a simple unit test for this scenario?

> If linux container executor is not set for a GPU cluster 
> GpuResourceHandlerImpl is not initialized and NPE is thrown
> 
>
> Key: YARN-9235
> URL: https://issues.apache.org/jira/browse/YARN-9235
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Antal Bálint Steinbach
>Assignee: Antal Bálint Steinbach
>Priority: Major
> Attachments: YARN-9235.001.patch
>
>
> If GPU plugin is enabled for the NodeManager, it is possible to run jobs with 
> GPU.
> However, if LinuxContainerExecutor is not configured, an NPE is thrown when 
> calling 
> {code:java}
> GpuResourcePlugin.getNMResourceInfo{code}
> Also, there are no warns in the log if GPU is misconfigured like this. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9123) Clean up and split testcases in TestNMWebServices for GPU support

2019-02-14 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-9123:
-
Attachment: YARN-9123.006.patch

> Clean up and split testcases in TestNMWebServices for GPU support
> -
>
> Key: YARN-9123
> URL: https://issues.apache.org/jira/browse/YARN-9123
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-9123.001.patch, YARN-9123.002.patch, 
> YARN-9123.003.patch, YARN-9123.004.patch, YARN-9123.005.patch, 
> YARN-9123.006.patch
>
>
> The following testcases can be cleaned up a bit: 
> TestNMWebServices#testGetNMResourceInfo - Can be split up to 3 different cases
> TestNMWebServices#testGetYarnGpuResourceInfo



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9270) Minor cleanup in TestFpgaDiscoverer

2019-02-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768283#comment-16768283
 ] 

Hadoop QA commented on YARN-9270:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 169 unchanged - 15 fixed = 170 total (was 184) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
46s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9270 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958713/YARN-9270-003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 27418ea4a2bd 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / dfe0f42 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/23405/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23405/testReport/ |
| Max. process+thread count | 306 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 

[jira] [Commented] (YARN-9123) Clean up and split testcases in TestNMWebServices for GPU support

2019-02-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768176#comment-16768176
 ] 

Hadoop QA commented on YARN-9123:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 5 unchanged - 1 fixed = 6 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 36s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.nodemanager.webapp.TestNMWebServices |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9123 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958710/YARN-9123.005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b49b623e9056 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / dfe0f42 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/23404/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 

[jira] [Commented] (YARN-9264) [Umbrella] Follow-up on IntelOpenCL FPGA plugin

2019-02-14 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768158#comment-16768158
 ] 

Peter Bacsko commented on YARN-9264:


Suggested order of committing the patches: YARN-9265 and YARN-9266 should go in 
first. Then I'll verify them on a local machine with an FPGA card.

If everything is OK, we can proceed with the rest.

> [Umbrella] Follow-up on IntelOpenCL FPGA plugin
> ---
>
> Key: YARN-9264
> URL: https://issues.apache.org/jira/browse/YARN-9264
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.1.0
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
>
> The Intel FPGA resource type support was released in Hadoop 3.1.0.
> Right now the plugin implementation has some deficiencies that need to be 
> fixed. This JIRA lists all problems that need to be resolved.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9139) Simplify initializer code of GpuDiscoverer

2019-02-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768183#comment-16768183
 ] 

Hadoop QA commented on YARN-9139:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
16s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 223 unchanged - 2 fixed = 223 total (was 225) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 
57s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9139 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958704/YARN-9139.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a471834ef55e 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / dfe0f42 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 

[jira] [Commented] (YARN-9123) Clean up and split testcases in TestNMWebServices for GPU support

2019-02-14 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768160#comment-16768160
 ] 

Szilard Nemeth commented on YARN-9123:
--

Thanks [~adam.antal]

> Clean up and split testcases in TestNMWebServices for GPU support
> -
>
> Key: YARN-9123
> URL: https://issues.apache.org/jira/browse/YARN-9123
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-9123.001.patch, YARN-9123.002.patch, 
> YARN-9123.003.patch, YARN-9123.004.patch, YARN-9123.005.patch
>
>
> The following testcases can be cleaned up a bit: 
> TestNMWebServices#testGetNMResourceInfo - Can be split up into 3 different cases
> TestNMWebServices#testGetYarnGpuResourceInfo



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9269) Minor cleanup in FpgaResourceAllocator

2019-02-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768154#comment-16768154
 ] 

Hadoop QA commented on YARN-9269:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 0 new + 50 unchanged - 8 fixed = 50 total (was 58) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 
13s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9269 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958705/YARN-9269-003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0c5593d0a8da 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / dfe0f42 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23402/testReport/ |
| Max. process+thread count | 305 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 

[jira] [Commented] (YARN-9138) Test error handling of nvidia-smi binary execution of GpuDiscoverer

2019-02-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768153#comment-16768153
 ] 

Hadoop QA commented on YARN-9138:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 0 new + 9 unchanged - 2 fixed = 9 total (was 11) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m  
2s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9138 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958706/YARN-9138.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d3cd15d99c26 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / dfe0f42 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/23403/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/23403/console |
| Powered by | Apache Yetus 0.8.0  

[jira] [Updated] (YARN-9270) Minor cleanup in TestFpgaDiscoverer

2019-02-14 Thread Peter Bacsko (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-9270:
---
Attachment: YARN-9270-003.patch

> Minor cleanup in TestFpgaDiscoverer
> ---
>
> Key: YARN-9270
> URL: https://issues.apache.org/jira/browse/YARN-9270
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9270-001.patch, YARN-9270-002.patch, 
> YARN-9270-003.patch
>
>
> Let's do some cleanup in this class.
> * {{testLinuxFpgaResourceDiscoverPluginConfig}} - this test should be split 
> up into 5 different tests, because it tests 5 different scenarios.
> * remove {{setNewEnvironmentHack()}} - too complicated. We can introduce a 
> {{Function}} in the plugin class like {{Function envProvider 
> = System::getenv()}} plus a setter method which allows the test to modify 
> {{envProvider}}. Much simpler and straightforward.
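The injectable env-provider idea described above could be sketched roughly as follows. This is a hypothetical illustration: the class name, method names, and the environment variable are assumptions for the sake of the example, not the actual FpgaDiscoverer API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of the suggested refactor: instead of mutating the JVM's
// environment via reflection ("setNewEnvironmentHack"), the plugin reads
// variables through an injectable function that tests can replace.
// Class/method names and the env var are illustrative, not the real API.
class FpgaDiscovererSketch {
    // Defaults to the real process environment.
    private Function<String, String> envProvider = System::getenv;

    // Test-only setter: lets a test inject a fake environment.
    void setEnvProvider(Function<String, String> provider) {
        this.envProvider = provider;
    }

    String getSdkRoot() {
        return envProvider.apply("ALTERAOCLSDKROOT");
    }

    public static void main(String[] args) {
        FpgaDiscovererSketch discoverer = new FpgaDiscovererSketch();
        Map<String, String> fakeEnv = new HashMap<>();
        fakeEnv.put("ALTERAOCLSDKROOT", "/opt/intelFPGA");
        // Inject the fake environment instead of hacking System.getenv():
        discoverer.setEnvProvider(fakeEnv::get);
        System.out.println(discoverer.getSdkRoot()); // prints /opt/intelFPGA
    }
}
```

A test can then swap in any map-backed provider without touching global JVM state, which is what makes the reflection hack unnecessary.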



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9270) Minor cleanup in TestFpgaDiscoverer

2019-02-14 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768139#comment-16768139
 ] 

Peter Bacsko commented on YARN-9270:


Patch v3 - handled checkstyle issues.

> Minor cleanup in TestFpgaDiscoverer
> ---
>
> Key: YARN-9270
> URL: https://issues.apache.org/jira/browse/YARN-9270
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9270-001.patch, YARN-9270-002.patch, 
> YARN-9270-003.patch
>
>
> Let's do some cleanup in this class.
> * {{testLinuxFpgaResourceDiscoverPluginConfig}} - this test should be split 
> up into 5 different tests, because it tests 5 different scenarios.
> * remove {{setNewEnvironmentHack()}} - too complicated. We can introduce a 
> {{Function}} in the plugin class like {{Function envProvider 
> = System::getenv()}} plus a setter method which allows the test to modify 
> {{envProvider}}. Much simpler and straightforward.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9269) Minor cleanup in FpgaResourceAllocator

2019-02-14 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768129#comment-16768129
 ] 

Adam Antal commented on YARN-9269:
--

Perfect, +1 (non-binding).

> Minor cleanup in FpgaResourceAllocator
> --
>
> Key: YARN-9269
> URL: https://issues.apache.org/jira/browse/YARN-9269
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9269-001.patch, YARN-9269-002.patch, 
> YARN-9269-003.patch
>
>
> Some stuff that we observed:
>  * {{addFpga()}} - we check for duplicate devices, but we don't print any 
> error/warning if there's any.
>  * {{findMatchedFpga()}} should be called {{findMatchingFpga()}}. Also, is 
> this method even needed? We already receive an {{FpgaDevice}} instance in 
> {{updateFpga()}} which I believe is the same that we're looking up.
>  * variable {{IPIDpreference}} is confusing
>  * {{availableFpga}} / {{usedFpgaByRequestor}} are instances of 
> {{LinkedHashMap}}. What's the rationale behind this? Doesn't a simple 
> {{HashMap}} suffice?
>  * {{usedFpgaByRequestor}} should be renamed, naming is a bit unclear
>  * {{allowedFpgas}} should be an immutable list
>  * {{@VisibleForTesting}} methods should be package private
>  * get rid of {{*}} imports
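Two of the review points above (the silent duplicate check in {{addFpga()}} and exposing {{allowedFpgas}} as an immutable list) could look roughly like this. All names here are made up for illustration and are not the actual FpgaResourceAllocator code.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative sketch only: warn on a duplicate device instead of silently
// ignoring it, and hand out the allowed-device list as a read-only view.
class FpgaAllocatorSketch {
    private final List<String> allowedFpgas = new ArrayList<>();

    boolean addFpga(String device) {
        if (allowedFpgas.contains(device)) {
            // Previously the duplicate was dropped without any signal.
            System.err.println("WARN: duplicate FPGA device ignored: " + device);
            return false;
        }
        return allowedFpgas.add(device);
    }

    // Callers get an unmodifiable view; mutation attempts throw
    // UnsupportedOperationException instead of corrupting internal state.
    List<String> getAllowedFpgas() {
        return Collections.unmodifiableList(allowedFpgas);
    }

    public static void main(String[] args) {
        FpgaAllocatorSketch alloc = new FpgaAllocatorSketch();
        System.out.println(alloc.addFpga("acl0")); // true
        System.out.println(alloc.addFpga("acl0")); // false, warning logged
        System.out.println(alloc.getAllowedFpgas().size()); // 1
    }
}
```

On the {{LinkedHashMap}} question: unless iteration order is meaningful to callers, a plain {{HashMap}} would indeed suffice, which is presumably the point of the comment.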



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9305) Add logging to TestNMWebServices

2019-02-14 Thread Adam Antal (JIRA)
Adam Antal created YARN-9305:


 Summary: Add logging to TestNMWebServices
 Key: YARN-9305
 URL: https://issues.apache.org/jira/browse/YARN-9305
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: yarn
Affects Versions: 3.2.0
Reporter: Adam Antal


There is an ongoing cleanup in TestNMWebServices (YARN-9123). This issue 
connects to that: add logging to the testcase in order to have traceable runs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9123) Clean up and split testcases in TestNMWebServices for GPU support

2019-02-14 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768127#comment-16768127
 ] 

Adam Antal commented on YARN-9123:
--

+1 (non-binding).

I filed YARN-9304 for my concerns.

> Clean up and split testcases in TestNMWebServices for GPU support
> -
>
> Key: YARN-9123
> URL: https://issues.apache.org/jira/browse/YARN-9123
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-9123.001.patch, YARN-9123.002.patch, 
> YARN-9123.003.patch, YARN-9123.004.patch, YARN-9123.005.patch
>
>
> The following testcases can be cleaned up a bit: 
> TestNMWebServices#testGetNMResourceInfo - Can be split up into 3 different cases
> TestNMWebServices#testGetYarnGpuResourceInfo



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9304) Improve various tests in YARN

2019-02-14 Thread Adam Antal (JIRA)
Adam Antal created YARN-9304:


 Summary: Improve various tests in YARN
 Key: YARN-9304
 URL: https://issues.apache.org/jira/browse/YARN-9304
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: yarn
Affects Versions: 3.2.0
Reporter: Adam Antal
Assignee: Adam Antal


There are some recent refactors in YARN, especially in the GPU/FPGA field 
(YARN-9087, YARN-9134, YARN-9133, YARN-9123, YARN-9100, YARN-9100). The code 
quality and the lack of logging make it harder to fix tests. This issue aims 
to add further logging to the testcases and do some minor cleanups, similar 
to those example JIRAs. 

(Further subtasks can be added later to separate the test classes.)






[jira] [Commented] (YARN-9123) Clean up and split testcases in TestNMWebServices for GPU support

2019-02-14 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768114#comment-16768114
 ] 

Szilard Nemeth commented on YARN-9123:
--

Hi [~adam.antal] and [~pbacsko]!
Thanks for your review comments. I fixed all the issues; please see the new 
patch!

> Clean up and split testcases in TestNMWebServices for GPU support
> -
>
> Key: YARN-9123
> URL: https://issues.apache.org/jira/browse/YARN-9123
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-9123.001.patch, YARN-9123.002.patch, 
> YARN-9123.003.patch, YARN-9123.004.patch, YARN-9123.005.patch
>
>
> The following testcases can be cleaned up a bit: 
> TestNMWebServices#testGetNMResourceInfo - Can be split up to 3 different cases
> TestNMWebServices#testGetYarnGpuResourceInfo






[jira] [Updated] (YARN-9123) Clean up and split testcases in TestNMWebServices for GPU support

2019-02-14 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-9123:
-
Attachment: YARN-9123.005.patch

> Clean up and split testcases in TestNMWebServices for GPU support
> -
>
> Key: YARN-9123
> URL: https://issues.apache.org/jira/browse/YARN-9123
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-9123.001.patch, YARN-9123.002.patch, 
> YARN-9123.003.patch, YARN-9123.004.patch, YARN-9123.005.patch
>
>
> The following testcases can be cleaned up a bit: 
> TestNMWebServices#testGetNMResourceInfo - Can be split up to 3 different cases
> TestNMWebServices#testGetYarnGpuResourceInfo






[jira] [Commented] (YARN-7761) [UI2] Clicking 'master container log' or 'Link' next to 'log' under application's appAttempt goes to Old UI's Log link

2019-02-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16768107#comment-16768107
 ] 

Hadoop QA commented on YARN-7761:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
 4s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
35m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:63396be |
| JIRA Issue | YARN-7761 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956532/YARN-7761-branch-3.2.001.patch
 |
| Optional Tests |  dupname  asflicense  shadedclient  |
| uname | Linux d1449fb719f5 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.2 / f4b9ba2 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 339 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/23400/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> [UI2] Clicking 'master container log' or 'Link' next to 'log' under 
> application's appAttempt goes to Old UI's Log link
> --
>
> Key: YARN-7761
> URL: https://issues.apache.org/jira/browse/YARN-7761
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sumana Sathish
>Assignee: Akhil PB
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-7761-branch-3.2.001.patch, YARN-7761.001.patch, 
> YARN-7761.002.patch, YARN-7761.003.patch
>
>
> Clicking 'master container log' or 'Link' next to 'Log' under application's 
> appAttempt goes to Old UI's Log link





