[jira] [Updated] (YARN-9038) Add ability to publish/unpublish volumes on node managers

2018-12-06 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-9038:
--
Attachment: YARN-9038.001.patch

> Add ability to publish/unpublish volumes on node managers
> -
>
> Key: YARN-9038
> URL: https://issues.apache.org/jira/browse/YARN-9038
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-9038.001.patch
>
>
> We need to add the ability to publish volumes on node managers to a staging 
> area under the NM's local dir, and then mount that path into the docker 
> container to make it visible inside the container.






[jira] [Updated] (YARN-9038) Add ability to publish/unpublish volumes on node managers

2018-12-06 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-9038:
--
Summary: Add ability to publish/unpublish volumes on node managers  (was: 
Add ability to publish volumes on node managers)

> Add ability to publish/unpublish volumes on node managers
> -
>
> Key: YARN-9038
> URL: https://issues.apache.org/jira/browse/YARN-9038
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-9038.001.patch
>
>
> We need to add the ability to publish volumes on node managers to a staging 
> area under the NM's local dir, and then mount that path into the docker 
> container to make it visible inside the container.






[jira] [Updated] (YARN-8940) [CSI] Add volume as a top-level attribute in service spec

2018-12-06 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8940:
--
Summary: [CSI] Add volume as a top-level attribute in service spec   (was: 
Add volume as a top-level attribute in service spec )

> [CSI] Add volume as a top-level attribute in service spec 
> --
>
> Key: YARN-8940
> URL: https://issues.apache.org/jira/browse/YARN-8940
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
>
> Initial thought:
> {noformat}
> {
>   "name": "volume example",
>   "version": "1.0.0",
>   "description": "a volume simple example",
>   "components" :
> [
>   {
> "name": "",
> "number_of_containers": 1,
> "artifact": {
>   "id": "docker.io/centos:latest",
>   "type": "DOCKER"
> },
> "launch_command": "sleep,120",
> "configuration": {
>   "env": {
> "YARN_CONTAINER_RUNTIME_DOCKER_RUN_OVERRIDE_DISABLE":"true"
>   }
> },
> "resource": {
>   "cpus": 1,
>   "memory": "256",
> },
> "volumes": [
>   {
> "volume" : {
>   "type": "s3_csi",
>   "id": "5504d4a8-b246-11e8-94c2-026b17aa1190",
>   "capability" : {
> "min": "5Gi",
> "max": "100Gi"
>   },
>   "source_path": "s3://my_bucket/my", # optional for object stores
>   "mount_path": "/mnt/data", # required, the mount point in 
> docker container
>   "access_mode": "SINGLE_READ", # how the volume can be accessed
> }
>   }
> ]
>   }
> }
>   ]
> }
> {noformat}
> Open for discussion.






[jira] [Created] (YARN-9086) [CSI] Run csi-driver-adaptor as aux service

2018-12-06 Thread Weiwei Yang (JIRA)
Weiwei Yang created YARN-9086:
-

 Summary: [CSI] Run csi-driver-adaptor as aux service
 Key: YARN-9086
 URL: https://issues.apache.org/jira/browse/YARN-9086
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Weiwei Yang
Assignee: Weiwei Yang


Since the csi-driver-adaptor's runtime depends on protobuf3, we need to run it 
with a separate class loader. Aux services provide such an ability; this ticket 
tracks the effort to run the adaptors as the NM's aux services.
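A minimal sketch of how the adaptor could be declared as an NM aux service. 
Only {{NM_AUX_SERVICES}} and {{NM_AUX_SERVICE_FMT}} are existing 
{{YarnConfiguration}} constants; the service name and implementation class 
below are assumptions, not the actual patch:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class CsiAuxServiceConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new YarnConfiguration();
    // Register a hypothetical "csi-driver-adaptor" aux service on the NM.
    conf.set(YarnConfiguration.NM_AUX_SERVICES, "csi-driver-adaptor");
    // The implementation class name below is an assumption.
    conf.set(
        String.format(YarnConfiguration.NM_AUX_SERVICE_FMT,
            "csi-driver-adaptor"),
        "org.apache.hadoop.yarn.csi.adaptor.CsiDriverAdaptorService");
  }
}
{code}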






[jira] [Commented] (YARN-9038) [CSI] Add ability to publish/unpublish volumes on node managers

2018-12-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711322#comment-16711322
 ] 

Hadoop QA commented on YARN-9038:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-yarn-csi in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  7m  
3s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  7m  3s{color} | 
{color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  7m  3s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 30s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 14 new + 258 unchanged - 0 fixed = 272 total (was 258) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-yarn-csi in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
11s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-yarn-csi in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-csi 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
46s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
25s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 45s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
16s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 29s{color} 
| 

[jira] [Updated] (YARN-8822) Nvidia-docker v2 support

2018-12-06 Thread Charo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charo Zhang updated YARN-8822:
--
Attachment: YARN-8822.003.patch

> Nvidia-docker v2 support
> 
>
> Key: YARN-8822
> URL: https://issues.apache.org/jira/browse/YARN-8822
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.1
>Reporter: Zhankun Tang
>Assignee: Charo Zhang
>Priority: Critical
>  Labels: Docker
> Attachments: YARN-8822-branch-3.1.1.001.patch, YARN-8822.001.patch, 
> YARN-8822.002.patch, YARN-8822.003.patch
>
>
> To run a GPU container with Docker, we already have nvidia-docker v1 support, 
> but it is deprecated per 
> [here|https://github.com/NVIDIA/nvidia-docker/wiki/About-version-2.0]. We 
> should support nvidia-docker v2.






[jira] [Updated] (YARN-9057) [CSI] CSI jar file should not bundle third party dependencies

2018-12-06 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-9057:
--
Summary: [CSI] CSI jar file should not bundle third party dependencies  
(was: CSI jar file should not bundle third party dependencies)

> [CSI] CSI jar file should not bundle third party dependencies
> -
>
> Key: YARN-9057
> URL: https://issues.apache.org/jira/browse/YARN-9057
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Weiwei Yang
>Priority: Blocker
>  Labels: CSI
> Fix For: 3.3.0
>
> Attachments: YARN-9057.001.patch, YARN-9057.002.patch
>
>
> hadoop-yarn-csi-3.3.0-SNAPSHOT.jar bundles all third-party classes like a 
> shaded jar instead of only the CSI classes.  This generates error messages 
> for the YARN CLI:
> {code}
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/usr/local/hadoop-3.3.0-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/local/hadoop-3.3.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-csi-3.3.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {code}






[jira] [Commented] (YARN-9051) Integrate multiple CustomResourceTypesConfigurationProvider implementations into one

2018-12-06 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711908#comment-16711908
 ] 

Szilard Nemeth commented on YARN-9051:
--

Thanks [~haibochen] for your review!

Fixed all the points you mentioned. 

I hope the patch looks good now.

Thanks!

> Integrate multiple CustomResourceTypesConfigurationProvider implementations 
> into one
> 
>
> Key: YARN-9051
> URL: https://issues.apache.org/jira/browse/YARN-9051
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-9051.001.patch, YARN-9051.002.patch
>
>
> CustomResourceTypesConfigurationProvider (extends LocalConfigurationProvider) 
> has 5 implementations on trunk nowadays.
> These could be integrated into 1 common class.
> Also, 
> {{org.apache.hadoop.yarn.util.resource.TestResourceUtils#addNewTypesToResources}}
>  has similar functionality so this can be considered as well.
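A rough sketch of what the consolidation could look like: a single provider 
parameterized with the custom resource types it should serve. The class layout, 
constructor and accessor here are assumptions, not the actual patch:

{code:java}
import java.util.List;
import org.apache.hadoop.yarn.LocalConfigurationProvider;

// Sketch only: one provider replacing the five near-identical copies,
// parameterized with the custom resource type names it should register.
public class CustomResourceTypesConfigurationProvider
    extends LocalConfigurationProvider {

  private final List<String> customResourceTypes;

  public CustomResourceTypesConfigurationProvider(
      List<String> customResourceTypes) {
    this.customResourceTypes = customResourceTypes;
  }

  public List<String> getCustomResourceTypes() {
    return customResourceTypes;
  }
}
{code}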






[jira] [Updated] (YARN-9051) Integrate multiple CustomResourceTypesConfigurationProvider implementations into one

2018-12-06 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-9051:
-
Attachment: YARN-9051.002.patch

> Integrate multiple CustomResourceTypesConfigurationProvider implementations 
> into one
> 
>
> Key: YARN-9051
> URL: https://issues.apache.org/jira/browse/YARN-9051
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-9051.001.patch, YARN-9051.002.patch
>
>
> CustomResourceTypesConfigurationProvider (extends LocalConfigurationProvider) 
> has 5 implementations on trunk nowadays.
> These could be integrated into 1 common class.
> Also, 
> {{org.apache.hadoop.yarn.util.resource.TestResourceUtils#addNewTypesToResources}}
>  has similar functionality so this can be considered as well.






[jira] [Commented] (YARN-8738) FairScheduler configures maxResources or minResources as negative, the value parse to a positive number.

2018-12-06 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712039#comment-16712039
 ] 

Haibo Chen commented on YARN-8738:
--

Thanks [~snemeth] for the elaboration. What I meant is that because only 
findPercentage() throws AllocationConfigurationException per the comment in the 
code, we can throw AllocationConfigurationException for negative values too, 
and we can update the message passed to createConfigException to say that the 
value must not be negative. Note that in findPercentage() we have different 
messages for different types of issues. The message should be sufficient to 
tell the user what exactly the issue is.
{code:java}
  private static ConfigurableResource parseNewStyleResource(String value,
      long missing) throws AllocationConfigurationException {
    // ... (parsing logic elided)
      } catch (AllocationConfigurationException ex) {
        // This only comes from findPercentage()
        throw createConfigException(value, "The "
            + "resource values must all be percentages. \""
            + resourceValue + "\" is either not a number or does not "
            + "include the '%' symbol.", ex);
      }
    }
    return configurableResource;
  }
{code}
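A hypothetical sketch of that suggestion: reject negative values with a plain 
AllocationConfigurationException and a case-specific message instead of a new 
exception type (helper name and message are illustrative, not the actual patch):

{code:java}
// Sketch only, not the actual patch.
private static void checkNonNegative(String value, long parsed)
    throws AllocationConfigurationException {
  if (parsed < 0) {
    throw new AllocationConfigurationException("Invalid resource value in \""
        + value + "\": resource values must not be negative.");
  }
}
{code}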

> FairScheduler configures maxResources or minResources as negative, the value 
> parse to a positive number.
> 
>
> Key: YARN-8738
> URL: https://issues.apache.org/jira/browse/YARN-8738
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.2.0
>Reporter: Sen Zhao
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-8738.001.patch, YARN-8738.002.patch
>
>
> If maxResources or minResources is configured as a negative number, the value 
> will be positive after parsing.
> If this is a problem, I will fix it. If not, 
> FairSchedulerConfiguration#parseNewStyleResource should parse negative numbers 
> the same way parseOldStyleResource does.
> cc:[~templedf], [~leftnoteasy]






[jira] [Updated] (YARN-9085) Guaranteed and MaxCapacity CSQueueMetrics

2018-12-06 Thread Jonathan Hung (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-9085:

Attachment: YARN-9085.002.patch

> Guaranteed and MaxCapacity CSQueueMetrics
> -
>
> Key: YARN-9085
> URL: https://issues.apache.org/jira/browse/YARN-9085
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.3
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-9085.001.patch, YARN-9085.002.patch
>
>
> Would be useful to have Absolute Capacity/Absolute Max Capacity for queues to 
> compare against allocated/pending/etc.






[jira] [Commented] (YARN-9035) Allow better troubleshooting of FS container assignments and lack of container assignments

2018-12-06 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711943#comment-16711943
 ] 

Haibo Chen commented on YARN-9035:
--

Thanks [~snemeth] for the patch. This would be very helpful when it comes to 
debugging scheduler decisions, like you said. 

I think the current approach of creating new objects to represent assignment or 
validation results (AMShareLimitCheckResult) is a bit too heavy: scheduling is 
executed very often, so we should do things efficiently. I am in favor of doing 
just if (isOverAMShareLimit() && LOG.isDebugEnabled()) \{ LOG.debug(...); }

Plus, we would need to turn on debug logging for the new classes in order to 
get the debug logs, which is extra work with no gain.
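A sketch of that lightweight pattern, with an illustrative message (no 
per-scheduling-cycle result objects are allocated):

{code:java}
if (isOverAMShareLimit() && LOG.isDebugEnabled()) {
  LOG.debug("Assignment skipped for " + getApplicationAttemptId()
      + ": over AM share limit");
}
{code}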

> Allow better troubleshooting of FS container assignments and lack of 
> container assignments
> --
>
> Key: YARN-9035
> URL: https://issues.apache.org/jira/browse/YARN-9035
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-9035.001.patch
>
>
> The call chain from {{FairScheduler.attemptScheduling}}, through 
> {{FSQueue}} (parent / leaf) {{assignContainer}} and down to 
> {{FSAppAttempt#assignContainer}} has many calls and many potential 
> conditions where {{Resources.none()}} can be returned, meaning a container is 
> not allocated.
>  A bunch of these empty assignments do not come with a debug log statement, 
> so it's very hard to tell what condition led the {{FairScheduler}} to a 
> decision where containers are not allocated.
>  On top of that, in many places, it's also difficult to tell why a 
> container was allocated to an app attempt.
> The goal is to have a common place (i.e. class) that does all the 
> logging, so users can conveniently control all the logs if they are curious 
> why (and why not) container assignments happened.
>  Also, it would be handy if readers of the log could easily tell which 
> {{AppAttempt}} a log record was created for; in other words, every log 
> record should include the ID of the application / app attempt, if possible.
>  
> Details of implementation: 
>  As most of the already in-place debug messages were protected by a condition 
> that checks whether the debug level is enabled on the loggers, I followed a 
> similar pattern. All the relevant log messages are created with the class 
> {{ResourceAssignment}}. 
>  This class is a wrapper for the assigned {{Resource}} object and has a 
> single logger, so clients should use its helper methods to create log 
> records. There is a helper method called {{shouldLogReservationActivity}} 
> that checks if DEBUG or TRACE level is activated on the logger. 
>  See the javadoc on this class for further information.
>  
> {{ResourceAssignment}} is also responsible for adding the app / app attempt ID 
> to every log record (with some exceptions).
>  A couple of check classes are introduced: they are responsible for running 
> and storing the results of checks that are dependencies of a successful 
> container allocation.






[jira] [Created] (YARN-9088) Non-exclusive labels break QueueMetrics

2018-12-06 Thread Brandon Scheller (JIRA)
Brandon Scheller created YARN-9088:
--

 Summary: Non-exclusive labels break QueueMetrics
 Key: YARN-9088
 URL: https://issues.apache.org/jira/browse/YARN-9088
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacity scheduler, resourcemanager
Affects Versions: 2.8.5
Reporter: Brandon Scheller


QueueMetrics are broken (random/negative values) when non-exclusive labels are 
being used and unlabeled containers run on labeled nodes.

This is caused by the change in the patch here:

https://issues.apache.org/jira/browse/YARN-6467

It assumes that a container's label will be the same as the label of the node 
it is running on.

If you look within the patch, sometimes metrics are updated using the 
request.getNodeLabelExpression(). And sometimes they are updated using 
node.getPartition().

This means that in the case where the node is labeled while the request isn't, 
these metrics only get updated when referring to the default queue. This stops 
metrics from balancing out and results in incorrect and negative values in 
QueueMetrics. 
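A simplified illustration of the mismatch. The method names approximate the 
QueueMetrics calls, not their exact signatures:

{code:java}
// Simplified illustration: the pending side is keyed by the request's
// label expression while the allocation side is keyed by the node's
// partition, so the two updates hit different metric buckets and never
// cancel out when an unlabeled request runs on a labeled node.
void illustrate(QueueMetrics metrics, ResourceRequest request,
    SchedulerNode node, String user, Resource resource, int numContainers) {
  String requestLabel = request.getNodeLabelExpression();  // "" (unlabeled)
  String nodePartition = node.getPartition();              // e.g. "labeled"
  metrics.incrPendingResources(requestLabel, user, numContainers, resource);
  metrics.allocateResources(nodePartition, user, numContainers, resource, true);
}
{code}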






[jira] [Commented] (YARN-8738) FairScheduler configures maxResources or minResources as negative, the value parse to a positive number.

2018-12-06 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711940#comment-16711940
 ] 

Szilard Nemeth commented on YARN-8738:
--

Thanks [~haibochen] for your comments!

The reason why I introduced the {{NegativeResourceDefinitionException}} is that 
I wanted to differentiate the exception message coming from 
FairSchedulerConfiguration#parseNewStyleResourceAsPercentage, which calls 
findPercentage. 

Maybe I'm missing something, but how could I know from the exception object 
thrown by findPercentage whether the value was negative or whether one of the 
calls creating an AllocationConfigurationException (in findPercentage) 
happened? 

As the code in FairSchedulerConfiguration#parseNewStyleResourceAsPercentage 
catches the instances of AllocationConfigurationException and manipulates 
their message, I can't really introduce a different message for the negative 
value case at this level. It would only have been possible if I introduced 
a boolean flag (or an enum) in AllocationConfigurationException to indicate 
whether the value was a negative value, an invalid percentage value or a 
missing resource. But that would be way more hacky than introducing a new 
exception type, like I did.

One alternative I can think of is introducing an enum type for 
AllocationConfigurationException that could describe the nature of the config 
issue.
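A hypothetical sketch of that enum-based alternative, just to illustrate the 
idea (this is not the real class):

{code:java}
enum ConfigIssueKind { NEGATIVE_VALUE, INVALID_PERCENTAGE, MISSING_RESOURCE }

// Sketch only: the real AllocationConfigurationException does not carry
// a kind; this shows how callers could distinguish the cases.
class AllocationConfigurationException extends Exception {
  private final ConfigIssueKind kind;

  AllocationConfigurationException(String message, ConfigIssueKind kind) {
    super(message);
    this.kind = kind;
  }

  ConfigIssueKind getKind() {
    return kind;
  }
}
{code}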

The alternative you suggested would mean throwing exceptions for negative 
values in parseNewStyleResource and in 
parseResourceConfigValue(java.lang.String, long) too, which I don't like 
for two reasons: exception handling will happen in more places, and we would 
not handle exceptional cases right away but one or more levels up in the call 
hierarchy. I think for most cases, handling exceptions right away is a better 
approach.

 

What do you think? 

> FairScheduler configures maxResources or minResources as negative, the value 
> parse to a positive number.
> 
>
> Key: YARN-8738
> URL: https://issues.apache.org/jira/browse/YARN-8738
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.2.0
>Reporter: Sen Zhao
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-8738.001.patch, YARN-8738.002.patch
>
>
> If maxResources or minResources is configured as a negative number, the value 
> will be positive after parsing.
> If this is a problem, I will fix it. If not, 
> FairSchedulerConfiguration#parseNewStyleResource should parse negative numbers 
> the same way parseOldStyleResource does.
> cc:[~templedf], [~leftnoteasy]






[jira] [Commented] (YARN-9035) Allow better troubleshooting of FS container assignments and lack of container assignments

2018-12-06 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711944#comment-16711944
 ] 

Haibo Chen commented on YARN-9035:
--

[~wilfreds] probably has a much better idea of what is preferred from a 
support-ability perspective.

> Allow better troubleshooting of FS container assignments and lack of 
> container assignments
> --
>
> Key: YARN-9035
> URL: https://issues.apache.org/jira/browse/YARN-9035
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-9035.001.patch
>
>
> The call chain from {{FairScheduler.attemptScheduling}}, through 
> {{FSQueue}} (parent / leaf) {{assignContainer}} and down to 
> {{FSAppAttempt#assignContainer}} has many calls and many potential 
> conditions where {{Resources.none()}} can be returned, meaning a container is 
> not allocated.
>  A bunch of these empty assignments do not come with a debug log statement, 
> so it's very hard to tell what condition led the {{FairScheduler}} to a 
> decision where containers are not allocated.
>  On top of that, in many places, it's also difficult to tell why a 
> container was allocated to an app attempt.
> The goal is to have a common place (i.e. class) that does all the 
> logging, so users can conveniently control all the logs if they are curious 
> why (and why not) container assignments happened.
>  Also, it would be handy if readers of the log could easily tell which 
> {{AppAttempt}} a log record was created for; in other words, every log 
> record should include the ID of the application / app attempt, if possible.
>  
> Details of implementation: 
>  As most of the already in-place debug messages were protected by a condition 
> that checks whether the debug level is enabled on the loggers, I followed a 
> similar pattern. All the relevant log messages are created with the class 
> {{ResourceAssignment}}. 
>  This class is a wrapper for the assigned {{Resource}} object and has a 
> single logger, so clients should use its helper methods to create log 
> records. There is a helper method called {{shouldLogReservationActivity}} 
> that checks if DEBUG or TRACE level is activated on the logger. 
>  See the javadoc on this class for further information.
>  
> {{ResourceAssignment}} is also responsible for adding the app / app attempt ID 
> to every log record (with some exceptions).
>  A couple of check classes are introduced: they are responsible for running 
> and storing the results of checks that are dependencies of a successful 
> container allocation.






[jira] [Commented] (YARN-9075) Dynamically add or remove auxiliary services

2018-12-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712016#comment-16712016
 ] 

Hadoop QA commented on YARN-9075:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 15m 38s{color} 
| {color:red} root generated 1 new + 1490 unchanged - 0 fixed = 1491 total (was 
1490) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 44s{color} | {color:orange} root: The patch generated 56 new + 583 unchanged 
- 47 fixed = 639 total (was 630) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
47s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
25s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 54s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
29s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
28s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-mapreduce-client-shuffle in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
51s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 

[jira] [Updated] (YARN-9088) Non-exclusive labels break QueueMetrics

2018-12-06 Thread Brandon Scheller (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Scheller updated YARN-9088:
---
Description: 
QueueMetrics are broken (random/negative values) when non-exclusive labels are 
being used and unlabeled containers run on labeled nodes.

This is caused by the change in the patch here:

https://issues.apache.org/jira/browse/YARN-6467

It assumes that a container's label will be the same as the label of the node 
it is running on.

If you look within the patch, sometimes metrics are updated using the 
request.getNodeLabelExpression(). And sometimes they are updated using 
node.getPartition().

This means that in the case where the node is labeled while the container 
request isn't, these metrics only get updated when referring to the default 
queue. This stops metrics from balancing out and results in incorrect and 
negative values in QueueMetrics. 

  was:
QueueMetrics are broken (random/negative values) when non-exclusive labels are 
being used and unlabeled containers run on labeled nodes.

This is caused by the change in the patch here:

https://issues.apache.org/jira/browse/YARN-6467

It assumes that a container's label will be the same as the node's label that 
it is running on.

If you look within the patch, sometimes metrics are updated using the 
request.getNodeLabelExpression(). And sometimes they are updated using 
node.getPartition().

This means that in the case where the node is labeled while the request isn't, 
these metrics only get updated when referring to the default queue. This stops 
metrics from balancing out and results in incorrect and negative values in 
QueueMetrics. 


> Non-exclusive labels break QueueMetrics
> ---
>
> Key: YARN-9088
> URL: https://issues.apache.org/jira/browse/YARN-9088
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 2.8.5
>Reporter: Brandon Scheller
>Priority: Major
>  Labels: metrics, nodelabel
>
> QueueMetrics are broken (random/negative values) when non-exclusive labels 
> are being used and unlabeled containers run on labeled nodes.
> This is caused by the change in the patch here:
> https://issues.apache.org/jira/browse/YARN-6467
> It assumes that a container's label will be the same as the label of the node 
> it is running on.
> If you look within the patch, sometimes metrics are updated using the 
> request.getNodeLabelExpression(). And sometimes they are updated using 
> node.getPartition().
> This means that in the case where the node is labeled while the container 
> request isn't, these metrics only get updated when referring to the default 
> queue. This stops metrics from balancing out and results in incorrect and 
> negative values in QueueMetrics. 






[jira] [Created] (YARN-9089) Add Terminal Link to Service component instance page for UI2

2018-12-06 Thread Eric Yang (JIRA)
Eric Yang created YARN-9089:
---

 Summary: Add Terminal Link to Service component instance page for 
UI2
 Key: YARN-9089
 URL: https://issues.apache.org/jira/browse/YARN-9089
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: yarn-ui-v2
Reporter: Eric Yang


In UI2, Service > Component > Component Instance uses the Timeline server to 
aggregate information about a service component instance.  The Timeline server 
does not have the full information, like the port number of the node manager or 
the web protocol used by the node manager.  Some changes are required to 
aggregate node manager information into the Timeline server in order to compute 
the Terminal link.  To reduce the scope of YARN-8914, it is better to file this 
as a separate task.






[jira] [Commented] (YARN-9085) Guaranteed and MaxCapacity CSQueueMetrics

2018-12-06 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712055#comment-16712055
 ] 

Jonathan Hung commented on YARN-9085:
-

Thanks folks. Upon reviewing your comments I feel this logic is better handled 
in updateClusterResource (the original patch would update 
guaranteed/max-capacity every time a container is allocated, which is 
excessive). Uploaded 002 to address this.

[~zhz], {{setGuaranteedResources}} takes a partition to match the behavior of 
other metrics, so once multi-partition metrics are supported this code can all 
be changed at once.

[~erwaman], non-default partition metrics should be addressed in YARN-6492 (but 
that is not yet committed). GPU metrics should be addressable in YARN-8842 (but 
that is currently only in 3.3.0).

For the other comments, I took this logic out of 
CSQueueUtils#updateQueueStatistics and put it in ParentQueue/LeafQueue; if we 
support multi-partition metrics we can update this part to loop through all of 
that queue's partitions (but for now we can just pass in the default partition 
directly).
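A rough sketch of where this lands, assuming the {{setGuaranteedResources}} 
signature discussed above (the helper itself is illustrative, not the patch):

{code:java}
// Sketch only: refresh the guaranteed metric when the cluster resource
// changes, instead of on every container allocation.
void updateGuaranteedMetrics(CSQueue queue, Resource clusterResource) {
  Resource guaranteed = Resources.multiply(clusterResource,
      queue.getQueueCapacities().getAbsoluteCapacity());
  queue.getMetrics().setGuaranteedResources(
      RMNodeLabelsManager.NO_LABEL, guaranteed);
}
{code}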

> Guaranteed and MaxCapacity CSQueueMetrics
> -
>
> Key: YARN-9085
> URL: https://issues.apache.org/jira/browse/YARN-9085
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.3
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-9085.001.patch, YARN-9085.002.patch
>
>
> Would be useful to have Absolute Capacity/Absolute Max Capacity for queues to 
> compare against allocated/pending/etc.






[jira] [Commented] (YARN-9087) Better logging for initialization of Resource plugins

2018-12-06 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712108#comment-16712108
 ] 

Haibo Chen commented on YARN-9087:
--

Thanks [~snemeth] for the patch. A few nits:

1) Not sure why the logging in ContainerScheduler is removed. I think we should 
keep it. ContainerScheduler would try to bootstrap cgroups if cgroups have not 
been initialized elsewhere.

2) All the toString() methods hard-code the class name. We can use 
XXX.class.getName() instead in case the name changes.

3) IMO, only immutable fields that are initialized in the constructor should be 
included in toString(). It may be confusing/misleading if a value changes 
after toString() is called.
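For nits 2 and 3, a small sketch of what I mean (the field shown here is 
hypothetical):

{code:java}
@Override
public String toString() {
  // Derive the class name instead of hard-coding it, and print only
  // immutable, constructor-initialized fields.
  return GpuResourcePlugin.class.getName()
      + "{gpuDiscoverer=" + gpuDiscoverer + "}";
}
{code}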

> Better logging for initialization of Resource plugins
> -
>
> Key: YARN-9087
> URL: https://issues.apache.org/jira/browse/YARN-9087
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-9087.001.patch
>
>
> The patch includes the following enhancements for logging: 
> - Logging the initializer code of resource handlers in 
> {{LinuxContainerExecutor#init}}
> - Logging the initializer code of resource plugins in 
> {{ResourcePluginManager#initialize}}
> - Added toString to {{ResourceHandlerChain}}
> - Added toString to all subclasses of {{ResourcePlugin}}, 
> as they are printed in {{ResourcePluginManager#initialize}}
> - Added toString to all subclasses of {{ResourceHandler}}, 
> as they are printed as fields in {{LinuxContainerExecutor#init}}






[jira] [Commented] (YARN-8914) Add xtermjs to YARN UI2

2018-12-06 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712110#comment-16712110
 ] 

Eric Yang commented on YARN-8914:
-

While developing the Service > Components > Component Instance view, I realized 
the ajax call to the timeline server to compose the view takes an entirely 
different code path than the application attempt logic, which goes through the 
Resource Manager REST API.  In order to get the node manager port and the HTTP 
protocol used by the node manager, more changes are needed in the timeline 
server to aggregate the information before the service view can display the 
same information as the application attempt page.  It is better to open the 
Service > Components > Component Instance view as a separate task to reduce 
the scope of this JIRA.  Since the patch 9 serializer change is not necessary, 
[~billie.rinaldi], can you review patch 8 to see if it is good for commit?  
Thanks

> Add xtermjs to YARN UI2
> ---
>
> Key: YARN-8914
> URL: https://issues.apache.org/jira/browse/YARN-8914
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-8914.001.patch, YARN-8914.002.patch, 
> YARN-8914.003.patch, YARN-8914.004.patch, YARN-8914.005.patch, 
> YARN-8914.006.patch, YARN-8914.007.patch, YARN-8914.008.patch, 
> YARN-8914.009.patch
>
>
> In the container listing from UI2, we can add a link to connect to docker 
> container using xtermjs.






[jira] [Commented] (YARN-9085) Guaranteed and MaxCapacity CSQueueMetrics

2018-12-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712163#comment-16712163
 ] 

Hadoop QA commented on YARN-9085:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 29 new + 297 unchanged - 0 fixed = 326 total (was 297) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 11s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}149m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9085 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950894/YARN-9085.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 17e5cc7c8ef0 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8d882c3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/22793/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| whitespace | 

[jira] [Commented] (YARN-9008) Extend YARN distributed shell with file localization feature

2018-12-06 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712143#comment-16712143
 ] 

Haibo Chen commented on YARN-9008:
--

Thanks [~pbacsko] for the patch!  A few minor comments:

1) We are missing one unit test for uploading a non-existent file and one for a 
directory.

2) The new command-line option 'appname' should probably be renamed to 
'app_name' for the sake of consistency with the other options.

3) All IOExceptions are wrapped in a RuntimeException, but I am not sure what 
benefit that provides over directly throwing IOException.

4) I notice 2.9.1 is included in the affects versions. Do you intend to 
backport this to branch-2? If so, we should not use the stream API, which is 
only supported in Java 8.

5) The relative path of a file is composed of the app_name, appId and the file 
name. We have two copies of the same code in both ApplicationMaster and Client. 
If only one copy is changed in the future, the feature would break. Can we 
centralize them in one place (see the sketch below)?

6) 'localized_files' sounds very much like an implementation detail. 
MapReduce job clients can add lib files at submission time, which are under the 
hood uploaded to HDFS and localized for access. We have almost the same idea 
here. What do you think of renaming it to 'lib'?
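For point 5, a sketch of a single shared helper; the exact path layout is an 
assumption, not the actual patch:

{code:java}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.yarn.api.records.ApplicationId;

// Sketch only: one place composing the remote path, used by both
// Client and ApplicationMaster.
final class LocalizationPaths {
  private LocalizationPaths() {
  }

  static Path remotePathOf(String appName, ApplicationId appId,
      String fileName) {
    return new Path("/" + appName + "/" + appId + "/" + fileName);
  }
}
{code}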

> Extend YARN distributed shell with file localization feature
> 
>
> Key: YARN-9008
> URL: https://issues.apache.org/jira/browse/YARN-9008
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9008-001.patch, YARN-9008-002.patch, 
> YARN-9008-003.patch, YARN-9008-004.patch
>
>
> YARN distributed shell is a very handy tool to test various features of YARN.
> However, it lacks support for file localization, that is, defining files on 
> the command line that you wish to have localized remotely. This can be 
> extremely useful in certain scenarios.






[jira] [Updated] (YARN-9038) [CSI] Add ability to publish/unpublish volumes on node managers

2018-12-06 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-9038:
--
Labels: CSI  (was: )

> [CSI] Add ability to publish/unpublish volumes on node managers
> ---
>
> Key: YARN-9038
> URL: https://issues.apache.org/jira/browse/YARN-9038
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
>  Labels: CSI
> Attachments: YARN-9038.001.patch
>
>
> We need to add the ability to publish volumes on node managers to a staging 
> area under the NM's local dir, and then mount that path into the docker 
> container to make it visible inside the container.






[jira] [Updated] (YARN-8873) [YARN-8811] Add CSI java-based client library

2018-12-06 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8873:
--
Labels: CSI  (was: )

> [YARN-8811] Add CSI java-based client library
> -
>
> Key: YARN-8873
> URL: https://issues.apache.org/jira/browse/YARN-8873
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
>  Labels: CSI
> Fix For: 3.3.0
>
> Attachments: YARN-8873.001.patch, YARN-8873.002.patch, 
> YARN-8873.003.patch, YARN-8873.004.patch, YARN-8873.005.patch, 
> YARN-8873.006.patch
>
>
> Build a java-based client to talk to CSI drivers, through CSI gRPC services.






[jira] [Updated] (YARN-8811) Support Container Storage Interface (CSI) in YARN

2018-12-06 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8811:
--
Labels: CSI  (was: )

> Support Container Storage Interface (CSI) in YARN
> -
>
> Key: YARN-8811
> URL: https://issues.apache.org/jira/browse/YARN-8811
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
>  Labels: CSI
> Attachments: Support Container Storage Interface(CSI) in YARN_design 
> doc_20180921.pdf, Support Container Storage Interface(CSI) in YARN_design 
> doc_20180928.pdf, Support Container Storage Interface(CSI) in 
> YARN_design_doc_v3.pdf
>
>
> The Container Storage Interface (CSI) is a vendor neutral interface to bridge 
> Container Orchestrators and Storage Providers. With the adoption of CSI in 
> YARN, it will be easier to integrate 3rd party storage systems, and provide 
> the ability to attach persistent volumes for stateful applications.






[jira] [Updated] (YARN-9038) [CSI] Add ability to publish/unpublish volumes on node managers

2018-12-06 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-9038:
--
Summary: [CSI] Add ability to publish/unpublish volumes on node managers  
(was: Add ability to publish/unpublish volumes on node managers)

> [CSI] Add ability to publish/unpublish volumes on node managers
> ---
>
> Key: YARN-9038
> URL: https://issues.apache.org/jira/browse/YARN-9038
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
>  Labels: CSI
> Attachments: YARN-9038.001.patch
>
>
> We need to add the ability to publish volumes on node managers to a staging 
> area under the NM's local dir, and then mount that path into the docker 
> container to make it visible inside the container.






[jira] [Updated] (YARN-9058) [CSI] YARN service fail to launch due to CSI changes

2018-12-06 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-9058:
--
Labels: CSI  (was: )

> [CSI] YARN service fail to launch due to CSI changes
> 
>
> Key: YARN-9058
> URL: https://issues.apache.org/jira/browse/YARN-9058
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Weiwei Yang
>Priority: Blocker
>  Labels: CSI
> Fix For: 3.3.0
>
> Attachments: YARN-9058.001.patch
>
>
> YARN service AM fails to launch with error message:
> {code}
> 2018-11-26 19:32:33,486 [main] INFO  service.AbstractService - Service Client 
> AM Service failed in state STARTED
> java.lang.ClassCastException: 
> org.apache.hadoop.yarn.proto.ClientAMProtocol$ClientAMProtocolService$2 
> cannot be cast to csi.com.google.protobuf.BlockingService
> at 
> org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl.getServer(RpcServerFactoryPBImpl.java:132)
> at 
> org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC.getServer(HadoopYarnProtoRPC.java:65)
> at org.apache.hadoop.yarn.ipc.YarnRPC.getServer(YarnRPC.java:54)
> at 
> org.apache.hadoop.yarn.service.ClientAMService.serviceStart(ClientAMService.java:88)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
> at 
> org.apache.hadoop.yarn.service.ServiceMaster.lambda$serviceStart$0(ServiceMaster.java:267)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
> at 
> org.apache.hadoop.yarn.service.ServiceMaster.serviceStart(ServiceMaster.java:265)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.hadoop.yarn.service.ServiceMaster.main(ServiceMaster.java:346)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8953) [CSI] CSI driver adaptor module support in NodeManager

2018-12-06 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8953:
--
Labels: CSI  (was: )

> [CSI] CSI driver adaptor module support in NodeManager
> --
>
> Key: YARN-8953
> URL: https://issues.apache.org/jira/browse/YARN-8953
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
>  Labels: CSI
> Fix For: 3.3.0
>
> Attachments: YARN-8953.001.patch, YARN-8953.002.patch, 
> YARN-8953.003.patch, YARN-8953.004.patch, YARN-8953.005.patch, 
> YARN-8953.006.patch, csi_adaptor_workflow.png
>
>
> The CSI adaptor is a layer between YARN and a CSI driver; it transforms 
> YARN-internal concepts, boxes them according to the CSI protocol, and then 
> forwards the calls to the CSI driver. The adaptor should support the 
> controller, node, and identity services.
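> A rough sketch of that adaptor shape (class and method names are 
> illustrative, not the actual patch):
> {code:java}
> // Thin client to the CSI driver (in practice a gRPC stub).
> interface CsiDriverClient {
>   boolean validateVolumeCapabilities(String volumeId, String csiAccessMode);
> }
>
> class CsiAdaptorSketch {
>   private final CsiDriverClient client;
>
>   CsiAdaptorSketch(CsiDriverClient client) { this.client = client; }
>
>   // Translate a YARN-side access mode into CSI terms and forward the call.
>   boolean validate(String volumeId, String yarnAccessMode) {
>     String csiMode = "SINGLE_READ".equals(yarnAccessMode)
>         ? "SINGLE_NODE_READER_ONLY" : "SINGLE_NODE_WRITER";
>     return client.validateVolumeCapabilities(volumeId, csiMode);
>   }
> }
> {code}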



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9037) [CSI] Ignore volume resource in resource calculators based on tags

2018-12-06 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-9037:
--
Labels: CSI  (was: )

> [CSI] Ignore volume resource in resource calculators based on tags
> --
>
> Key: YARN-9037
> URL: https://issues.apache.org/jira/browse/YARN-9037
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Sunil Govindan
>Priority: Major
>  Labels: CSI
> Attachments: YARN-9037-002.patch, YARN-9037.001.patch
>
>
> The pre-provisioned volume is specified as a resource, but such a resource is 
> different from what YARN manages today, e.g. memory and vcores. It is 
> constrained by third-party storage systems, so it looks more like an 
> unmanaged resource. In that case, we need to skip such resources in the 
> resource calculators, which can be done based on the resource 
> tags.
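> A minimal sketch of the idea (the tag name and the data shapes below are 
> assumptions, not the actual patch):
> {code:java}
> import java.util.Collections;
> import java.util.Map;
> import java.util.Set;
>
> class TagAwareShareSketch {
>   static final String UNMANAGED_TAG = "system:csi-volume"; // assumed tag
>
>   // Dominant share over managed resources only; tagged volume resources
>   // are constrained by the external storage system, so they are skipped.
>   static double dominantShare(Map<String, Long> used, Map<String, Long> total,
>                               Map<String, Set<String>> tags) {
>     double max = 0.0;
>     for (Map.Entry<String, Long> e : total.entrySet()) {
>       Set<String> resourceTags =
>           tags.getOrDefault(e.getKey(), Collections.emptySet());
>       if (resourceTags.contains(UNMANAGED_TAG)) {
>         continue;
>       }
>       long u = used.getOrDefault(e.getKey(), 0L);
>       max = Math.max(max, (double) u / e.getValue());
>     }
>     return max;
>   }
> }
> {code}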



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8877) [CSI] Extend service spec to allow setting resource attributes

2018-12-06 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8877:
--
Labels: CSI  (was: )

> [CSI] Extend service spec to allow setting resource attributes
> --
>
> Key: YARN-8877
> URL: https://issues.apache.org/jira/browse/YARN-8877
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
>  Labels: CSI
> Fix For: 3.3.0
>
> Attachments: YARN-8877.001.patch, YARN-8877.002.patch
>
>
> Extend the YARN native service spec to support setting resource attributes in 
> the spec file.
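> One possible shape of such an entry in the spec, purely as a sketch (the 
> "attributes" key and the attribute name are assumptions, not the committed 
> format):
> {noformat}
> "resource": {
>   "cpus": 1,
>   "memory": "256",
>   "attributes": {
>     "yarn.io/csi-volume": "true"
>   }
> }
> {noformat}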



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8940) [CSI] Add volume as a top-level attribute in service spec

2018-12-06 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8940:
--
Labels: CSI  (was: )

> [CSI] Add volume as a top-level attribute in service spec 
> --
>
> Key: YARN-8940
> URL: https://issues.apache.org/jira/browse/YARN-8940
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
>  Labels: CSI
>
> Initial thought:
> {noformat}
> {
>   "name": "volume example",
>   "version": "1.0.0",
>   "description": "a volume simple example",
>   "components" :
> [
>   {
> "name": "",
> "number_of_containers": 1,
> "artifact": {
>   "id": "docker.io/centos:latest",
>   "type": "DOCKER"
> },
> "launch_command": "sleep,120",
> "configuration": {
>   "env": {
> "YARN_CONTAINER_RUNTIME_DOCKER_RUN_OVERRIDE_DISABLE":"true"
>   }
> },
> "resource": {
>   "cpus": 1,
>   "memory": "256"
> },
> "volumes": [
>   {
> "volume" : {
>   "type": "s3_csi",
>   "id": "5504d4a8-b246-11e8-94c2-026b17aa1190",
>   "capability" : {
> "min": "5Gi",
> "max": "100Gi"
>   },
>   "source_path": "s3://my_bucket/my", # optional for object stores
>   "mount_path": "/mnt/data", # required, the mount point in 
> docker container
>   "access_mode": "SINGLE_READ", # how the volume can be accessed
> }
>   }
> ]
>   }
>   ]
> }
> {noformat}
> Open for discussion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9057) CSI jar file should not bundle third party dependencies

2018-12-06 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-9057:
--
Labels: CSI  (was: )

> CSI jar file should not bundle third party dependencies
> ---
>
> Key: YARN-9057
> URL: https://issues.apache.org/jira/browse/YARN-9057
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Weiwei Yang
>Priority: Blocker
>  Labels: CSI
> Fix For: 3.3.0
>
> Attachments: YARN-9057.001.patch, YARN-9057.002.patch
>
>
> hadoop-yarn-csi-3.3.0-SNAPSHOT.jar bundles all third-party classes like a 
> shaded jar instead of only the CSI classes. This generates error messages 
> for the YARN CLI:
> {code}
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/usr/local/hadoop-3.3.0-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/local/hadoop-3.3.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-csi-3.3.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {code}
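> One direction for the fix, sketched with the maven-shade-plugin (the exact 
> includes below are assumptions, not the committed pom change; the relocation 
> pattern matches the csi.com.google.protobuf prefix seen in YARN-9058):
> {code:xml}
> <plugin>
>   <groupId>org.apache.maven.plugins</groupId>
>   <artifactId>maven-shade-plugin</artifactId>
>   <configuration>
>     <artifactSet>
>       <!-- bundle only the dependencies that must be relocated,
>            instead of every third-party class -->
>       <includes>
>         <include>com.google.protobuf:protobuf-java</include>
>       </includes>
>     </artifactSet>
>     <relocations>
>       <relocation>
>         <pattern>com.google.protobuf</pattern>
>         <shadedPattern>csi.com.google.protobuf</shadedPattern>
>       </relocation>
>     </relocations>
>   </configuration>
> </plugin>
> {code}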



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9086) [CSI] Run csi-driver-adaptor as aux service

2018-12-06 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-9086:
--
Labels: CSI  (was: )

> [CSI] Run csi-driver-adaptor as aux service
> ---
>
> Key: YARN-9086
> URL: https://issues.apache.org/jira/browse/YARN-9086
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
>  Labels: CSI
>
> Since the csi-driver-adaptor's runtime depends on protobuf3, we need to run 
> it with a separate class loader. Aux services provide such an ability, so 
> this ticket tracks the effort to run the adaptors as NM aux services.
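> A sketch of the intended wiring in yarn-site.xml (the service name, class 
> name, and jar path below are assumptions):
> {code:xml}
> <property>
>   <name>yarn.nodemanager.aux-services</name>
>   <value>csi-driver-adaptor</value>
> </property>
> <property>
>   <name>yarn.nodemanager.aux-services.csi-driver-adaptor.class</name>
>   <value>org.apache.hadoop.yarn.csi.adaptor.CsiAdaptorService</value>
> </property>
> <property>
>   <!-- a per-service classpath lets the adaptor load protobuf3 in
>        isolation from the NM's own classpath -->
>   <name>yarn.nodemanager.aux-services.csi-driver-adaptor.classpath</name>
>   <value>/path/to/hadoop-yarn-csi-with-dependencies.jar</value>
> </property>
> {code}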



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8902) [CSI] Add volume manager that manages CSI volume lifecycle

2018-12-06 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8902:
--
Labels: CSI  (was: )

> [CSI] Add volume manager that manages CSI volume lifecycle
> --
>
> Key: YARN-8902
> URL: https://issues.apache.org/jira/browse/YARN-8902
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
>  Labels: CSI
> Fix For: 3.3.0
>
> Attachments: YARN-8902.001.patch, YARN-8902.002.patch, 
> YARN-8902.003.patch, YARN-8902.004.patch, YARN-8902.005.patch, 
> YARN-8902.006.patch, YARN-8902.007.patch, YARN-8902.008.patch, 
> YARN-8902.009.patch
>
>
> The CSI volume manager is a service running in the RM process that manages 
> the lifecycle of all CSI volumes. The details about volume lifecycle states 
> can be found in the [CSI 
> spec|https://github.com/container-storage-interface/spec/blob/master/spec.md].
>  
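> A minimal sketch of the lifecycle handling (state names loosely follow the 
> CSI spec; the actual state machine in the patch may differ):
> {code:java}
> class VolumeLifecycleSketch {
>   enum State { NEW, VALIDATED, CREATED, NODE_READY, PUBLISHED }
>
>   private State state = State.NEW;
>
>   // happy-path transitions only, no error or recovery handling
>   void onValidated()   { state = State.VALIDATED; }
>   void onProvisioned() { state = State.CREATED; }
>   void onNodeReady()   { state = State.NODE_READY; }
>   void onPublished()   { state = State.PUBLISHED; }
>
>   State getState() { return state; }
> }
> {code}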



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8822) Nvidia-docker v2 support

2018-12-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711601#comment-16711601
 ] 

Hadoop QA commented on YARN-8822:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  4m 
40s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 21s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 7 new + 217 unchanged - 0 fixed = 224 total (was 217) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}128m 21s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m  
0s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 

[jira] [Updated] (YARN-9089) Add Terminal Link to Service component instance page for UI2

2018-12-06 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-9089:

Attachment: YARN-9089.001.patch

> Add Terminal Link to Service component instance page for UI2
> 
>
> Key: YARN-9089
> URL: https://issues.apache.org/jira/browse/YARN-9089
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Eric Yang
>Priority: Major
> Attachments: YARN-9089.001.patch
>
>
> In UI2, Service > Component > Component Instance uses the Timeline server to 
> aggregate information about a service component instance. The Timeline server 
> does not have the full information, such as the port number of the node 
> manager or the web protocol the node manager uses. It requires some changes 
> to aggregate node manager information into the Timeline server in order to 
> compute the Terminal link. To reduce the scope of YARN-8914, it is better to 
> file this as a separate task.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9089) Add Terminal Link to Service component instance page for UI2

2018-12-06 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712191#comment-16712191
 ] 

Eric Yang commented on YARN-9089:
-

I was able to find a shorter workaround to obtain the node manager port 
without refactoring the data that goes into the Timeline server. Patch 001 
works with YARN-8914 patch 008.

> Add Terminal Link to Service component instance page for UI2
> 
>
> Key: YARN-9089
> URL: https://issues.apache.org/jira/browse/YARN-9089
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Eric Yang
>Priority: Major
> Attachments: YARN-9089.001.patch
>
>
> In UI2, Service > Component > Component Instance uses the Timeline server to 
> aggregate information about a service component instance. The Timeline server 
> does not have the full information, such as the port number of the node 
> manager or the web protocol the node manager uses. It requires some changes 
> to aggregate node manager information into the Timeline server in order to 
> compute the Terminal link. To reduce the scope of YARN-8914, it is better to 
> file this as a separate task.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-9089) Add Terminal Link to Service component instance page for UI2

2018-12-06 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang reassigned YARN-9089:
---

Assignee: Eric Yang

> Add Terminal Link to Service component instance page for UI2
> 
>
> Key: YARN-9089
> URL: https://issues.apache.org/jira/browse/YARN-9089
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9089.001.patch
>
>
> In UI2, Service > Component > Component Instance uses the Timeline server to 
> aggregate information about a service component instance. The Timeline server 
> does not have the full information, such as the port number of the node 
> manager or the web protocol the node manager uses. It requires some changes 
> to aggregate node manager information into the Timeline server in order to 
> compute the Terminal link. To reduce the scope of YARN-8914, it is better to 
> file this as a separate task.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8822) Nvidia-docker v2 support

2018-12-06 Thread Charo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712247#comment-16712247
 ] 

Charo Zhang commented on YARN-8822:
---

[~leftnoteasy] According to your suggestions, I updated the 003 patch:
1. use the add_param_to_command_if_allowed method for the check
2. add some detail to the documentation about the new configs
3. refactor the test case for set_runtime
Besides, I will fix the checkstyle errors from the Jenkins output in a later patch.

> Nvidia-docker v2 support
> 
>
> Key: YARN-8822
> URL: https://issues.apache.org/jira/browse/YARN-8822
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.1
>Reporter: Zhankun Tang
>Assignee: Charo Zhang
>Priority: Critical
>  Labels: Docker
> Attachments: YARN-8822-branch-3.1.1.001.patch, YARN-8822.001.patch, 
> YARN-8822.002.patch, YARN-8822.003.patch
>
>
> To run a GPU container with Docker, we already have nvidia-docker v1 support, 
> but it is deprecated per 
> [here|https://github.com/NVIDIA/nvidia-docker/wiki/About-version-2.0]. We 
> should support nvidia-docker v2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8822) Nvidia-docker v2 support

2018-12-06 Thread Charo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charo Zhang updated YARN-8822:
--
Attachment: YARN-8822.004.patch

> Nvidia-docker v2 support
> 
>
> Key: YARN-8822
> URL: https://issues.apache.org/jira/browse/YARN-8822
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.1
>Reporter: Zhankun Tang
>Assignee: Charo Zhang
>Priority: Critical
>  Labels: Docker
> Attachments: YARN-8822-branch-3.1.1.001.patch, YARN-8822.001.patch, 
> YARN-8822.002.patch, YARN-8822.003.patch, YARN-8822.004.patch
>
>
> To run a GPU container with Docker, we already have nvidia-docker v1 support, 
> but it is deprecated per 
> [here|https://github.com/NVIDIA/nvidia-docker/wiki/About-version-2.0]. We 
> should support nvidia-docker v2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9051) Integrate multiple CustomResourceTypesConfigurationProvider implementations into one

2018-12-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712229#comment-16712229
 ] 

Hadoop QA commented on YARN-9051:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 15 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m  
7s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 27s{color} | {color:orange} root: The patch generated 9 new + 281 unchanged 
- 2 fixed = 290 total (was 283) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
27s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
15s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 28s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
30s{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}120m 53s{color} 
| {color:red} hadoop-mapreduce-client-jobclient in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
49s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}365m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9051 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950878/YARN-9051.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  

[jira] [Comment Edited] (YARN-8822) Nvidia-docker v2 support

2018-12-06 Thread Charo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712247#comment-16712247
 ] 

Charo Zhang edited comment on YARN-8822 at 12/7/18 2:09 AM:


[~leftnoteasy] According to your suggestions, I updated the 003 patch:
1. use the add_param_to_command_if_allowed method for the check
2. add some detail to the documentation about the new configs
3. refactor the test case for set_runtime
Besides, I will fix the checkstyle errors from the Jenkins output in the later 
004 patch.


was (Author: charo zhang):
[~leftnoteasy] According to your suggestions, i updated 003 patch:
1, use add_param_to_command_if_allowed  method to check 
2, add some detail in documentation about new configs.
3, refactor test case about set_runtime.
besides, i will modify checkstyle errors in Jenkins output at the later patch.

> Nvidia-docker v2 support
> 
>
> Key: YARN-8822
> URL: https://issues.apache.org/jira/browse/YARN-8822
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.1
>Reporter: Zhankun Tang
>Assignee: Charo Zhang
>Priority: Critical
>  Labels: Docker
> Attachments: YARN-8822-branch-3.1.1.001.patch, YARN-8822.001.patch, 
> YARN-8822.002.patch, YARN-8822.003.patch, YARN-8822.004.patch
>
>
> To run a GPU container with Docker, we already have nvidia-docker v1 support, 
> but it is deprecated per 
> [here|https://github.com/NVIDIA/nvidia-docker/wiki/About-version-2.0]. We 
> should support nvidia-docker v2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-9022) MiniYarnCluster 3.1.0 RESTAPI not working for some cases

2018-12-06 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth reassigned YARN-9022:


Assignee: Szilard Nemeth

> MiniYarnCluster 3.1.0 RESTAPI not working for some cases
> 
>
> Key: YARN-9022
> URL: https://issues.apache.org/jira/browse/YARN-9022
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.1.0
>Reporter: liehuo chen
>Assignee: Szilard Nemeth
>Priority: Major
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Actually I am not sure whether I should open this Jira in the Hadoop or the 
> Spark project.
> Testing Spark 2.4 RC5 with Hadoop 3.1.0, 4 tests failed in the test suite 
> org.apache.spark.deploy.yarn.YarnClusterSuite. The reason is that those tests 
> try to access logs from the UI, like: 
> [http://$RM_ADDRESS:49363/node/containerlogs/$container_id/user/stdout?start=-4096,|http://192.168.0.30:49363/node/containerlogs/container_1542175195899_0001_02_02/user/stdout?start=-4096,]
>  and they fail with the following message: 
> java.lang.AbstractMethodError: 
> javax.ws.rs.core.UriBuilder.uri(Ljava/lang/String;)Ljavax/ws/rs/core/UriBuilder;
>  at javax.ws.rs.core.UriBuilder.fromUri(UriBuilder.java:119) at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:911)
>  at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:875)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebAppFilter.doFilter(NMWebAppFilter.java:73)
>  at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:829)
>  at 
> com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:119)
>  at com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:133) at 
> com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:130) at 
> com.google.inject.servlet.GuiceFilter$Context.call(GuiceFilter.java:203) at 
> com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:130) at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>  at 
> org.apache.hadoop.security.http.XFrameOptionsFilter.doFilter(XFrameOptionsFilter.java:57)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>  at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)
>  at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>  at 
> org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:110)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>  at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1601)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>  at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) 
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) 
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) 
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) 
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>  at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>  at org.eclipse.jetty.server.Server.handle(Server.java:539) at 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333) at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251) 
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108) at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>  at 
> 

[jira] [Commented] (YARN-8822) Nvidia-docker v2 support

2018-12-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712390#comment-16712390
 ] 

Hadoop QA commented on YARN-8822:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}138m 57s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
39s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}255m 40s{color} | 
{color:black} 

[jira] [Commented] (YARN-8914) Add xtermjs to YARN UI2

2018-12-06 Thread Akhil PB (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712361#comment-16712361
 ] 

Akhil PB commented on YARN-8914:


[~eyang]  Please use the UI2 changes from patch 9. I have made it a bit 
cleaner by removing unnecessary code; the code in the models and serializers 
has been removed. Since the requestedUser is passed from the router to the 
view, the models won't have userInfo data.

> Add xtermjs to YARN UI2
> ---
>
> Key: YARN-8914
> URL: https://issues.apache.org/jira/browse/YARN-8914
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-8914.001.patch, YARN-8914.002.patch, 
> YARN-8914.003.patch, YARN-8914.004.patch, YARN-8914.005.patch, 
> YARN-8914.006.patch, YARN-8914.007.patch, YARN-8914.008.patch, 
> YARN-8914.009.patch
>
>
> In the container listing from UI2, we can add a link to connect to docker 
> container using xtermjs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8714) [Submarine] Support files/tarballs to be localized for a training job.

2018-12-06 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8714:
---
Attachment: YARN-8714-trunk.006.patch

> [Submarine] Support files/tarballs to be localized for a training job.
> --
>
> Key: YARN-8714
> URL: https://issues.apache.org/jira/browse/YARN-8714
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8714-WIP1-trunk-001.patch, 
> YARN-8714-WIP1-trunk-002.patch, YARN-8714-trunk.001.patch, 
> YARN-8714-trunk.002.patch, YARN-8714-trunk.003.patch, 
> YARN-8714-trunk.004.patch, YARN-8714-trunk.005.patch, 
> YARN-8714-trunk.006.patch
>
>
> See 
> [https://docs.google.com/document/d/199J4pB3blqgV9SCNvBbTqkEoQdjoyGMjESV4MktCo0k/edit#heading=h.vkxp9edl11m7],
>  {{job run --localization ...}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-9081) Update jackson from 1.9.13 to 2.x in hadoop-yarn-services-core

2018-12-06 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned YARN-9081:
---

Assignee: Akira Ajisaka

> Update jackson from 1.9.13 to 2.x in hadoop-yarn-services-core
> --
>
> Key: YARN-9081
> URL: https://issues.apache.org/jira/browse/YARN-9081
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: newbie
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8714) [Submarine] Support files/tarballs to be localized for a training job.

2018-12-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712411#comment-16712411
 ] 

Hadoop QA commented on YARN-8714:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine: 
The patch generated 3 new + 43 unchanged - 15 fixed = 46 total (was 58) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-yarn-submarine in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-8714 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950942/YARN-8714-trunk.006.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 50cb24542b4a 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6c852f2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/22795/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-submarine.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/22795/artifact/out/whitespace-eol.txt
 |
|  Test Results | 

[jira] [Created] (YARN-9087) Better logging for initialization of Resource plugins

2018-12-06 Thread Szilard Nemeth (JIRA)
Szilard Nemeth created YARN-9087:


 Summary: Better logging for initialization of Resource plugins
 Key: YARN-9087
 URL: https://issues.apache.org/jira/browse/YARN-9087
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: yarn
Reporter: Szilard Nemeth
Assignee: Szilard Nemeth


The patch includes the following enhancements for logging: 
- Log the initialization of resource handlers in 
{{LinuxContainerExecutor#init}}
- Log the initialization of resource plugins in 
{{ResourcePluginManager#initialize}}
- Added toString to {{ResourceHandlerChain}}
- Added toString to all subclasses of {{ResourcePlugin}}, as 
they are printed in {{ResourcePluginManager#initialize}}
- Added toString to all subclasses of {{ResourceHandler}}, as 
they are printed as a field in {{LinuxContainerExecutor#init}}
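A sketch of the kind of toString this adds (the class name and fields below 
are illustrative, not the actual patch):
{code:java}
public class GpuResourcePluginSketch {
  private final String resourceName = "yarn.io/gpu";
  private final boolean dockerRuntimeSupported = true;

  @Override
  public String toString() {
    // printed when ResourcePluginManager#initialize sets up the plugins
    return getClass().getSimpleName()
        + "{resourceName=" + resourceName
        + ", dockerRuntimeSupported=" + dockerRuntimeSupported + "}";
  }
}
{code}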



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8714) [Submarine] Support files/tarballs to be localized for a training job.

2018-12-06 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8714:
---
Attachment: YARN-8714-trunk.005.patch

> [Submarine] Support files/tarballs to be localized for a training job.
> --
>
> Key: YARN-8714
> URL: https://issues.apache.org/jira/browse/YARN-8714
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8714-WIP1-trunk-001.patch, 
> YARN-8714-WIP1-trunk-002.patch, YARN-8714-trunk.001.patch, 
> YARN-8714-trunk.002.patch, YARN-8714-trunk.003.patch, 
> YARN-8714-trunk.004.patch, YARN-8714-trunk.005.patch
>
>
> See 
> [https://docs.google.com/document/d/199J4pB3blqgV9SCNvBbTqkEoQdjoyGMjESV4MktCo0k/edit#heading=h.vkxp9edl11m7],
>  {{job run --localization ...}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8714) [Submarine] Support files/tarballs to be localized for a training job.

2018-12-06 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-8714:
---
Attachment: YARN-8714-trunk.007.patch

> [Submarine] Support files/tarballs to be localized for a training job.
> --
>
> Key: YARN-8714
> URL: https://issues.apache.org/jira/browse/YARN-8714
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-8714-WIP1-trunk-001.patch, 
> YARN-8714-WIP1-trunk-002.patch, YARN-8714-trunk.001.patch, 
> YARN-8714-trunk.002.patch, YARN-8714-trunk.003.patch, 
> YARN-8714-trunk.004.patch, YARN-8714-trunk.005.patch, 
> YARN-8714-trunk.006.patch, YARN-8714-trunk.007.patch
>
>
> See 
> [https://docs.google.com/document/d/199J4pB3blqgV9SCNvBbTqkEoQdjoyGMjESV4MktCo0k/edit#heading=h.vkxp9edl11m7],
>  {{job run --localization ...}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9081) Update jackson from 1.9.13 to 2.x in hadoop-yarn-services-core

2018-12-06 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-9081:

Attachment: YARN-9081.01.patch

> Update jackson from 1.9.13 to 2.x in hadoop-yarn-services-core
> --
>
> Key: YARN-9081
> URL: https://issues.apache.org/jira/browse/YARN-9081
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: newbie
> Attachments: YARN-9081.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9075) Dynamically add or remove auxiliary services

2018-12-06 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9075:
-
Attachment: YARN-9075.001.patch

> Dynamically add or remove auxiliary services
> 
>
> Key: YARN-9075
> URL: https://issues.apache.org/jira/browse/YARN-9075
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9075.001.patch, 
> YARN-9075_Dynamic_Aux_Services_V1.pdf
>
>
> It would be useful to support adding, removing, or updating auxiliary 
> services without requiring a restart of NMs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9087) Better logging for initialization of Resource plugins

2018-12-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711817#comment-16711817
 ] 

Hadoop QA commented on YARN-9087:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 0 new + 65 unchanged - 2 fixed = 65 total (was 67) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m  
7s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9087 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950867/YARN-9087.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c87bd5657415 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c03024a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22790/testReport/ |
| Max. process+thread count | 295 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 

[jira] [Comment Edited] (YARN-9087) Better logging for initialization of Resource plugins

2018-12-06 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711712#comment-16711712
 ] 

Szilard Nemeth edited comment on YARN-9087 at 12/6/18 5:11 PM:
---

I will most probably file more JIRAs around GPU support if I encounter room 
for improvement like this, but only if GPU support v1 won't be deprecated 
soon.
 To decide the extent of my contribution, I'm curious what the plan is for 
GPU support v2 (YARN-8820) and the new pluggable device plugin framework 
(YARN-8851).
 [~sunilg], [~leftnoteasy]: could you please give me a rough timeline for 
those?
 Do you intend to "deprecate" the code recently added with YARN-6223, or do 
you plan to keep GPU support v1 in place for an extended period of time?

Thanks!


was (Author: snemeth):
I will most probably file more JIRAs around GPU support if I encounter room 
for improvement like this, but only if GPU support v1 won't be deprecated 
soon.
 To decide the extent of my contribution, I'm curious what the plan is for 
GPU support v2 (YARN-8820) and the new pluggable device plugin framework 
(YARN-8851).
 [~sunilg], [~leftnoteasy]: could you please give me a rough timeline for 
those?
 Do you intend to "deprecate" the code recently added with YARN-6223, or do 
you plan to keep GPU support v1 in place for an extended period of time?

> Better logging for initialization of Resource plugins
> -
>
> Key: YARN-9087
> URL: https://issues.apache.org/jira/browse/YARN-9087
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-9087.001.patch
>
>
> The patch includes the following enhancements for logging: 
> - Logging in the initializer code of resource handlers in 
> {{LinuxContainerExecutor#init}}
> - Logging in the initializer code of resource plugins in 
> {{ResourcePluginManager#initialize}}
> - Added toString to {{ResourceHandlerChain}}
> - Added toString to all subclasses of {{ResourcePlugin}}, as they are 
> printed in {{ResourcePluginManager#initialize}}
> - Added toString to all subclasses of {{ResourceHandler}}, as they are 
> printed as fields in {{LinuxContainerExecutor#init}}
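
For illustration, a self-contained sketch of the toString pattern the 
description refers to; every name below except 
{{ResourcePluginManager#initialize}} is hypothetical and not taken from the 
patch:
{code:java}
import java.util.Arrays;
import java.util.List;

// Illustrative only: override toString() so the plugin list logged during
// initialization is readable instead of showing default Object hashes
// like ExamplePlugin@1b6d3586.
public class ToStringLoggingSketch {
  interface PluginLike { }

  static class ExamplePlugin implements PluginLike {
    private final String resourceName = "example/resource";

    @Override
    public String toString() {
      return "ExamplePlugin{resourceName=" + resourceName + "}";
    }
  }

  public static void main(String[] args) {
    List<PluginLike> plugins = Arrays.asList(new ExamplePlugin());
    // Stand-in for the LOG.info(...) call a method in the style of
    // ResourcePluginManager#initialize would emit:
    System.out.println("Initialized resource plugins: " + plugins);
  }
}
{code}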



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9087) Better logging for initialization of Resource plugins

2018-12-06 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-9087:
-
Attachment: YARN-9087.001.patch

> Better logging for initialization of Resource plugins
> -
>
> Key: YARN-9087
> URL: https://issues.apache.org/jira/browse/YARN-9087
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-9087.001.patch
>
>
> The patch includes the following enhancements for logging: 
> - Logging in the initializer code of resource handlers in 
> {{LinuxContainerExecutor#init}}
> - Logging in the initializer code of resource plugins in 
> {{ResourcePluginManager#initialize}}
> - Added toString to {{ResourceHandlerChain}}
> - Added toString to all subclasses of {{ResourcePlugin}}, as they are 
> printed in {{ResourcePluginManager#initialize}}
> - Added toString to all subclasses of {{ResourceHandler}}, as they are 
> printed as fields in {{LinuxContainerExecutor#init}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9085) Guaranteed and MaxCapacity CSQueueMetrics

2018-12-06 Thread Anthony Hsu (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711709#comment-16711709
 ] 

Anthony Hsu commented on YARN-9085:
---

Looks good overall. A few questions/comments:
 * In CSQueueMetrics.java, I see the comment

{code:java}
//Metrics updated only for "default" partition{code}
How come metrics are not updated for non-default partitions? Are any metrics 
available for non-default partitions?
 * What about GPU metrics? Can CSQueueMetrics collect those, too?
 * Regarding
{noformat}
 if (nodePartition == null) {
   for (String partition : Sets.union(queueCapacities.getNodePartitionsSet(),
       queueResourceUsage.getNodePartitionsSet())) {
-    updateUsedCapacity(rc, nlm.getResourceByLabel(partition, cluster),
-        partition, childQueue);
+    updateUsedCapacity(rc, partitionResource, partition, childQueue);
   }
+  updateConfiguredCapacities(rc, partitionResource, childQueue);
 } else {
-  updateUsedCapacity(rc, nlm.getResourceByLabel(nodePartition, cluster),
-      nodePartition, childQueue);
+  updateUsedCapacity(rc, partitionResource, nodePartition, childQueue);
 }
{noformat}
It seems to me the *updateConfiguredCapacities* call you added should be 
inside the for loop and should also take a *partition* parameter, like the 
*updateUsedCapacity* call does; in the future, metrics may be collected for 
non-default partitions as well.
Also, I think the *else* block should have an *updateConfiguredCapacities* 
call too (in case we collect non-default partition metrics in the future), 
as sketched below.
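
A minimal sketch of the suggested shape, assuming (hypothetically; this 
signature is not in the patch) that *updateConfiguredCapacities* can take 
the same *partition* parameter as *updateUsedCapacity*:
{code:java}
// Sketch only -- signatures are assumptions, not taken from the patch.
if (nodePartition == null) {
  for (String partition : Sets.union(queueCapacities.getNodePartitionsSet(),
      queueResourceUsage.getNodePartitionsSet())) {
    updateUsedCapacity(rc, partitionResource, partition, childQueue);
    // Track configured capacities per partition, inside the loop.
    updateConfiguredCapacities(rc, partitionResource, partition, childQueue);
  }
} else {
  updateUsedCapacity(rc, partitionResource, nodePartition, childQueue);
  updateConfiguredCapacities(rc, partitionResource, nodePartition, childQueue);
}
{code}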

> Guaranteed and MaxCapacity CSQueueMetrics
> -
>
> Key: YARN-9085
> URL: https://issues.apache.org/jira/browse/YARN-9085
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.3
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-9085.001.patch
>
>
> Would be useful to have Absolute Capacity/Absolute Max Capacity for queues to 
> compare against allocated/pending/etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8714) [Submarine] Support files/tarballs to be localized for a training job.

2018-12-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711711#comment-16711711
 ] 

Hadoop QA commented on YARN-8714:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine: 
The patch generated 3 new + 41 unchanged - 15 fixed = 44 total (was 56) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-submarine in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-8714 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950860/YARN-8714-trunk.005.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 561aace89153 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c03024a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/22789/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-submarine.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22789/testReport/ |
| Max. process+thread count | 413 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine 
U: 

[jira] [Commented] (YARN-9087) Better logging for initialization of Resource plugins

2018-12-06 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711712#comment-16711712
 ] 

Szilard Nemeth commented on YARN-9087:
--

I will most probably file more JIRAs around GPU support if I encounter room 
for improvement like this, but only if GPU support v1 won't be deprecated 
soon.
 To decide the extent of my contribution, I'm curious what the plan is for 
GPU support v2 (YARN-8820) and the new pluggable device plugin framework 
(YARN-8851).
 [~sunilg], [~leftnoteasy]: could you please give me a rough timeline for 
those?
 Do you intend to "deprecate" the code recently added with YARN-6223, or do 
you plan to keep GPU support v1 in place for an extended period of time?

> Better logging for initialization of Resource plugins
> -
>
> Key: YARN-9087
> URL: https://issues.apache.org/jira/browse/YARN-9087
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-9087.001.patch
>
>
> The patch includes the following enhancements for logging: 
> - Logging in the initializer code of resource handlers in 
> {{LinuxContainerExecutor#init}}
> - Logging in the initializer code of resource plugins in 
> {{ResourcePluginManager#initialize}}
> - Added toString to {{ResourceHandlerChain}}
> - Added toString to all subclasses of {{ResourcePlugin}}, as they are 
> printed in {{ResourcePluginManager#initialize}}
> - Added toString to all subclasses of {{ResourceHandler}}, as they are 
> printed as fields in {{LinuxContainerExecutor#init}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9022) MiniYarnCluster 3.1.0 RESTAPI not working for some cases

2018-12-06 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-9022:
-
Description: 
Actually I am not sure whether I should open this JIRA in the Hadoop or the 
Spark project.

Trying Spark 2.4 RC5 with Hadoop 3.1.0 fails 4 tests in the test suite 
org.apache.spark.deploy.yarn.YarnClusterSuite. The reason is that those tests 
try to access logs from the UI, e.g. 
[http://$RM_ADDRESS:49363/node/containerlogs/$container_id/user/stdout?start=-4096,|http://192.168.0.30:49363/node/containerlogs/container_1542175195899_0001_02_02/user/stdout?start=-4096,]
 and fail with the following message: 
{code:java}

java.lang.AbstractMethodError: javax.ws.rs.core.UriBuilder.uri(Ljava/lang/String;)Ljavax/ws/rs/core/UriBuilder;
	at javax.ws.rs.core.UriBuilder.fromUri(UriBuilder.java:119)
	at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:911)
	at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:875)
	at org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebAppFilter.doFilter(NMWebAppFilter.java:73)
	at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:829)
	at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:119)
	at com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:133)
	at com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:130)
	at com.google.inject.servlet.GuiceFilter$Context.call(GuiceFilter.java:203)
	at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:130)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
	at org.apache.hadoop.security.http.XFrameOptionsFilter.doFilter(XFrameOptionsFilter.java:57)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
	at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)
	at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
	at org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:110)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
	at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1601)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
	at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
	at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
	at org.eclipse.jetty.server.Server.handle(Server.java:539)
	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
	at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
	at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
	at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
	at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
	at java.lang.Thread.run(Thread.java:748)
{code}
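
For context: this kind of {{AbstractMethodError}} typically indicates that a 
JAX-RS 2.x {{javax.ws.rs-api}} jar is being resolved together with a JAX-RS 
1.x implementation (Hadoop still ships Jersey 1.x). 
{{UriBuilder.fromUri(String)}} in the 2.x API delegates to the abstract 
{{uri(String)}} method introduced in JAX-RS 2.0, which 1.x implementations 
never defined. A minimal, hypothetical probe (not part of this report) to 
check which implementation a given classpath resolves:
{code:java}
import javax.ws.rs.core.UriBuilder;

// Hypothetical diagnostic: on a consistent classpath this prints the
// concrete UriBuilder implementation class; with a 2.x API jar in front
// of a 1.x implementation it fails with the same AbstractMethodError as
// above, thrown from UriBuilder.fromUri(String).
public class UriBuilderProbe {
  public static void main(String[] args) {
    UriBuilder b = UriBuilder.fromUri("http://localhost:8042/node");
    System.out.println("UriBuilder implementation: " + b.getClass().getName());
  }
}
{code}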

  was:
Actually I am not sure whether I should open this JIRA in the Hadoop or the 
Spark project.

Trying Spark 2.4 RC5 with Hadoop 3.1.0 fails 4 tests in the test suite 
org.apache.spark.deploy.yarn.YarnClusterSuite. The reason is that those tests 

[jira] [Commented] (YARN-9087) Better logging for initialization of Resource plugins

2018-12-06 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711837#comment-16711837
 ] 

Wangda Tan commented on YARN-9087:
--

[~snemeth], the device plugin framework is for future plugins; we won't 
"deprecate" the existing GPU implementation. I do expect, though, that once 
the device plugin framework becomes ready, we can refactor towards a better, 
more maintainable code structure.

The nvidia-docker plugin has two versions, v1 and v2. As of now we're using 
the v1 nvidia-docker plugin, which has been deprecated by NVIDIA; there's a 
patch to support the v2 plugin (YARN-8822). As with the existing GPU 
implementation, once the device plugin framework becomes ready, we will 
refactor the code to use it, if the effort is reasonable.

> Better logging for initialization of Resource plugins
> -
>
> Key: YARN-9087
> URL: https://issues.apache.org/jira/browse/YARN-9087
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-9087.001.patch
>
>
> The patch includes the following enhancements for logging: 
> - Logging in the initializer code of resource handlers in 
> {{LinuxContainerExecutor#init}}
> - Logging in the initializer code of resource plugins in 
> {{ResourcePluginManager#initialize}}
> - Added toString to {{ResourceHandlerChain}}
> - Added toString to all subclasses of {{ResourcePlugin}}, as they are 
> printed in {{ResourcePluginManager#initialize}}
> - Added toString to all subclasses of {{ResourceHandler}}, as they are 
> printed as fields in {{LinuxContainerExecutor#init}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org