[jira] [Updated] (YARN-4220) [Storage implementation] Support getEntities with only Application id but no userId

2016-08-25 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-4220:

Summary: [Storage implementation] Support getEntities with only Application 
id but no userId  (was: [Storage implementation] Support getEntities with only 
Application id but no flow and flow run ID)

> [Storage implementation] Support getEntities with only Application id but no 
> userId
> ---
>
> Key: YARN-4220
> URL: https://issues.apache.org/jira/browse/YARN-4220
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
>  Labels: YARN-5355
>
> Currently we're enforcing flow and flowrun id to be non-null values on 
> {{getEntities}}. We can actually query the appToFlow table to figure out an 
> application's flow id and flowrun id if they're missing. This will simplify 
> normal queries. 
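
A minimal sketch of the lookup the description suggests, assuming a hypothetical {{appToFlowTable}} helper around the appToFlow HBase table (all names here are illustrative, not the committed implementation):

{code}
// Sketch only: fill in missing flow context from the appToFlow table so that
// getEntities can proceed with just an application id.
TimelineReaderContext defaultFlowContext(TimelineReaderContext context)
    throws IOException {
  if (context.getFlowName() == null || context.getFlowRunId() == null) {
    // hypothetical helper wrapping a get against the appToFlow table
    FlowContext flow =
        appToFlowTable.lookup(context.getClusterId(), context.getAppId());
    context.setUserId(flow.getUserId());
    context.setFlowName(flow.getFlowName());
    context.setFlowRunId(flow.getFlowRunId());
  }
  return context;  // validateParams() no longer sees null flow/user fields
}
{code}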






[jira] [Commented] (YARN-4220) [Storage implementation] Support getEntities with only Application id but no flow and flow run ID

2016-08-25 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15438467#comment-15438467
 ] 

Rohith Sharma K S commented on YARN-4220:
-

In Hadoop alpha-1, querying for system entities such as YARN_APPLICATION and 
YARN_FLOW_RUN throws a NullPointerException.
GET 
http://localhost:8188/ws/v2/timeline/apps/application_1471931266232_0024/entities
{noformat}
Caused by: java.lang.NullPointerException: userId shouldn't be null
at 
com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
at 
org.apache.hadoop.yarn.server.timelineservice.storage.reader.ApplicationEntityReader.validateParams(ApplicationEntityReader.java:336)
at 
org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReader.readEntities(TimelineEntityReader.java:249)
at 
org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl.getEntities(HBaseTimelineReaderImpl.java:86)
at 
org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderManager.getEntities(TimelineReaderManager.java:141)
at 
org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderWebServices.getEntities(TimelineReaderWebServices.java:562)
{noformat}

{noformat}
Caused by: java.lang.NullPointerException: userId shouldn't be null
at 
com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
at 
org.apache.hadoop.yarn.server.timelineservice.storage.reader.FlowRunEntityReader.validateParams(FlowRunEntityReader.java:89)
at 
org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReader.readEntities(TimelineEntityReader.java:249)
at 
org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl.getEntities(HBaseTimelineReaderImpl.java:86)
at 
org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderManager.getEntities(TimelineReaderManager.java:141)
at 
org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderWebServices.getEntities(TimelineReaderWebServices.java:562)
{noformat}

> [Storage implementation] Support getEntities with only Application id but no 
> flow and flow run ID
> -
>
> Key: YARN-4220
> URL: https://issues.apache.org/jira/browse/YARN-4220
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
>  Labels: YARN-5355
>
> Currently we're enforcing flow and flowrun id to be non-null values on 
> {{getEntities}}. We can actually query the appToFlow table to figure out an 
> application's flow id and flowrun id if they're missing. This will simplify 
> normal queries. 






[jira] [Resolved] (YARN-5562) [Atsv2] system entities retrieval in REST throws NPE

2016-08-25 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S resolved YARN-5562.
-
Resolution: Duplicate

In the weekly meeting with the ATSv2 folks, we discussed this scenario and found 
that YARN-4220 is intended to solve the same case. I will close this JIRA and put 
the recent exception trace in YARN-4220.

> [Atsv2] system entities retrieval in REST throws NPE
> 
>
> Key: YARN-5562
> URL: https://issues.apache.org/jira/browse/YARN-5562
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>
> It is seen that any attempt to retrieve system entities throws an NPE. URL: 
> /ws/v2/timeline/apps/{app-id}/entities 
> # for YARN_APPLICATION and YARN_FLOW_RUN
> {noformat}
> {
> "exception": "WebApplicationException",
> "message": "java.lang.NullPointerException: userId shouldn't be null",
> "javaClassName": "javax.ws.rs.WebApplicationException"
> }
> {noformat}
> Maybe these entities can be skipped if they are not intended to be used via the 
> REST APIs.






[jira] [Commented] (YARN-3671) Integrate Federation services with ResourceManager

2016-08-25 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15438463#comment-15438463
 ] 

Jian He commented on YARN-3671:
---

bq. RM_CLUSTER_ID is currently used for HA but Federation can work both with 
and without HA 
RM_CLUSTER_ID is not used in non-HA mode because it's not needed there, but it 
can also be used without HA. My take is that these seem to be two configurations 
for the same purpose of identifying a cluster; or did I miss a certain use case?

> Integrate Federation services with ResourceManager
> --
>
> Key: YARN-3671
> URL: https://issues.apache.org/jira/browse/YARN-3671
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-3671-YARN-2915-v1.patch, 
> YARN-3671-YARN-2915-v2.patch
>
>
> This JIRA proposes adding the ability to turn on Federation services, like the 
> StateStore and cluster membership heartbeat, in the RM.






[jira] [Commented] (YARN-5545) App submit failure on queue with label when default queue partition capacity is zero

2016-08-25 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15438450#comment-15438450
 ] 

Bibin A Chundatt commented on YARN-5545:


[~leftnoteasy], could you please share your thoughts?

> App submit failure on queue with label when default queue partition capacity 
> is zero
> 
>
> Key: YARN-5545
> URL: https://issues.apache.org/jira/browse/YARN-5545
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: capacity-scheduler.xml
>
>
> Configure the capacity scheduler:
> yarn.scheduler.capacity.root.default.capacity=0
> yarn.scheduler.capacity.root.queue1.accessible-node-labels.labelx.capacity=50
> yarn.scheduler.capacity.root.default.accessible-node-labels.labelx.capacity=50
> Submit an application as below:
> ./yarn jar 
> ../share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha2-SNAPSHOT-tests.jar
>  sleep -Dmapreduce.job.node-label-expression=labelx 
> -Dmapreduce.job.queuename=default -m 1 -r 1 -mt 1000 -rt 1
> {noformat}
> 2016-08-21 18:21:31,375 INFO mapreduce.JobSubmitter: Cleaning up the staging 
> area /tmp/hadoop-yarn/staging/root/.staging/job_1471670113386_0001
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed 
> to submit application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:316)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:255)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1344)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1790)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1341)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1362)
>   at org.apache.hadoop.mapreduce.SleepJob.run(SleepJob.java:273)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.mapreduce.SleepJob.main(SleepJob.java:194)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>   at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>   at 
> org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
>   at 
> org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit 
> application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:286)
>   at 
> org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:296)
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:301)
>   ... 25 more
> {noformat}
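
The "already has 0 applications" message follows from how the CapacityScheduler 
sizes a leaf queue's application limit: when no per-queue 
{{maximum-applications}} is set, the limit is scaled by the queue's absolute 
capacity, which is 0 here for the default partition. A rough sketch of the 
arithmetic (illustrative values, not the exact RM code):

{code}
// Hedged sketch: why root.default.capacity=0 leads to a 0 application limit.
int maxSystemApps = 10000;       // yarn.scheduler.capacity.maximum-applications (default)
float absoluteCapacity = 0.0f;   // root.default.capacity = 0 in the default partition
int maxApplications = (int) (maxSystemApps * absoluteCapacity);  // == 0

// Paraphrased submission check in the leaf queue:
// if (getNumApplications() >= maxApplications) -> reject with AccessControlException
{code}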






[jira] [Commented] (YARN-5564) Fix typo in RM_SCHEDULER_RESERVATION_THRESHOLD_INCREMENT_MULTIPLE

2016-08-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15438420#comment-15438420
 ] 

Hudson commented on YARN-5564:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10351 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10351/])
YARN-5564. Fix typo in (naganarasimha_gr: rev 
27c3b86252386c9c064a6420b3c650644cbb9ef3)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairSchedulerPreemption.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerTestBase.java


> Fix typo in RM_SCHEDULER_RESERVATION_THRESHOLD_INCREMENT_MULTIPLE
> -
>
> Key: YARN-5564
> URL: https://issues.apache.org/jira/browse/YARN-5564
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Trivial
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5564.001.patch
>
>
> The variable 
> RM_SCHEDULER_RESERVATION_THRESHOLD_INCERMENT_MULTIPLE
> has a typo in the "INCREMENT" part.






[jira] [Updated] (YARN-5564) Fix typo in RM_SCHEDULER_RESERVATION_THRESHOLD_INCREMENT_MULTIPLE

2016-08-25 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-5564:

Fix Version/s: 3.0.0-alpha2

> Fix typo in RM_SCHEDULER_RESERVATION_THRESHOLD_INCREMENT_MULTIPLE
> -
>
> Key: YARN-5564
> URL: https://issues.apache.org/jira/browse/YARN-5564
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Trivial
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5564.001.patch
>
>
> The variable 
> RM_SCHEDULER_RESERVATION_THRESHOLD_INCERMENT_MULTIPLE
> has a typo in the "INCREMENT" part.






[jira] [Comment Edited] (YARN-5564) Fix typo in RM_SCHEDULER_RESERVATION_THRESHOLD_INCREMENT_MULTIPLE

2016-08-25 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15438403#comment-15438403
 ] 

Naganarasimha G R edited comment on YARN-5564 at 8/26/16 3:27 AM:
--

Thanks [~rchiang] for the contribution and [~yufeigu] for the review; committed 
it to trunk and branch-2.


was (Author: naganarasimha):
Thanks [~rchiang] for the contribution and @y

> Fix typo in RM_SCHEDULER_RESERVATION_THRESHOLD_INCREMENT_MULTIPLE
> -
>
> Key: YARN-5564
> URL: https://issues.apache.org/jira/browse/YARN-5564
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Trivial
> Attachments: YARN-5564.001.patch
>
>
> The variable 
> RM_SCHEDULER_RESERVATION_THRESHOLD_INCERMENT_MULTIPLE
> has a typo in the "INCREMENT" part.






[jira] [Commented] (YARN-5564) Fix typo in RM_SCHEDULER_RESERVATION_THRESHOLD_INCREMENT_MULTIPLE

2016-08-25 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15438403#comment-15438403
 ] 

Naganarasimha G R commented on YARN-5564:
-

Thanks [~rchiang] for the contribution and @y

> Fix typo in RM_SCHEDULER_RESERVATION_THRESHOLD_INCREMENT_MULTIPLE
> -
>
> Key: YARN-5564
> URL: https://issues.apache.org/jira/browse/YARN-5564
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Trivial
> Attachments: YARN-5564.001.patch
>
>
> The variable 
> RM_SCHEDULER_RESERVATION_THRESHOLD_INCERMENT_MULTIPLE
> has a typo in the "INCREMENT" part.






[jira] [Updated] (YARN-5564) Fix typo in RM_SCHEDULER_RESERVATION_THRESHOLD_INCREMENT_MULTIPLE

2016-08-25 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-5564:

Summary: Fix typo in RM_SCHEDULER_RESERVATION_THRESHOLD_INCREMENT_MULTIPLE  
(was: Fix typo in .RM_SCHEDULER_RESERVATION_THRESHOLD_INCREMENT_MULTIPLE)

> Fix typo in RM_SCHEDULER_RESERVATION_THRESHOLD_INCREMENT_MULTIPLE
> -
>
> Key: YARN-5564
> URL: https://issues.apache.org/jira/browse/YARN-5564
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Trivial
> Attachments: YARN-5564.001.patch
>
>
> The variable 
> RM_SCHEDULER_RESERVATION_THRESHOLD_INCERMENT_MULTIPLE
> has a typo in the "INCREMENT" part.






[jira] [Commented] (YARN-1503) Support making additional 'LocalResources' available to running containers

2016-08-25 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15438392#comment-15438392
 ] 

Jian He commented on YARN-1503:
---

It is differentiated. To clarify, I'm referring here to the re-localization 
process, not the normal localization. For normal container localization, we can 
keep the behavior the same as today. For re-localization, i.e. localizing 
resources while the container is running, the container should not fail if the 
localization process fails. The AM just gets a notification that the 
localization failed, and the AM itself chooses to ignore, retry, or fail the 
task depending on the use case.

> Support making additional 'LocalResources' available to running containers
> --
>
> Key: YARN-1503
> URL: https://issues.apache.org/jira/browse/YARN-1503
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Jian He
> Attachments: Continuous-resource-localization.pdf
>
>
> We have a use case where additional resources (jars, libraries, etc.) need to 
> be made available to an already running container. Ideally, we'd like this to 
> be done via YARN (instead of having potentially multiple containers per node 
> download resources on their own).
> Proposal:
>   The NM to support an additional API where a list of resources can be specified. 
> Something like "localizeResource(ContainerId, Map)".
>   The NM would also require an additional API to get state for these resources - 
> "getLocalizationState(ContainerId)" - which returns the current state of all 
> local resources for the specified container(s).
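
A rough sketch of what the two proposed calls could look like as Java signatures 
(assumptions drawn from the prose above, not a committed API; YARN-5557 tracks 
the actual protocol change):

{code}
// Illustrative only; interface, method names, and signatures are assumptions.
public interface ContainerReLocalization {
  // Localize additional resources for an already running container.
  void localizeResource(ContainerId containerId,
      Map<String, LocalResource> resources) throws YarnException, IOException;

  // Return the current state of all local resources of the container.
  Map<String, LocalizationState> getLocalizationState(ContainerId containerId)
      throws YarnException, IOException;
}
{code}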






[jira] [Commented] (YARN-5564) Fix typo in .RM_SCHEDULER_RESERVATION_THRESHOLD_INCREMENT_MULTIPLE

2016-08-25 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15438387#comment-15438387
 ] 

Naganarasimha G R commented on YARN-5564:
-

Thanks [~rchiang], Simple fix committing this in !

> Fix typo in .RM_SCHEDULER_RESERVATION_THRESHOLD_INCREMENT_MULTIPLE
> --
>
> Key: YARN-5564
> URL: https://issues.apache.org/jira/browse/YARN-5564
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Trivial
> Attachments: YARN-5564.001.patch
>
>
> The variable 
> RM_SCHEDULER_RESERVATION_THRESHOLD_INCERMENT_MULTIPLE
> has a typo in the "INCREMENT" part.






[jira] [Commented] (YARN-5557) Add localize API to the ContainerManagementProtocol

2016-08-25 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15438379#comment-15438379
 ] 

Jian He commented on YARN-5557:
---

bq. I think a resource string should be unique within the same NM here.
Yes, it is unique. That's the assumption under which containers currently use it. 

> Add localize API to the ContainerManagementProtocol
> ---
>
> Key: YARN-5557
> URL: https://issues.apache.org/jira/browse/YARN-5557
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5557.1.patch, YARN-5557.2.patch, YARN-5557.3.patch, 
> YARN-5557.4.patch
>
>
> A new localize API for localizing new resources. 






[jira] [Issue Comment Deleted] (YARN-5557) Add localize API to the ContainerManagementProtocol

2016-08-25 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-5557:
--
Comment: was deleted

(was: bq. I think a resource string should be unique within the same NM here.
yes, it is unique. That's the assumption how container currently use it. )

> Add localize API to the ContainerManagementProtocol
> ---
>
> Key: YARN-5557
> URL: https://issues.apache.org/jira/browse/YARN-5557
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5557.1.patch, YARN-5557.2.patch, YARN-5557.3.patch, 
> YARN-5557.4.patch
>
>
> A new localize API for localizing new resources. 






[jira] [Commented] (YARN-3940) Application moveToQueue should check NodeLabel permission

2016-08-25 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15438377#comment-15438377
 ] 

Naganarasimha G R commented on YARN-3940:
-

[~sunilg] & [~rohithsharma], if there are no other comments on the latest patch, 
I plan to go ahead and commit it.

> Application moveToQueue should check NodeLabel permission 
> --
>
> Key: YARN-3940
> URL: https://issues.apache.org/jira/browse/YARN-3940
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: 0001-YARN-3940.patch, 0002-YARN-3940.patch, 
> 0003-YARN-3940.patch, 0004-YARN-3940.patch, 0005-YARN-3940.patch, 
> 0006-YARN-3940.patch, YARN-3940.0007.patch, YARN-3940.0008.patch, 
> YARN-3940.0009.patch
>
>
> Configure the capacity scheduler.
> Configure a node label and submit an application with {{queue=A Label=X}}.
> Move the application to queue {{B}}, which does not have access to label {{x}}.
> {code}
> 2015-07-20 19:46:19,626 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
>  Application attempt appattempt_1437385548409_0005_01 released container 
> container_e08_1437385548409_0005_01_02 on node: host: 
> host-10-19-92-117:64318 #containers=1 available= 
> used= with event: KILL
> 2015-07-20 19:46:20,970 WARN 
> org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: 
> Invalid resource ask by application appattempt_1437385548409_0005_01
> org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid 
> resource request, queue=b1 doesn't have permission to access all labels in 
> resource request. labelExpression of resource request=x. Queue labels=y
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:304)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:234)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndvalidateRequest(SchedulerUtils.java:250)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.normalizeAndValidateRequests(RMServerUtils.java:106)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:515)
> at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
> at 
> org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2168)
> {code}
> The same exception will be thrown until the *heartbeat timeout*.
> Then the application state will be updated to *FAILED*.
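
A hedged sketch of the missing check in the move path (accessor names are 
assumptions; {{RMNodeLabelsManager.ANY}} is the real wildcard label):

{code}
// Illustrative only: validate label access before accepting the move.
Set<String> accessible = targetQueue.getAccessibleNodeLabels();
String appLabel =
    app.getApplicationSubmissionContext().getNodeLabelExpression();
if (appLabel != null && !appLabel.isEmpty() && accessible != null
    && !accessible.contains(appLabel)
    && !accessible.contains(RMNodeLabelsManager.ANY)) {
  throw new YarnException("Queue " + targetQueue.getQueueName()
      + " does not have permission to access label " + appLabel
      + "; rejecting the move");
}
{code}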






[jira] [Commented] (YARN-5557) Add localize API to the ContainerManagementProtocol

2016-08-25 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15438380#comment-15438380
 ] 

Jian He commented on YARN-5557:
---

bq. I think a resource string should be unique within the same NM here.
Yes, it is unique. That's the assumption under which containers currently use it. 

> Add localize API to the ContainerManagementProtocol
> ---
>
> Key: YARN-5557
> URL: https://issues.apache.org/jira/browse/YARN-5557
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5557.1.patch, YARN-5557.2.patch, YARN-5557.3.patch, 
> YARN-5557.4.patch
>
>
> A new localize API for localizing new resources. 






[jira] [Commented] (YARN-4597) Add SCHEDULE to NM container lifecycle

2016-08-25 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15438214#comment-15438214
 ] 

Arun Suresh commented on YARN-4597:
---

Based on discussions with [~kasha], [~kkaranasos], and [~subru], I propose the 
following:
# Rename *SCHEDULE* (as proposed by this JIRA) to *QUEUED*.
# Move the management of queued containers from {{QueuingContainerManagerImpl}} 
into the local scheduler.
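
For orientation, a simplified sketch of where the proposed stage would sit in the 
NM container lifecycle (state names other than QUEUED are illustrative, not the 
actual enum):

{code}
// Simplified illustration; the real NM lifecycle has more states/transitions.
enum NMContainerStage {
  NEW,
  LOCALIZING,
  QUEUED,   // proposed: accepted and held by the local scheduler, not yet launched
  RUNNING,
  EXITED,
  DONE
}
{code}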


> Add SCHEDULE to NM container lifecycle
> --
>
> Key: YARN-4597
> URL: https://issues.apache.org/jira/browse/YARN-4597
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Chris Douglas
>Assignee: Arun Suresh
>
> Currently, the NM immediately launches containers after resource 
> localization. Several features could be more cleanly implemented if the NM 
> included a separate stage for reserving resources.






[jira] [Commented] (YARN-5564) Fix typo in .RM_SCHEDULER_RESERVATION_THRESHOLD_INCREMENT_MULTIPLE

2016-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15438181#comment-15438181
 ] 

Hadoop QA commented on YARN-5564:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 37m 32s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 49s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12825550/YARN-5564.001.patch |
| JIRA Issue | YARN-5564 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 03ab5d116f68 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 81485db |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12902/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12902/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Fix typo in .RM_SCHEDULER_RESERVATION_THRESHOLD_INCREMENT_MULTIPLE
> --
>
> Key: YARN-5564
> URL: https://issues.apache.org/jira/browse/YARN-5564
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Trivial
> Attachments: YARN-5564.001.patch
>
>
> The variable 
> 

[jira] [Commented] (YARN-5564) Fix typo in .RM_SCHEDULER_RESERVATION_THRESHOLD_INCREMENT_MULTIPLE

2016-08-25 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15438169#comment-15438169
 ] 

Yufei Gu commented on YARN-5564:


LGTM (non-binding)

> Fix typo in .RM_SCHEDULER_RESERVATION_THRESHOLD_INCREMENT_MULTIPLE
> --
>
> Key: YARN-5564
> URL: https://issues.apache.org/jira/browse/YARN-5564
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Trivial
> Attachments: YARN-5564.001.patch
>
>
> The variable 
> RM_SCHEDULER_RESERVATION_THRESHOLD_INCERMENT_MULTIPLE
> has a typo in the "INCREMENT" part.






[jira] [Created] (YARN-5565) Capacity Scheduler not assigning value correctly.

2016-08-25 Thread gurmukh singh (JIRA)
gurmukh singh created YARN-5565:
---

 Summary: Capacity Scheduler not assigning value correctly.
 Key: YARN-5565
 URL: https://issues.apache.org/jira/browse/YARN-5565
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacity scheduler
Affects Versions: 2.7.2
 Environment: Centos 6.7
Reporter: gurmukh singh


Hi,

I was testing and found that the value assigned in the scheduler configuration 
is not consistent with what the ResourceManager assigns.

I set the configuration as below; I understand that it is a Java float, but 
the rounding is still not correct.

capacity-scheduler.xml

<property>
  <name>yarn.scheduler.capacity.q1.capacity</name>
  <value>7.142857142857143</value>
</property>

In Java: System.err.println(7.142857142857143f) ===> 7.142857

But instead the ResourceManager is assigning 7.1428566.

Tested this on Hadoop 2.7.2.
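
The gap is easy to reproduce outside the RM. A minimal standalone check (plain 
Java, independent of any scheduler code):

{code}
public class FloatCapacity {
  public static void main(String[] args) {
    // How a capacity stored as float sees the configured value:
    float f = Float.parseFloat("7.142857142857143");
    System.err.println(f);            // 7.142857  (nearest float, shortest decimal)
    System.err.println((double) f);   // 7.142857074737549 (the same float, widened)
    // One float ulp at this magnitude is ~4.8e-7, the same order as the gap
    // between 7.142857 and the 7.1428566 reported by the ResourceManager:
    System.err.println(Math.ulp(f));  // 4.7683716E-7
  }
}
{code}

Any further float arithmetic on the stored value (for example normalizing 
against 100) can shift the result by an ulp or two, which is enough to produce 
a value like 7.1428566.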






[jira] [Comment Edited] (YARN-5551) Ignore deleted file mapping from memory computation when smaps is enabled

2016-08-25 Thread Rajesh Balamohan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15438057#comment-15438057
 ] 

Rajesh Balamohan edited comment on YARN-5551 at 8/25/16 11:37 PM:
--

This patch worked for the scenario we ran into.

If the memory mapping of a file has anon=0, should that cause the process to be 
killed?

A more generic patch would be to figure out whether a memory mapping with anon=0 
should be the deciding factor for killing the process.


was (Author: rajesh.balamohan):
This patch worked for the scenario we ran into. 

If memory mapping of a file is anon=0, should that cause the process to be 
killed. 

A more generic patch would be figure out whether memory mapping with annon=0 
should be deciding factor for killing the process.

> Ignore deleted file mapping from memory computation when smaps is enabled
> -
>
> Key: YARN-5551
> URL: https://issues.apache.org/jira/browse/YARN-5551
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: YARN-5551.branch-2.001.patch
>
>
> Currently deleted file mappings are also included in the memory computation 
> when SMAP is enabled. For e.g
> {noformat}
> 7f612004a000-7f612004c000 rw-s  00:10 4201507513 
> /dev/shm/HadoopShortCircuitShm_DFSClient_NONMAPREDUCE_-521969216_162_734673185
>  (deleted)
> Size:  8 kB
> Rss:   4 kB
> Pss:   2 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  4 kB
> Private_Clean: 0 kB
> Private_Dirty: 0 kB
> Referenced:4 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> 7fbf2800-7fbf6800 rw-s  08:02 11927571   
> /tmp/7298569189125604642/arena-1291157252088664681.cache (deleted)
> Size:1048576 kB
> Rss:   17288 kB
> Pss:   17288 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  0 kB
> Private_Clean:   232 kB
> Private_Dirty: 17056 kB
> Referenced:17288 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> {noformat}
> It would be good to exclude these from getSmapBasedRssMemorySize() 
> computation.  
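
A hedged sketch of the exclusion proposed above (accessor names are assumptions; 
the real loop lives in {{ProcfsBasedProcessTree.getSmapBasedRssMemorySize()}}):

{code}
// Illustrative only: leave deleted file mappings out of the RSS total.
for (ProcessSmapMemoryInfo info : memoryInfo.getMemoryInfoList()) {
  String path = info.getName();                    // accessor name assumed
  if (path != null && path.endsWith("(deleted)")) {
    continue;  // backing file deleted: skip this mapping
  }
  total += Math.min(info.sharedDirty, info.pss)
      + info.privateDirty + info.privateClean;
}
{code}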






[jira] [Commented] (YARN-5554) MoveApplicationAcrossQueues does not check user permission on the target queue

2016-08-25 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15438130#comment-15438130
 ] 

Haibo Chen commented on YARN-5554:
--

Sorry for messing up the history. I will keep it in mind.

> MoveApplicationAcrossQueues does not check user permission on the target queue
> --
>
> Key: YARN-5554
> URL: https://issues.apache.org/jira/browse/YARN-5554
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Haibo Chen
>Assignee: Wilfred Spiegelenburg
> Attachments: YARN-5554.2.patch, YARN-5554.3.patch
>
>
> moveApplicationAcrossQueues operation currently does not check user 
> permission on the target queue. This incorrectly allows one user to move 
> his/her own applications to a queue that the user has no access to
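
A hedged sketch of the missing validation ({{QueueACL}} and 
{{YarnScheduler#checkAccess}} are the real ACL hooks; the surrounding names are 
assumptions):

{code}
// Illustrative only: require submit access on the target queue before moving.
UserGroupInformation callerUGI = UserGroupInformation.getCurrentUser();
if (!scheduler.checkAccess(callerUGI, QueueACL.SUBMIT_APPLICATIONS, targetQueue)) {
  throw new YarnException("User " + callerUGI.getShortUserName()
      + " does not have permission to submit applications to target queue "
      + targetQueue);
}
{code}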






[jira] [Commented] (YARN-3671) Integrate Federation services with ResourceManager

2016-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15438129#comment-15438129
 ] 

Hadoop QA commented on YARN-3671:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
42s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 19s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
41s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 28s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
41s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
37s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 16s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 40s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 3 
new + 288 unchanged - 1 fixed = 291 total (was 289) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 31s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 38m 29s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 28s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12825543/YARN-3671-YARN-2915-v2.patch
 |
| JIRA Issue | YARN-3671 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 956ad14fb536 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / 256034d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12901/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12901/testReport/ |
| modules | C: 

[jira] [Commented] (YARN-5551) Ignore deleted file mapping from memory computation when smaps is enabled

2016-08-25 Thread Rajesh Balamohan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15438057#comment-15438057
 ] 

Rajesh Balamohan commented on YARN-5551:


This patch worked for the scenario we ran into.

If the memory mapping of a file has anon=0, should that cause the process to be 
killed?

A more generic patch would be to figure out whether a memory mapping with anon=0 
should be the deciding factor for killing the process.

> Ignore deleted file mapping from memory computation when smaps is enabled
> -
>
> Key: YARN-5551
> URL: https://issues.apache.org/jira/browse/YARN-5551
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: YARN-5551.branch-2.001.patch
>
>
> Currently deleted file mappings are also included in the memory computation 
> when SMAP is enabled. For e.g
> {noformat}
> 7f612004a000-7f612004c000 rw-s  00:10 4201507513 
> /dev/shm/HadoopShortCircuitShm_DFSClient_NONMAPREDUCE_-521969216_162_734673185
>  (deleted)
> Size:  8 kB
> Rss:   4 kB
> Pss:   2 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  4 kB
> Private_Clean: 0 kB
> Private_Dirty: 0 kB
> Referenced:4 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> 7fbf2800-7fbf6800 rw-s  08:02 11927571   
> /tmp/7298569189125604642/arena-1291157252088664681.cache (deleted)
> Size:1048576 kB
> Rss:   17288 kB
> Pss:   17288 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  0 kB
> Private_Clean:   232 kB
> Private_Dirty: 17056 kB
> Referenced:17288 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> {noformat}
> It would be good to exclude these from getSmapBasedRssMemorySize() 
> computation.  






[jira] [Commented] (YARN-4945) [Umbrella] Capacity Scheduler Preemption Within a queue

2016-08-25 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15438052#comment-15438052
 ] 

Wangda Tan commented on YARN-4945:
--

Thanks [~eepayne] for providing the detailed use-case requirements.

I haven't looked at the contents of the PDF yet, but the overall requirements 
make sense, and they look like the most important use cases for intra-queue 
preemption to me.

With the existing framework added by [~sunilg], we should be able to support 
different scheduling policies (like fair, FIFO, priority, etc.) by adding 
different preemptable-resource-calculators (which decide the ideal/preemptable 
resources for apps) and different preemptable-candidate-selectors (which decide 
the containers to preempt).
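
A hedged sketch of the two pluggable pieces described above (interface and 
method names are assumptions, not the committed framework):

{code}
// Illustrative shapes only.
interface PreemptableResourceCalculator {
  // Decide the ideal vs. preemptable resources for each app in a leaf queue.
  Map<ApplicationId, Resource> computePreemptable(LeafQueue queue);
}

interface PreemptionCandidateSelector {
  // Decide which running containers to preempt to reclaim those resources.
  List<RMContainer> selectCandidates(Map<ApplicationId, Resource> toReclaim);
}
{code}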

> [Umbrella] Capacity Scheduler Preemption Within a queue
> ---
>
> Key: YARN-4945
> URL: https://issues.apache.org/jira/browse/YARN-4945
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
> Attachments: Intra-Queue Preemption Use Cases.pdf, 
> IntraQueuepreemption-CapacityScheduler (Design).pdf, YARN-2009-wip.patch
>
>
> This is umbrella ticket to track efforts of preemption within a queue to 
> support features like:
> YARN-2009. YARN-2113. YARN-4781.






[jira] [Updated] (YARN-5387) FairScheduler: add the ability to specify a parent queue to all placement rules

2016-08-25 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated YARN-5387:
-
Labels: supportability  (was: )

> FairScheduler: add the ability to specify a parent queue to all placement 
> rules
> ---
>
> Key: YARN-5387
> URL: https://issues.apache.org/jira/browse/YARN-5387
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>  Labels: supportability
>
> In the current placement policy, all rules generate a queue name under 
> the root. The only exception is the nestedUserQueue rule, which allows a 
> queue to be created under a parent queue defined by a second rule.
> Instead of creating new rules for nested groups, secondary groups, or nested 
> queues for every new rule we think of, we should generalise this by allowing a 
> parent attribute to be specified in each rule, like the create flag (see the 
> hypothetical fragment below).
> The optional parent attribute for a rule should allow the following values:
> - empty (which is the same as not specifying the attribute)
> - a rule
> - a fixed value (with or without the root prefix)
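
A purely hypothetical fair-scheduler.xml fragment showing how such a parent 
attribute might read (the nested {{nestedUserQueue}} form exists today; the 
{{parent}} attribute syntax is invented for illustration, nothing like it is 
committed):

{code}
<queuePlacementPolicy>
  <!-- today's nested form: user queue under a rule-derived parent -->
  <rule name="nestedUserQueue" create="true">
    <rule name="primaryGroup" create="false"/>
  </rule>
  <!-- proposed generalisation: parent as a plain attribute on any rule -->
  <rule name="user" create="true" parent="root.staging"/>
  <rule name="default"/>
</queuePlacementPolicy>
{code}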






[jira] [Updated] (YARN-5563) Add log messages for jobs in ACCEPTED state but not runnable.

2016-08-25 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated YARN-5563:
-
Labels: supportability  (was: )

> Add log messages for jobs in ACCEPTED state but not runnable.
> -
>
> Key: YARN-5563
> URL: https://issues.apache.org/jira/browse/YARN-5563
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>  Labels: supportability
>
> Leaf queues maintain a list of runnable and non-runnable apps. FairScheduler 
> marks an app non-runnable for different reasons: exceeding (1) queue max 
> apps, (2) user max apps, (3) queue maxResources, (4) maxAMShare. It would be 
> nice to log the reason an app isn't runnable. The first three are easy to 
> infer, but the last one (maxAMShare) is particularly hard. It would be nice 
> to log at least that.
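
A hedged sketch of the maxAMShare case ({{FSLeafQueue#canRunAppAM}} is the 
existing check; the logging around it is illustrative):

{code}
// Illustrative only: surface the reason instead of silently holding the app.
if (!queue.canRunAppAM(app.getAMResource())) {
  LOG.info("Application " + app.getApplicationId() + " is not runnable in queue "
      + queue.getName() + ": launching its AM would exceed the queue's"
      + " maxAMShare limit");
}
{code}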






[jira] [Updated] (YARN-5563) Add log messages for jobs in ACCEPTED state but not runnable.

2016-08-25 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated YARN-5563:
-
Issue Type: Sub-task  (was: Improvement)
Parent: YARN-5397

> Add log messages for jobs in ACCEPTED state but not runnable.
> -
>
> Key: YARN-5563
> URL: https://issues.apache.org/jira/browse/YARN-5563
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>
> Leaf queues maintain a list of runnable and non-runnable apps. FairScheduler 
> marks an app non-runnable for different reasons: exceeding (1) queue max 
> apps, (2) user max apps, (3) queue maxResources, (4) maxAMShare. It would be 
> nice to log the reason an app isn't runnable. The first three are easy to 
> infer, but the last one (maxAMShare) is particularly hard. It would be nice 
> to log at least that.






[jira] [Updated] (YARN-5564) Fix typo in .RM_SCHEDULER_RESERVATION_THRESHOLD_INCREMENT_MULTIPLE

2016-08-25 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated YARN-5564:
-
Attachment: YARN-5564.001.patch

> Fix typo in .RM_SCHEDULER_RESERVATION_THRESHOLD_INCREMENT_MULTIPLE
> --
>
> Key: YARN-5564
> URL: https://issues.apache.org/jira/browse/YARN-5564
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Trivial
> Attachments: YARN-5564.001.patch
>
>
> The variable 
> RM_SCHEDULER_RESERVATION_THRESHOLD_INCERMENT_MULTIPLE
> has a typo in the "INCREMENT" part.






[jira] [Created] (YARN-5564) Fix typo in .RM_SCHEDULER_RESERVATION_THRESHOLD_INCREMENT_MULTIPLE

2016-08-25 Thread Ray Chiang (JIRA)
Ray Chiang created YARN-5564:


 Summary: Fix typo in 
.RM_SCHEDULER_RESERVATION_THRESHOLD_INCREMENT_MULTIPLE
 Key: YARN-5564
 URL: https://issues.apache.org/jira/browse/YARN-5564
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: fairscheduler
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Trivial


The variable 

RM_SCHEDULER_RESERVATION_THRESHOLD_INCERMENT_MULTIPLE

has a typo in the "INCREMENT" part.







[jira] [Commented] (YARN-5557) Add localize API to the ContainerManagementProtocol

2016-08-25 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437864#comment-15437864
 ] 

Junping Du commented on YARN-5557:
--

Thanks for the comments, Arun.
bq. Shouldnt we mark the new methods as @Evolving / @Unstable 
I think all newly added APIs are marked as Unstable. Which method are you talking 
about here?

bq. shouldn't the response contain some sort of ID so the requester can track 
it?
I think a resource string should be unique within the same NM here. Even 
re-localization doesn't change this. Jian, can you confirm this?

> Add localize API to the ContainerManagementProtocol
> ---
>
> Key: YARN-5557
> URL: https://issues.apache.org/jira/browse/YARN-5557
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5557.1.patch, YARN-5557.2.patch, YARN-5557.3.patch, 
> YARN-5557.4.patch
>
>
> A new localize API for localizing new resources. 






[jira] [Commented] (YARN-5551) Ignore deleted file mapping from memory computation when smaps is enabled

2016-08-25 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437858#comment-15437858
 ] 

Gopal V commented on YARN-5551:
---

bq. The more I think about this, the more I feel ignoring deleted files is the 
wrong thing to do

Yes, deleted files are a red herring (that happens to be how we secure the files 
away from other users).

I think the original problem of YARN killing a process needs to be fixed (the 
original SMAPS fix was for HDFS Zero Copy read via mmap).

{code}
total += Math.min(info.sharedDirty, info.pss)
    + info.privateDirty + info.privateClean;
{code}

If, as [~nroberts] suggests, YARN counted only the "anonymous" pages as the 
"will be free'd by a kill" memory, that would give me a better way.

bq. the write() case is going to eventually be throttled by the OS because it 
will only allow so many dirty buffer cache pages in the system. I don't believe 
that's the case for the mmap'd file.

Once you exceed the dirty_ratio, the only way you can avoid a page-fault is by 
modifying an existing dirty page over & over again.

If I understand page-writeback.c correctly, the blocking operation would be the 
page fault on a memory block which is missing in memory.

bq. that significant memory use needs to be associated with that process in the 
accounting.

Accounting isn't the problem, killing processes is the problem.
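
A hedged sketch of the anonymous-only accounting [~nroberts] suggests, reusing 
the field names from the snippet above (not an actual patch):

{code}
// Illustrative alternative: charge only pages a kill would actually free.
// File-backed dirty pages are written back by the kernel, not freed by a kill.
total += info.anonymous;  // plus anonymous huge pages, if tracked separately
{code}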

> Ignore deleted file mapping from memory computation when smaps is enabled
> -
>
> Key: YARN-5551
> URL: https://issues.apache.org/jira/browse/YARN-5551
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: YARN-5551.branch-2.001.patch
>
>
> Currently deleted file mappings are also included in the memory computation 
> when SMAP is enabled. For e.g
> {noformat}
> 7f612004a000-7f612004c000 rw-s  00:10 4201507513 
> /dev/shm/HadoopShortCircuitShm_DFSClient_NONMAPREDUCE_-521969216_162_734673185
>  (deleted)
> Size:  8 kB
> Rss:   4 kB
> Pss:   2 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  4 kB
> Private_Clean: 0 kB
> Private_Dirty: 0 kB
> Referenced:4 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> 7fbf2800-7fbf6800 rw-s  08:02 11927571   
> /tmp/7298569189125604642/arena-1291157252088664681.cache (deleted)
> Size:1048576 kB
> Rss:   17288 kB
> Pss:   17288 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  0 kB
> Private_Clean:   232 kB
> Private_Dirty: 17056 kB
> Referenced:17288 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> {noformat}
> It would be good to exclude these from getSmapBasedRssMemorySize() 
> computation.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4911) Bad placement policy in FairScheduler causes the RM to crash

2016-08-25 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437846#comment-15437846
 ] 

Daniel Templeton commented on YARN-4911:


Thanks for the patch [~rchiang].  Couple of comments.

First, can you please add a test to {{TestFairScheduler}} to test the behavior 
you just changed?

Second, if you'll forgive the nit-picking, let's talk about your error message. 
:)

bq. Unable to match app  to a queue placement policy.  Check with an 
administrator to make sure submitting to a valid queue and/or check that the 
queue placement policies have the create property set to true.

I think there's a word or two missing between "sure" and "submitting."  I'd 
also like to be a little more specific, like:

bq. Unable to match app  to a queue placement policy, and no valid 
terminal queue placement rule is configured.  Please contact an administrator 
to confirm that the fair scheduler configuration contains a valid terminal 
queue placement rule.

I'd also log that same thing, or maybe something with a bit more technical 
detail, as an ERROR or WARN.
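
As a hedged illustration (not the actual patch), assuming LOG and an 
applicationId variable in scope, the failure path could look roughly like:

{code}
// Sketch: surface the same diagnostic to the user and to the RM log.
String msg = "Unable to match app " + applicationId
    + " to a queue placement policy, and no valid terminal queue placement"
    + " rule is configured. Please contact an administrator to confirm that"
    + " the fair scheduler configuration contains a valid terminal queue"
    + " placement rule.";
LOG.warn(msg);  // or ERROR, per the above
throw new IllegalStateException(msg);
{code}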

> Bad placement policy in FairScheduler causes the RM to crash
> 
>
> Key: YARN-4911
> URL: https://issues.apache.org/jira/browse/YARN-4911
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>  Labels: supportability
> Attachments: YARN-4911.001.patch, YARN-4911.002.patch
>
>
> When you have a fair-scheduler.xml with the rule:
>   
> 
>   
> and the queue okay1 doesn't exist, the following exception occurs in the RM:
> 2016-04-01 16:56:33,383 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type APP_ADDED to the scheduler
> java.lang.IllegalStateException: Should have applied a rule before reaching 
> here
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.QueuePlacementPolicy.assignAppToQueue(QueuePlacementPolicy.java:173)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.assignToQueue(FairScheduler.java:728)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.addApplication(FairScheduler.java:634)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1224)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:112)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:691)
> at java.lang.Thread.run(Thread.java:745)
> which causes the RM to crash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3671) Integrate Federation services with ResourceManager

2016-08-25 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-3671:
-
Attachment: YARN-3671-YARN-2915-v2.patch

Thanks [~jianhe] for the feedback. Updated patch (v2) to remove the redundant 
null check and refactor _setStateStoreClient_ as you suggested.

As to your other questions:

bq. we already have RM_CLUSTER_ID, any chance that this can be used for 
FEDERATION_SUBCLUSTER_ID ?

That's a possibility. The reason I didn't combine the two is that RM_CLUSTER_ID 
is currently used for HA, but Federation can work both with and without HA (and 
RM HA can work both with and without Federation). So I felt it would be better 
to keep them separate. Thoughts?

bq. I feel the SubClusterState is a bit redundant in the request object, 
because the API itself already indicates the state such as register / 
deregister.

You are right. We don't want state to be null in the store, so either the store 
impl can implicitly add SC_NEW/SC_UNREGISTERED on register / deregister or the 
invoker (which is always the RM) can. I decided to do it in the RM for 2 reasons:
  1. It is trivial (one line) & needs to be done in a single place (RM) instead 
of in each store impl we add.
  2. This allows for flexibility in the future, as the RM could potentially 
register / deregister with different states (say SC_DRAINING).
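
To make option 1 concrete, a minimal sketch; the type and method names are 
assumptions based on this discussion, not the actual patch:

{code}
// Sketch: the RM, as the only invoker, stamps the state in one place before
// calling the store, so every store impl stays state-agnostic.
registerRequest.getSubClusterInfo().setState(SubClusterState.SC_NEW);
stateStore.registerSubCluster(registerRequest);

// ... and on deregister (a future SC_DRAINING could be passed the same way):
deregisterRequest.setState(SubClusterState.SC_UNREGISTERED);
stateStore.deregisterSubCluster(deregisterRequest);
{code}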

Makes sense?

> Integrate Federation services with ResourceManager
> --
>
> Key: YARN-3671
> URL: https://issues.apache.org/jira/browse/YARN-3671
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-3671-YARN-2915-v1.patch, 
> YARN-3671-YARN-2915-v2.patch
>
>
> This JIRA proposes adding the ability to turn on Federation services like 
> StateStore, cluster membership heartbeat etc in the RM



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5557) Add localize API to the ContainerManagementProtocol

2016-08-25 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437814#comment-15437814
 ] 

Arun Suresh commented on YARN-5557:
---

Thanks for the patch, [~jianhe].
Shouldn't we mark the new methods as @Evolving / @Unstable?
Also, shouldn't the response contain some sort of ID so the requester can track 
it?

> Add localize API to the ContainerManagementProtocol
> ---
>
> Key: YARN-5557
> URL: https://issues.apache.org/jira/browse/YARN-5557
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5557.1.patch, YARN-5557.2.patch, YARN-5557.3.patch, 
> YARN-5557.4.patch
>
>
> A new localize API for localizing new resources. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5549) AMLauncher.createAMContainerLaunchContext() should not log the command to be launched indiscriminately

2016-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437802#comment-15437802
 ] 

Hadoop QA commented on YARN-5549:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 15s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 37m 31s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 17s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12825525/YARN-5549.002.patch |
| JIRA Issue | YARN-5549 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux ad2bb92e1d68 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1360bd2 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 

[jira] [Updated] (YARN-5551) Ignore deleted file mapping from memory computation when smaps is enabled

2016-08-25 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-5551:
--
Target Version/s: 2.9.0  (was: 2.7.3)

2.7.3 is released and 2.8.0 is close to being done. Moving target-version to 
2.9.0.

> Ignore deleted file mapping from memory computation when smaps is enabled
> -
>
> Key: YARN-5551
> URL: https://issues.apache.org/jira/browse/YARN-5551
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: YARN-5551.branch-2.001.patch
>
>
> Currently deleted file mappings are also included in the memory computation 
> when SMAP is enabled. For e.g
> {noformat}
> 7f612004a000-7f612004c000 rw-s  00:10 4201507513 
> /dev/shm/HadoopShortCircuitShm_DFSClient_NONMAPREDUCE_-521969216_162_734673185
>  (deleted)
> Size:  8 kB
> Rss:   4 kB
> Pss:   2 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  4 kB
> Private_Clean: 0 kB
> Private_Dirty: 0 kB
> Referenced:4 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> 7fbf2800-7fbf6800 rw-s  08:02 11927571   
> /tmp/7298569189125604642/arena-1291157252088664681.cache (deleted)
> Size:1048576 kB
> Rss:   17288 kB
> Pss:   17288 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  0 kB
> Private_Clean:   232 kB
> Private_Dirty: 17056 kB
> Referenced:17288 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> {noformat}
> It would be good to exclude these from getSmapBasedRssMemorySize() 
> computation.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5549) AMLauncher.createAMContainerLaunchContext() should not log the command to be launched indiscriminately

2016-08-25 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-5549:
---
Affects Version/s: 2.7.2
 Target Version/s: 2.8.0

> AMLauncher.createAMContainerLaunchContext() should not log the command to be 
> launched indiscriminately
> --
>
> Key: YARN-5549
> URL: https://issues.apache.org/jira/browse/YARN-5549
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-5549.001.patch, YARN-5549.002.patch
>
>
> The command could contain sensitive information, such as keystore passwords 
> or AWS credentials or other.  Instead of logging it as INFO, we should log it 
> as DEBUG and include a property to disable logging it at all.  Logging it to 
> a different logger would also be viable and may create a smaller 
> administrative footprint.
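
A hedged sketch of that direction; the property name below is invented for 
illustration (it is not the actual patch), and LOG, conf, containerId, and 
command are assumed to be in scope:

{code}
// Sketch: demote the command to DEBUG and gate it behind an opt-out property.
if (LOG.isDebugEnabled()
    && conf.getBoolean("yarn.resourcemanager.am.log-launch-command", true)) {
  LOG.debug("Command to launch container " + containerId + ": " + command);
}
{code}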



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5550) TestYarnCLI#testGetContainers should format according to CONTAINER_PATTERN

2016-08-25 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated YARN-5550:

Assignee: Jonathan Hung

> TestYarnCLI#testGetContainers should format according to CONTAINER_PATTERN
> --
>
> Key: YARN-5550
> URL: https://issues.apache.org/jira/browse/YARN-5550
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.4
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Minor
> Attachments: YARN-5550.001.patch, YARN-5550.002.patch
>
>
> TestYarnCLI#testGetContainers hard codes expected output of getting list of 
> containers via Yarn CLI. If the timestamp is shorter than the number of 
> expected characters in ApplicationCLI#CONTAINER_PATTERN (which is 20), the 
> assert will fail due to whitespace.
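
A hedged illustration of the mismatch; the %20s width mirrors the 20-character 
field mentioned above, and the timestamp values are made up:

{code}
// Two timestamps of different lengths get different left padding, so a
// hard-coded expected string matches one and fails the other on whitespace.
String col1 = String.format("%20s", 123456789L);      //  9 digits, 11 spaces
String col2 = String.format("%20s", 1472156263123L);  // 13 digits,  7 spaces
{code}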



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4945) [Umbrella] Capacity Scheduler Preemption Within a queue

2016-08-25 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-4945:
-
Attachment: Intra-Queue Preemption Use Cases.pdf

[~sunilg] and [~leftnoteasy],

I am attaching the set of use cases that I could think of for in-queue 
preemption. I will include the base use-case statements in this comment, but 
please do look at the document for details, examples, and open issues for each 
use case.


1.  Ensure each user in a queue is guaranteed its appropriate 
minimum-user-limit-percent
1.1. When one (or more) user(s) are below their minimum-user-limit-percent 
and one (or more) user(s) are above their minimum-user-limit-percent, resources 
will be preempted after a configurable time period from the user(s) which are 
above their minimum-user-limit-percent.
1.2. When two (or more) users are below their minimum-user-limit-percent, 
neither will be preempted in favor of the other.
1.3. If all users in a queue are at or over their 
minimum-user-limit-percent, the user-limit-percent-preemption policy will not 
preempt resources.

2.  Ensure priority inversion doesn’t occur between applications.
2.1. When a lower priority app is consuming long-running resources, a higher 
priority app is requesting resources, and the queue cannot grow to accommodate 
the higher priority app’s request, the priority-intra-queue-preemption policy 
will preempt resources from the lower priority app, after a configurable period 
of time.

3.  Interaction between the priority and minimum-user-limit-percent 
preemption policies.
3.1. If priority inversion occurs between apps owned by different users, the 
priority preemption policy will not preempt containers from the lower priority 
app if it would cause the lower priority app to go below the user’s 
minimum-user-limit-percent guarantee.


> [Umbrella] Capacity Scheduler Preemption Within a queue
> ---
>
> Key: YARN-4945
> URL: https://issues.apache.org/jira/browse/YARN-4945
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
> Attachments: Intra-Queue Preemption Use Cases.pdf, 
> IntraQueuepreemption-CapacityScheduler (Design).pdf, YARN-2009-wip.patch
>
>
> This is umbrella ticket to track efforts of preemption within a queue to 
> support features like:
> YARN-2009. YARN-2113. YARN-4781.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5551) Ignore deleted file mapping from memory computation when smaps is enabled

2016-08-25 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437655#comment-15437655
 ] 

Jason Lowe commented on YARN-5551:
--

The more I think about this, the more I feel ignoring deleted files is the 
wrong thing to do.  I think we all can agree that mappings to deleted files can 
still consume memory, and if we skip those mappings then we fail to account for 
that memory.  For purposes of deciding how much memory will be freed when YARN 
kills a process, skipping those sections will make YARN think it can free up 
_less_ memory than it really would.

If we go back to the write() vs. mmap'd file which seems to be the origin 
behind this idea, the write() case is going to eventually be throttled by the 
OS because it will only allow so many dirty buffer cache pages in the system.  
I don't believe that's the case for the mmap'd file.  If we create a process 
that mmap's a large file, deletes it, then spin-loops dirtying the pages, that 
significant memory use needs to be associated with that process in the 
accounting.
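
For concreteness, a minimal sketch of such a process (illustrative only; the 
1 GiB mapping size and file name are arbitrary):

{code}
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Maps a large file, deletes it, then spin-loops dirtying pages. The pages
// stay charged to the process under a "(deleted)" mapping in
// /proc/<pid>/smaps, which is why skipping deleted mappings under-reports.
public class DeletedMmapDemo {
  public static void main(String[] args) throws Exception {
    File f = File.createTempFile("arena-", ".cache");
    try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
      MappedByteBuffer buf =
          raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 1L << 30);
      f.delete();                        // unlink; the mapping survives
      for (int i = 0; ; i = (i + 4096) % buf.capacity()) {
        buf.put(i, (byte) 1);            // keep dirtying pages
      }
    }
  }
}
{code}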

> Ignore deleted file mapping from memory computation when smaps is enabled
> -
>
> Key: YARN-5551
> URL: https://issues.apache.org/jira/browse/YARN-5551
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: YARN-5551.branch-2.001.patch
>
>
> Currently deleted file mappings are also included in the memory computation 
> when SMAP is enabled. For e.g
> {noformat}
> 7f612004a000-7f612004c000 rw-s  00:10 4201507513 
> /dev/shm/HadoopShortCircuitShm_DFSClient_NONMAPREDUCE_-521969216_162_734673185
>  (deleted)
> Size:  8 kB
> Rss:   4 kB
> Pss:   2 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  4 kB
> Private_Clean: 0 kB
> Private_Dirty: 0 kB
> Referenced:4 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> 7fbf2800-7fbf6800 rw-s  08:02 11927571   
> /tmp/7298569189125604642/arena-1291157252088664681.cache (deleted)
> Size:1048576 kB
> Rss:   17288 kB
> Pss:   17288 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  0 kB
> Private_Clean:   232 kB
> Private_Dirty: 17056 kB
> Referenced:17288 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> {noformat}
> It would be good to exclude these from getSmapBasedRssMemorySize() 
> computation.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5549) AMLauncher.createAMContainerLaunchContext() should not log the command to be launched indiscriminately

2016-08-25 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-5549:
---
Attachment: YARN-5549.002.patch

Patch to address Jenkins issues.

> AMLauncher.createAMContainerLaunchContext() should not log the command to be 
> launched indiscriminately
> --
>
> Key: YARN-5549
> URL: https://issues.apache.org/jira/browse/YARN-5549
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-5549.001.patch, YARN-5549.002.patch
>
>
> The command could contain sensitive information, such as keystore passwords 
> or AWS credentials or other.  Instead of logging it as INFO, we should log it 
> as DEBUG and include a property to disable logging it at all.  Logging it to 
> a different logger would also be viable and may create a smaller 
> administrative footprint.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3053) [Security] Review and implement security in ATS v.2

2016-08-25 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437631#comment-15437631
 ] 

Varun Saxena commented on YARN-3053:


An initial draft document outlining the different possible approaches for 
achieving authentication in ATSv2 has been attached.
Kindly review.

> [Security] Review and implement security in ATS v.2
> ---
>
> Key: YARN-3053
> URL: https://issues.apache.org/jira/browse/YARN-3053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>  Labels: YARN-5355
> Attachments: ATSv2Authentication(draft).pdf
>
>
> Per design in YARN-2928, we want to evaluate and review the system for 
> security, and ensure proper security in the system.
> This includes proper authentication, token management, access control, and 
> any other relevant security aspects.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-1503) Support making additional 'LocalResources' available to running containers

2016-08-25 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437626#comment-15437626
 ] 

Arun Suresh commented on YARN-1503:
---

bq. The re-localization process should not be tied to the container state 
machine, regardless of whether the localization fails or succeeds. Container 
continues to run.
Hmmm... then shouldn't we differentiate between relocalization and the 
localization that is required to start the container? Or are you proposing that 
the AM calls the new localize API first and then startContainer only after it 
receives a successful response? That way, we can maybe remove the 
localization-related states in the NM Container state machine completely... but 
that also means existing AMs would need to be modified (or maybe we can just 
handle it in the NMClient).

> Support making additional 'LocalResources' available to running containers
> --
>
> Key: YARN-1503
> URL: https://issues.apache.org/jira/browse/YARN-1503
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Jian He
> Attachments: Continuous-resource-localization.pdf
>
>
> We have a use case, where additional resources (jars, libraries etc) need to 
> be made available to an already running container. Ideally, we'd like this to 
> be done via YARN (instead of having potentially multiple containers per node 
> download resources on their own).
> Proposal:
>   NM to support an additional API where a list of resources can be specified. 
> Something like "localizeResource(ContainerId, Map)".
>   NM would also require an additional API to get state for these resources - 
> "getLocalizationState(ContainerId)" - which returns the current state of all 
> local resources for the specified container(s).
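
(A hedged sketch of that proposed surface; the signatures are illustrative, 
LocalResource is the existing YARN type, and LocalizationState is hypothetical:)

{code}
// Sketch of the proposed NM-side additions described above.
void localizeResource(ContainerId containerId,
    Map<String, LocalResource> resources) throws YarnException, IOException;

// Returns the current state of all local resources for the container.
LocalizationState getLocalizationState(ContainerId containerId)
    throws YarnException, IOException;
{code}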



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5560) Clean up bad exception catching practices in TestYarnClient

2016-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437613#comment-15437613
 ] 

Hadoop QA commented on YARN-5560:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: 
The patch generated 0 new + 64 unchanged - 1 fixed = 64 total (was 65) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 1s 
{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 5s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12825519/YARN-5560.v2.patch |
| JIRA Issue | YARN-5560 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b675833cf99f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1360bd2 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12899/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12899/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Clean up bad exception catching practices in TestYarnClient
> ---
>
> Key: YARN-5560
> URL: https://issues.apache.org/jira/browse/YARN-5560
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Sean Po
>Assignee: Sean Po
> Attachments: YARN-5560.v1.patch, YARN-5560.v2.patch
>
>
> In TestYarnClient, tests commonly wrap methods that throw exceptions in a try 
> 

[jira] [Updated] (YARN-3053) [Security] Review and implement security in ATS v.2

2016-08-25 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-3053:
---
Attachment: ATSv2Authentication(draft).pdf

> [Security] Review and implement security in ATS v.2
> ---
>
> Key: YARN-3053
> URL: https://issues.apache.org/jira/browse/YARN-3053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>  Labels: YARN-5355
> Attachments: ATSv2Authentication(draft).pdf
>
>
> Per design in YARN-2928, we want to evaluate and review the system for 
> security, and ensure proper security in the system.
> This includes proper authentication, token management, access control, and 
> any other relevant security aspects.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5551) Ignore deleted file mapping from memory computation when smaps is enabled

2016-08-25 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437602#comment-15437602
 ] 

Jason Lowe commented on YARN-5551:
--

Sorry I'm confused, so apologies if this is obvious to everyone else.  Was the 
original data posted to the JIRA not possible in practice?  If it is possible 
then it seems critical to not skip deleted files or risk severely 
under-reporting the memory usage of a process in some cases.  If it's only not 
possible because app-specific cache code was changed then that should not 
influence how YARN does accounting since ideally YARN should not be making 
app-specific assumptions.


> Ignore deleted file mapping from memory computation when smaps is enabled
> -
>
> Key: YARN-5551
> URL: https://issues.apache.org/jira/browse/YARN-5551
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: YARN-5551.branch-2.001.patch
>
>
> Currently deleted file mappings are also included in the memory computation 
> when SMAP is enabled. For e.g
> {noformat}
> 7f612004a000-7f612004c000 rw-s  00:10 4201507513 
> /dev/shm/HadoopShortCircuitShm_DFSClient_NONMAPREDUCE_-521969216_162_734673185
>  (deleted)
> Size:  8 kB
> Rss:   4 kB
> Pss:   2 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  4 kB
> Private_Clean: 0 kB
> Private_Dirty: 0 kB
> Referenced:4 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> 7fbf2800-7fbf6800 rw-s  08:02 11927571   
> /tmp/7298569189125604642/arena-1291157252088664681.cache (deleted)
> Size:1048576 kB
> Rss:   17288 kB
> Pss:   17288 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  0 kB
> Private_Clean:   232 kB
> Private_Dirty: 17056 kB
> Referenced:17288 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> {noformat}
> It would be good to exclude these from getSmapBasedRssMemorySize() 
> computation.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5560) Clean up bad exception catching practices in TestYarnClient

2016-08-25 Thread Sean Po (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Po updated YARN-5560:
--
Attachment: YARN-5560.v2.patch

V2 of the patch fixes the open checkstyle issue.

> Clean up bad exception catching practices in TestYarnClient
> ---
>
> Key: YARN-5560
> URL: https://issues.apache.org/jira/browse/YARN-5560
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Sean Po
>Assignee: Sean Po
> Attachments: YARN-5560.v1.patch, YARN-5560.v2.patch
>
>
> In TestYarnClient, tests commonly wrap methods that throw exceptions in a try 
> catch statement similar to the following:
> {code}
> try {
>   client.submitApplication(context);
> } catch (Exception e) {
>   Assert.fail("Exception is not expected.");
> }
> {code}
> This hides useful error messages, and surfaces less helpful ones.
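
A hedged sketch of the cleanup direction (illustrative, not the actual patch), 
assuming the client/context fixtures of the surrounding test class:

{code}
// Sketch: no try/catch -- an unexpected exception now fails the test with
// its full stack trace instead of a generic "Exception is not expected."
@Test
public void testSubmitApplication() throws Exception {
  client.submitApplication(context);
}
{code}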



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5551) Ignore deleted file mapping from memory computation when smaps is enabled

2016-08-25 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437518#comment-15437518
 ] 

Gopal V edited comment on YARN-5551 at 8/25/16 7:44 PM:


[~jlowe], [~nroberts], [~rajesh.balamohan]: I have edited the JIRA to actually 
show the private_dirty/private_clean (i.e. the referenced/resident set size is 
non-zero) with 0 anonymous pages.


was (Author: gopalv):
[~jlowe],[~nroberts],[~rajesh.balamohan]: I have edited up the JIRA to actually 
show the private_dirty/private_clean (i.e resident set size is non-zero) with a 
0 anonymous pages.

> Ignore deleted file mapping from memory computation when smaps is enabled
> -
>
> Key: YARN-5551
> URL: https://issues.apache.org/jira/browse/YARN-5551
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: YARN-5551.branch-2.001.patch
>
>
> Currently deleted file mappings are also included in the memory computation 
> when SMAP is enabled. For e.g
> {noformat}
> 7f612004a000-7f612004c000 rw-s  00:10 4201507513 
> /dev/shm/HadoopShortCircuitShm_DFSClient_NONMAPREDUCE_-521969216_162_734673185
>  (deleted)
> Size:  8 kB
> Rss:   4 kB
> Pss:   2 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  4 kB
> Private_Clean: 0 kB
> Private_Dirty: 0 kB
> Referenced:4 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> 7fbf2800-7fbf6800 rw-s  08:02 11927571   
> /tmp/7298569189125604642/arena-1291157252088664681.cache (deleted)
> Size:1048576 kB
> Rss:   17288 kB
> Pss:   17288 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  0 kB
> Private_Clean:   232 kB
> Private_Dirty: 17056 kB
> Referenced:17288 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> {noformat}
> It would be good to exclude these from getSmapBasedRssMemorySize() 
> computation.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5551) Ignore deleted file mapping from memory computation when smaps is enabled

2016-08-25 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437518#comment-15437518
 ] 

Gopal V commented on YARN-5551:
---

[~jlowe], [~nroberts], [~rajesh.balamohan]: I have edited the JIRA to actually 
show the private_dirty/private_clean (i.e. the resident set size is non-zero) 
with 0 anonymous pages.

> Ignore deleted file mapping from memory computation when smaps is enabled
> -
>
> Key: YARN-5551
> URL: https://issues.apache.org/jira/browse/YARN-5551
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: YARN-5551.branch-2.001.patch
>
>
> Currently deleted file mappings are also included in the memory computation 
> when SMAP is enabled. For e.g
> {noformat}
> 7f612004a000-7f612004c000 rw-s  00:10 4201507513 
> /dev/shm/HadoopShortCircuitShm_DFSClient_NONMAPREDUCE_-521969216_162_734673185
>  (deleted)
> Size:  8 kB
> Rss:   4 kB
> Pss:   2 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  4 kB
> Private_Clean: 0 kB
> Private_Dirty: 0 kB
> Referenced:4 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> 7fbf2800-7fbf6800 rw-s  08:02 11927571   
> /tmp/7298569189125604642/arena-1291157252088664681.cache (deleted)
> Size:1048576 kB
> Rss:   17288 kB
> Pss:   17288 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  0 kB
> Private_Clean:   232 kB
> Private_Dirty: 17056 kB
> Referenced:17288 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> {noformat}
> It would be good to exclude these from getSmapBasedRssMemorySize() 
> computation.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5551) Ignore deleted file mapping from memory computation when smaps is enabled

2016-08-25 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated YARN-5551:
--
Description: 
Currently deleted file mappings are also included in the memory computation 
when SMAP is enabled. For e.g

{noformat}
7f612004a000-7f612004c000 rw-s  00:10 4201507513 
/dev/shm/HadoopShortCircuitShm_DFSClient_NONMAPREDUCE_-521969216_162_734673185 
(deleted)
Size:  8 kB
Rss:   4 kB
Pss:   2 kB
Shared_Clean:  0 kB
Shared_Dirty:  4 kB
Private_Clean: 0 kB
Private_Dirty: 0 kB
Referenced:4 kB
Anonymous: 0 kB
AnonHugePages: 0 kB
Swap:  0 kB
KernelPageSize:4 kB
MMUPageSize:   4 kB


7fbf2800-7fbf6800 rw-s  08:02 11927571   
/tmp/7298569189125604642/arena-1291157252088664681.cache (deleted)
Size:1048576 kB
Rss:   17288 kB
Pss:   17288 kB
Shared_Clean:  0 kB
Shared_Dirty:  0 kB
Private_Clean:   232 kB
Private_Dirty: 17056 kB
Referenced:17288 kB
Anonymous: 0 kB
AnonHugePages: 0 kB
Swap:  0 kB
KernelPageSize:4 kB
MMUPageSize:   4 kB
{noformat}

It would be good to exclude these from getSmapBasedRssMemorySize() computation. 
 

  was:
Currently deleted file mappings are also included in the memory computation 
when SMAP is enabled. For e.g

{noformat}
7f612004a000-7f612004c000 rw-s  00:10 4201507513 
/dev/shm/HadoopShortCircuitShm_DFSClient_NONMAPREDUCE_-521969216_162_734673185 
(deleted)
Size:  8 kB
Rss:   4 kB
Pss:   2 kB
Shared_Clean:  0 kB
Shared_Dirty:  4 kB
Private_Clean: 0 kB
Private_Dirty: 0 kB
Referenced:4 kB
Anonymous: 0 kB
AnonHugePages: 0 kB
Swap:  0 kB
KernelPageSize:4 kB
MMUPageSize:   4 kB


7f6123f99000-7f6163f99000 rw-p  08:41 211419477  
/grid/4/hadoop/yarn/local/usercache/root/appcache/application_1466700718395_1249/container_e19_1466700718395_1249_01_03/7389389356021597290.cache
 (deleted)
Size:1048576 kB
Rss:  637292 kB
Pss:  637292 kB
Shared_Clean:  0 kB
Shared_Dirty:  0 kB
Private_Clean: 0 kB
Private_Dirty:637292 kB
Referenced:   637292 kB
Anonymous:637292 kB
AnonHugePages: 0 kB
Swap:  0 kB
KernelPageSize:4 kB
{noformat}

It would be good to exclude these from getSmapBasedRssMemorySize() computation. 
 


> Ignore deleted file mapping from memory computation when smaps is enabled
> -
>
> Key: YARN-5551
> URL: https://issues.apache.org/jira/browse/YARN-5551
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: YARN-5551.branch-2.001.patch
>
>
> Currently deleted file mappings are also included in the memory computation 
> when SMAP is enabled. For e.g
> {noformat}
> 7f612004a000-7f612004c000 rw-s  00:10 4201507513 
> /dev/shm/HadoopShortCircuitShm_DFSClient_NONMAPREDUCE_-521969216_162_734673185
>  (deleted)
> Size:  8 kB
> Rss:   4 kB
> Pss:   2 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  4 kB
> Private_Clean: 0 kB
> Private_Dirty: 0 kB
> Referenced:4 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> 7fbf2800-7fbf6800 rw-s  08:02 11927571   
> /tmp/7298569189125604642/arena-1291157252088664681.cache (deleted)
> Size:1048576 kB
> Rss:   17288 kB
> Pss:   17288 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  0 kB
> Private_Clean:   232 kB
> Private_Dirty: 17056 kB
> Referenced:17288 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> {noformat}
> It would be good to exclude these from getSmapBasedRssMemorySize() 
> computation.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5549) AMLauncher.createAMContainerLaunchContext() should not log the command to be launched indiscriminately

2016-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437513#comment-15437513
 ] 

Hadoop QA commented on YARN-5549:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
5s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 40s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 42s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 8s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 15s 
{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 17s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 37m 36s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 2s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12825509/YARN-5549.001.patch |
| JIRA Issue | YARN-5549 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 14828933a68e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1360bd2 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| whitespace | 

[jira] [Commented] (YARN-5560) Clean up bad exception catching practices in TestYarnClient

2016-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437510#comment-15437510
 ] 

Hadoop QA commented on YARN-5560:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
3s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: The 
patch generated 1 new + 64 unchanged - 1 fixed = 65 total (was 65) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 44s 
{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 3s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12825513/YARN-5560.v1.patch |
| JIRA Issue | YARN-5560 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c571edb34abc 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1360bd2 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12898/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12898/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12898/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Clean up bad exception catching practices in TestYarnClient
> ---
>
> Key: YARN-5560
> URL: https://issues.apache.org/jira/browse/YARN-5560
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Sean Po
>Assignee: Sean Po

[jira] [Updated] (YARN-3053) [Security] Review and implement security in ATS v.2

2016-08-25 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-3053:
---
Assignee: Varun Saxena  (was: Junping Du)

> [Security] Review and implement security in ATS v.2
> ---
>
> Key: YARN-3053
> URL: https://issues.apache.org/jira/browse/YARN-3053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>  Labels: YARN-5355
>
> Per design in YARN-2928, we want to evaluate and review the system for 
> security, and ensure proper security in the system.
> This includes proper authentication, token management, access control, and 
> any other relevant security aspects.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5551) Ignore deleted file mapping from memory computation when smaps is enabled

2016-08-25 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437506#comment-15437506
 ] 

Gopal V commented on YARN-5551:
---

bq. purposes of accounting for how much memory the process is using right now.

The crucial distinction is exactly there. YARN can account memory in two 
different ways - "how much memory is this process using?" vs "how much memory 
can I retrieve by killing this process?".

The 2nd question is what should motivate a process kill (btw, in the non-smaps 
case, the kill is motivated by the first, with no concern for the 2nd).

> Ignore deleted file mapping from memory computation when smaps is enabled
> -
>
> Key: YARN-5551
> URL: https://issues.apache.org/jira/browse/YARN-5551
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: YARN-5551.branch-2.001.patch
>
>
> Currently deleted file mappings are also included in the memory computation 
> when SMAP is enabled. For example:
> {noformat}
> 7f612004a000-7f612004c000 rw-s  00:10 4201507513 
> /dev/shm/HadoopShortCircuitShm_DFSClient_NONMAPREDUCE_-521969216_162_734673185
>  (deleted)
> Size:  8 kB
> Rss:   4 kB
> Pss:   2 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  4 kB
> Private_Clean: 0 kB
> Private_Dirty: 0 kB
> Referenced:4 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> 7f6123f99000-7f6163f99000 rw-p  08:41 211419477  
> /grid/4/hadoop/yarn/local/usercache/root/appcache/application_1466700718395_1249/container_e19_1466700718395_1249_01_03/7389389356021597290.cache
>  (deleted)
> Size:1048576 kB
> Rss:  637292 kB
> Pss:  637292 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  0 kB
> Private_Clean: 0 kB
> Private_Dirty:637292 kB
> Referenced:   637292 kB
> Anonymous:637292 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> {noformat}
> It would be good to exclude these from getSmapBasedRssMemorySize() 
> computation.  
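
As a rough illustration of the exclusion being proposed (a minimal sketch under stated assumptions, not the actual ProcfsBasedProcessTree code behind getSmapBasedRssMemorySize()): sum Private_Dirty across /proc/<pid>/smaps regions while skipping any mapping whose header line ends in "(deleted)".

{code}
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class SmapsDeletedFilter {
  /**
   * Sums Private_Dirty (in kB) over /proc/<pid>/smaps, skipping regions
   * whose header line is marked "(deleted)". Sketch only; the real
   * accounting combines several smaps fields.
   */
  public static long privateDirtyKb(String pid) throws IOException {
    long totalKb = 0;
    boolean inDeletedMapping = false;
    try (BufferedReader reader =
        new BufferedReader(new FileReader("/proc/" + pid + "/smaps"))) {
      String line;
      while ((line = reader.readLine()) != null) {
        String firstToken = line.split("\\s+", 2)[0];
        if (firstToken.matches("[0-9a-f]+-[0-9a-f]+")) {
          // Header line of a new mapping, e.g.
          // "7f6123f99000-7f6163f99000 rw-p ... /grid/4/.../xyz.cache (deleted)"
          inDeletedMapping = line.endsWith("(deleted)");
        } else if (!inDeletedMapping && line.startsWith("Private_Dirty:")) {
          // Field line, e.g. "Private_Dirty:    637292 kB"
          totalKb += Long.parseLong(line.replaceAll("\\D", ""));
        }
      }
    }
    return totalKb;
  }
}
{code}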



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5551) Ignore deleted file mapping from memory computation when smaps is enabled

2016-08-25 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437506#comment-15437506
 ] 

Gopal V edited comment on YARN-5551 at 8/25/16 7:35 PM:


bq. purposes of accounting for how much memory the process is using right now.

The crucial distinction is exactly there. YARN can account memory in two 
different ways - "how much memory is this process using?" vs "how much memory 
can I retrieve by killing this process?" [to run other containers in that 
capacity].

The 2nd question is what should motivate a process kill (btw, in the non-smaps 
case, the kill is motivated by the first, with no concern for the 2nd).


was (Author: gopalv):
bq. purposes of accounting for how much memory the process is using right now.

The crucial distinction is exactly there. YARN can account memory in two 
different ways - "how much memory is this process using?" vs "how much memory 
can I retrieve by killing this process?".

The 2nd question is what should motivate a process kill (btw, in the non-smaps 
case, the kill is motivated by the first, with no concern for the 2nd).

> Ignore deleted file mapping from memory computation when smaps is enabled
> -
>
> Key: YARN-5551
> URL: https://issues.apache.org/jira/browse/YARN-5551
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: YARN-5551.branch-2.001.patch
>
>
> Currently deleted file mappings are also included in the memory computation 
> when SMAP is enabled. For example:
> {noformat}
> 7f612004a000-7f612004c000 rw-s  00:10 4201507513 
> /dev/shm/HadoopShortCircuitShm_DFSClient_NONMAPREDUCE_-521969216_162_734673185
>  (deleted)
> Size:  8 kB
> Rss:   4 kB
> Pss:   2 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  4 kB
> Private_Clean: 0 kB
> Private_Dirty: 0 kB
> Referenced:4 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> 7f6123f99000-7f6163f99000 rw-p  08:41 211419477  
> /grid/4/hadoop/yarn/local/usercache/root/appcache/application_1466700718395_1249/container_e19_1466700718395_1249_01_03/7389389356021597290.cache
>  (deleted)
> Size:1048576 kB
> Rss:  637292 kB
> Pss:  637292 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  0 kB
> Private_Clean: 0 kB
> Private_Dirty:637292 kB
> Referenced:   637292 kB
> Anonymous:637292 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> {noformat}
> It would be good to exclude these from getSmapBasedRssMemorySize() 
> computation.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3053) [Security] Review and implement security in ATS v.2

2016-08-25 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-3053:
---
Summary: [Security] Review and implement security in ATS v.2  (was: 
[Security] Review and implement for property security in ATS v.2)

> [Security] Review and implement security in ATS v.2
> ---
>
> Key: YARN-3053
> URL: https://issues.apache.org/jira/browse/YARN-3053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Junping Du
>  Labels: YARN-5355
>
> Per design in YARN-2928, we want to evaluate and review the system for 
> security, and ensure proper security in the system.
> This includes proper authentication, token management, access control, and 
> any other relevant security aspects.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5551) Ignore deleted file mapping from memory computation when smaps is enabled

2016-08-25 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437497#comment-15437497
 ] 

Gopal V edited comment on YARN-5551 at 8/25/16 7:32 PM:


bq.  the second has the entire dirty region marked as anonymous

[~nroberts]: good catch - the cache pages were supposed to be private_dirty 
only - not anon_dirty. 

Those allocations were supposed to look the same way, let me fix my cache code 
and re-run that on YARN.


was (Author: gopalv):
bq.  the second has the entire dirty region marked as anonymous

[~nroberts]: good catch - the anonymous pages were supposed to be private_dirty 
only - not anon_dirty. 

Those allocations were supposed to look the same way, let me fix my cache code 
and re-run that on YARN.

> Ignore deleted file mapping from memory computation when smaps is enabled
> -
>
> Key: YARN-5551
> URL: https://issues.apache.org/jira/browse/YARN-5551
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: YARN-5551.branch-2.001.patch
>
>
> Currently deleted file mappings are also included in the memory computation 
> when SMAP is enabled. For example:
> {noformat}
> 7f612004a000-7f612004c000 rw-s  00:10 4201507513 
> /dev/shm/HadoopShortCircuitShm_DFSClient_NONMAPREDUCE_-521969216_162_734673185
>  (deleted)
> Size:  8 kB
> Rss:   4 kB
> Pss:   2 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  4 kB
> Private_Clean: 0 kB
> Private_Dirty: 0 kB
> Referenced:4 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> 7f6123f99000-7f6163f99000 rw-p  08:41 211419477  
> /grid/4/hadoop/yarn/local/usercache/root/appcache/application_1466700718395_1249/container_e19_1466700718395_1249_01_03/7389389356021597290.cache
>  (deleted)
> Size:1048576 kB
> Rss:  637292 kB
> Pss:  637292 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  0 kB
> Private_Clean: 0 kB
> Private_Dirty:637292 kB
> Referenced:   637292 kB
> Anonymous:637292 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> {noformat}
> It would be good to exclude these from getSmapBasedRssMemorySize() 
> computation.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5551) Ignore deleted file mapping from memory computation when smaps is enabled

2016-08-25 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437497#comment-15437497
 ] 

Gopal V commented on YARN-5551:
---

bq.  the second has the entire dirty region marked as anonymous

[~nroberts]: good catch - the anonymous pages were supposed to be private_dirty 
only - not anon_dirty. 

Those allocations were supposed to look the same way, let me fix my cache code 
and re-run that on YARN.

> Ignore deleted file mapping from memory computation when smaps is enabled
> -
>
> Key: YARN-5551
> URL: https://issues.apache.org/jira/browse/YARN-5551
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: YARN-5551.branch-2.001.patch
>
>
> Currently deleted file mappings are also included in the memory computation 
> when SMAP is enabled. For example:
> {noformat}
> 7f612004a000-7f612004c000 rw-s  00:10 4201507513 
> /dev/shm/HadoopShortCircuitShm_DFSClient_NONMAPREDUCE_-521969216_162_734673185
>  (deleted)
> Size:  8 kB
> Rss:   4 kB
> Pss:   2 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  4 kB
> Private_Clean: 0 kB
> Private_Dirty: 0 kB
> Referenced:4 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> 7f6123f99000-7f6163f99000 rw-p  08:41 211419477  
> /grid/4/hadoop/yarn/local/usercache/root/appcache/application_1466700718395_1249/container_e19_1466700718395_1249_01_03/7389389356021597290.cache
>  (deleted)
> Size:1048576 kB
> Rss:  637292 kB
> Pss:  637292 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  0 kB
> Private_Clean: 0 kB
> Private_Dirty:637292 kB
> Referenced:   637292 kB
> Anonymous:637292 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> {noformat}
> It would be good to exclude these from getSmapBasedRssMemorySize() 
> computation.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5560) Clean up bad exception catching practices in TestYarnClient

2016-08-25 Thread Sean Po (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Po updated YARN-5560:
--
Attachment: YARN-5560.v1.patch

First patch removes occurrences of catch blocks that only invoke Assert.fail.

> Clean up bad exception catching practices in TestYarnClient
> ---
>
> Key: YARN-5560
> URL: https://issues.apache.org/jira/browse/YARN-5560
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Sean Po
>Assignee: Sean Po
> Attachments: YARN-5560.v1.patch
>
>
> In TestYarnClient, tests commonly wrap methods that throw exceptions in a 
> try-catch statement similar to the following:
> {code}
> try {
> client.submitApplication(context);
> } catch (Exception e) {
> Assert.fail("Exception is not expected.");
> }
> {code}
> This hides useful error messages, and surfaces less helpful ones.
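
For contrast, a minimal sketch of the cleaner pattern (not the attached patch itself; the {{client}} and {{context}} objects below are stand-ins for those in the snippet above): declare the exception on the test method, so an unexpected failure surfaces with its original stack trace instead of a generic Assert.fail message.

{code}
import org.junit.Test;

public class TestYarnClientStyleExample {
  // Stand-ins for the real YarnClient and submission context.
  private final YarnClientStub client = new YarnClientStub();
  private final Object context = new Object();

  @Test
  public void testSubmitApplication() throws Exception {
    // No try/catch: if submitApplication throws, JUnit fails the test
    // and reports the real exception and stack trace.
    client.submitApplication(context);
  }

  /** Trivial stub so the example compiles on its own. */
  private static class YarnClientStub {
    void submitApplication(Object ctx) throws Exception {
      // real code would talk to the ResourceManager here
    }
  }
}
{code}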



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5560) Clean up bad exception catching practices in TestYarnClient

2016-08-25 Thread Sean Po (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437460#comment-15437460
 ] 

Sean Po edited comment on YARN-5560 at 8/25/16 6:59 PM:


First patch removes occurrences of catch blocks that only invoke Assert.fail in 
TestYarnClient.


was (Author: seanpo03):
First patch removes occurrences of catch blocks that only invoke Assert.fail.

> Clean up bad exception catching practices in TestYarnClient
> ---
>
> Key: YARN-5560
> URL: https://issues.apache.org/jira/browse/YARN-5560
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Sean Po
>Assignee: Sean Po
> Attachments: YARN-5560.v1.patch
>
>
> In TestYarnClient, tests commonly wrap methods that throw exceptions in a 
> try-catch statement similar to the following:
> {code}
> try {
> client.submitApplication(context);
> } catch (Exception e) {
> Assert.fail("Exception is not expected.");
> }
> {code}
> This hides useful error messages, and surfaces less helpful ones.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5503) [YARN-3368] Add missing hidden files in webapp folder

2016-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437451#comment-15437451
 ] 

Hadoop QA commented on YARN-5503:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
45s {color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 54s 
{color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 16s 
{color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
59s {color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 42s 
{color} | {color:green} YARN-3368 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 35s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 3m 43s {color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 21s 
{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 48s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.security.authentication.util.TestZKSignerSecretProvider |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:6068a84 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12825476/YARN-5503-YARN-3368.0005.patch
 |
| JIRA Issue | YARN-5503 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux a8f4aee3f21d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3368 / e6afd27 |
| Default Java | 1.8.0_101 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12896/artifact/patchprocess/patch-unit-root.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/12896/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12896/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/12896/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12896/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Add missing hidden files in webapp folder
> -
>
> Key: YARN-5503
> URL: 

[jira] [Commented] (YARN-5540) scheduler spends too much time looking at empty priorities

2016-08-25 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437417#comment-15437417
 ] 

Arun Suresh commented on YARN-5540:
---

bq. Unless I'm missing something it's still not handling it. Activation will 
only occur if the ANY request numContainers > 0 because we won't go 
Aah.. true, I mistook the lastRequestContainers for the numContainers. I guess 
the TODO should be moved before the
{{if (request.getNumContainers() <= 0)}} check.

bq. The concurrent task limiting feature of MAPREDUCE-5583 is one example that 
leverages this.
Thanks for the explanation. While this seems like a really cool way of solving 
the limiting problem, it is in my opinion leveraging an undocumented API (the 
fact that queue demand is updated only with the ANY request). For instance, it 
is not even possible to do this using the AMRMClient. One way to do this might 
be to leverage the YARN Reservation System, which allows you to specify task 
parallelism by adjusting the queues dynamically - but we can discuss this 
outside of this JIRA.

bq. There are cases when we want to remove the scheduler key from the 
collection but not remove the map of requests that go with that key
Looks like YARN-1651 does the opposite as well...





> scheduler spends too much time looking at empty priorities
> --
>
> Key: YARN-5540
> URL: https://issues.apache.org/jira/browse/YARN-5540
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, fairscheduler, resourcemanager
>Affects Versions: 2.7.2
>Reporter: Nathan Roberts
>Assignee: Jason Lowe
> Attachments: YARN-5540.001.patch
>
>
> We're starting to see the capacity scheduler run out of scheduling horsepower 
> when running 500-1000 applications on clusters with 4K nodes or so.
> This seems to be amplified by TEZ applications. TEZ applications have many 
> more priorities (sometimes in the hundreds) than typical MR applications and 
> therefore the loop in the scheduler which examines every priority within 
> every running application, starts to be a hotspot. The priorities appear to 
> stay around forever, even when there is no remaining resource request at that 
> priority, causing us to spend a lot of time looking at nothing.
> jstack snippet:
> {noformat}
> "ResourceManager Event Processor" #28 prio=5 os_prio=0 tid=0x7fc2d453e800 
> nid=0x22f3 runnable [0x7fc2a8be2000]
>java.lang.Thread.State: RUNNABLE
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.getResourceRequest(SchedulerApplicationAttempt.java:210)
> - eliminated <0x0005e73e5dc0> (a 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainers(LeafQueue.java:852)
> - locked <0x0005e73e5dc0> (a 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp)
> - locked <0x0003006fcf60> (a 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainersToChildQueues(ParentQueue.java:527)
> - locked <0x0003001b22f8> (a 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainers(ParentQueue.java:415)
> - locked <0x0003001b22f8> (a 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1224)
> - locked <0x000300041e40> (a 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler)
> {noformat}
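
One hedged sketch of the direction a fix could take (hypothetical names, not the attached YARN-5540 patch): keep a set of priorities that still have outstanding containers and let the scheduler iterate only that set, so empty priorities cost nothing in the hot loop.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentSkipListSet;

public class ActivePriorityTracker {
  // priority -> outstanding container count
  private final Map<Integer, Integer> outstanding = new ConcurrentHashMap<>();
  // priorities that currently have at least one outstanding container
  private final ConcurrentSkipListSet<Integer> active =
      new ConcurrentSkipListSet<>();

  public void updateResourceRequest(int priority, int numContainers) {
    if (numContainers > 0) {
      outstanding.put(priority, numContainers);
      active.add(priority);
    } else {
      // Drop the priority as soon as it has no remaining demand, so the
      // scheduler's per-application loop never visits it again.
      outstanding.remove(priority);
      active.remove(priority);
    }
  }

  /** Iterated by the scheduler instead of every priority ever seen. */
  public Iterable<Integer> activePriorities() {
    return active;
  }
}
{code}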



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5563) Add log messages for jobs in ACCEPTED state but not runnable.

2016-08-25 Thread Yufei Gu (JIRA)
Yufei Gu created YARN-5563:
--

 Summary: Add log messages for jobs in ACCEPTED state but not 
runnable.
 Key: YARN-5563
 URL: https://issues.apache.org/jira/browse/YARN-5563
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: fairscheduler
Reporter: Yufei Gu
Assignee: Yufei Gu


Leaf queues maintain a list of runnable and non-runnable apps. FairScheduler 
marks an app non-runnable for different reasons: exceeding (1) queue max apps, 
(2) user max apps, (3) queue maxResources, (4) maxAMShare. It would be nice to 
log the reason an app isn't runnable. The first three are easy to infer, but 
the last one (maxAMShare) is particularly hard. It would be nice to log at 
least that.
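
A minimal sketch of what such a log message could look like (the names below are hypothetical, not FairScheduler's actual internals):

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public final class NonRunnableAppLogger {
  private static final Log LOG = LogFactory.getLog(NonRunnableAppLogger.class);

  static void logNotRunnableReason(String appId, String queue,
      boolean overQueueMaxApps, boolean overUserMaxApps,
      boolean overMaxResources, boolean overMaxAMShare) {
    StringBuilder reason = new StringBuilder();
    if (overQueueMaxApps) { reason.append("queue maxRunningApps exceeded; "); }
    if (overUserMaxApps)  { reason.append("user maxRunningApps exceeded; "); }
    if (overMaxResources) { reason.append("queue maxResources exceeded; "); }
    // The hard-to-infer case called out above.
    if (overMaxAMShare)   { reason.append("queue maxAMShare exceeded; "); }
    if (reason.length() > 0) {
      LOG.info("Application " + appId + " in queue " + queue
          + " is ACCEPTED but not runnable: " + reason);
    }
  }
}
{code}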




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5551) Ignore deleted file mapping from memory computation when smaps is enabled

2016-08-25 Thread Nathan Roberts (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437415#comment-15437415
 ] 

Nathan Roberts commented on YARN-5551:
--

I think the two examples you provided in the description are actually two very 
different cases. Notice how the first has an anonymous size of 0 while the 
second has the entire dirty region marked as anonymous. I think (not certain 
here) that this means in the first case the kernel actually has file-backed 
pages to write to if necessary. In the second case, I feel like anonymous means 
it does NOT have a place to put dirty pages (like maybe the file has been both 
truncated and unlinked). If that's a correct interpretation of "anonymous", then 
I feel like we should be counting the second mapping in the process's memory 
usage.



> Ignore deleted file mapping from memory computation when smaps is enabled
> -
>
> Key: YARN-5551
> URL: https://issues.apache.org/jira/browse/YARN-5551
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: YARN-5551.branch-2.001.patch
>
>
> Currently deleted file mappings are also included in the memory computation 
> when SMAP is enabled. For example:
> {noformat}
> 7f612004a000-7f612004c000 rw-s  00:10 4201507513 
> /dev/shm/HadoopShortCircuitShm_DFSClient_NONMAPREDUCE_-521969216_162_734673185
>  (deleted)
> Size:  8 kB
> Rss:   4 kB
> Pss:   2 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  4 kB
> Private_Clean: 0 kB
> Private_Dirty: 0 kB
> Referenced:4 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> 7f6123f99000-7f6163f99000 rw-p  08:41 211419477  
> /grid/4/hadoop/yarn/local/usercache/root/appcache/application_1466700718395_1249/container_e19_1466700718395_1249_01_03/7389389356021597290.cache
>  (deleted)
> Size:1048576 kB
> Rss:  637292 kB
> Pss:  637292 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  0 kB
> Private_Clean: 0 kB
> Private_Dirty:637292 kB
> Referenced:   637292 kB
> Anonymous:637292 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> {noformat}
> It would be good to exclude these from getSmapBasedRssMemorySize() 
> computation.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5549) AMLauncher.createAMContainerLaunchContext() should not log the command to be launched indiscriminately

2016-08-25 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-5549:
---
Attachment: YARN-5549.001.patch

Here's a patch.  I decided that a separate logger wouldn't make sense unless it 
were for the whole of the {{AMLauncher}} class, which is superfluous since the 
loggers can be configured at the class level.  Configuring the {{AMLauncher}} 
logger not to log is too heavy-handed of a solution, though, so this JIRA is 
still needed.

In the case that the command line logging is disabled, I still log a message, 
just without the risky data, to minimize admin confusion.

I also did a tiny bit of cleanup.  I can't help myself.

> AMLauncher.createAMContainerLaunchContext() should not log the command to be 
> launched indiscriminately
> --
>
> Key: YARN-5549
> URL: https://issues.apache.org/jira/browse/YARN-5549
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-5549.001.patch
>
>
> The command could contain sensitive information, such as keystore passwords 
> or AWS credentials or other.  Instead of logging it as INFO, we should log it 
> as DEBUG and include a property to disable logging it at all.  Logging it to 
> a different logger would also be viable and may create a smaller 
> administrative footprint.
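
As a hedged sketch of the proposal (the property name below is hypothetical, not an actual YarnConfiguration key): log the command only at DEBUG and only when an opt-out property allows it, while still logging a command-free message otherwise.

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;

public class AmCommandLogSketch {
  private static final Log LOG = LogFactory.getLog(AmCommandLogSketch.class);
  // Hypothetical property name, for illustration only.
  static final String LOG_AM_COMMAND =
      "yarn.resourcemanager.am.launch-command-logging.enabled";

  static void logLaunch(Configuration conf, String appId, String command) {
    if (conf.getBoolean(LOG_AM_COMMAND, true)) {
      if (LOG.isDebugEnabled()) {
        LOG.debug("Command to launch AM for " + appId + ": " + command);
      }
    } else {
      // Still record the launch, minus the potentially sensitive command.
      LOG.info("Launching AM container for " + appId
          + " (command logging disabled)");
    }
  }
}
{code}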



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5551) Ignore deleted file mapping from memory computation when smaps is enabled

2016-08-25 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437397#comment-15437397
 ] 

Jason Lowe commented on YARN-5551:
--

bq. Actually, that's just a safety rail to cut down IO here - when the process 
exits, the deleted file pages just disappear.

True, but until that happens it acts just like an undeleted file unless I'm 
missing something.  The process exit case isn't interesting for purposes of 
accounting for how much memory the process is using right now.

> Ignore deleted file mapping from memory computation when smaps is enabled
> -
>
> Key: YARN-5551
> URL: https://issues.apache.org/jira/browse/YARN-5551
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: YARN-5551.branch-2.001.patch
>
>
> Currently deleted file mappings are also included in the memory computation 
> when SMAP is enabled. For example:
> {noformat}
> 7f612004a000-7f612004c000 rw-s  00:10 4201507513 
> /dev/shm/HadoopShortCircuitShm_DFSClient_NONMAPREDUCE_-521969216_162_734673185
>  (deleted)
> Size:  8 kB
> Rss:   4 kB
> Pss:   2 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  4 kB
> Private_Clean: 0 kB
> Private_Dirty: 0 kB
> Referenced:4 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> 7f6123f99000-7f6163f99000 rw-p  08:41 211419477  
> /grid/4/hadoop/yarn/local/usercache/root/appcache/application_1466700718395_1249/container_e19_1466700718395_1249_01_03/7389389356021597290.cache
>  (deleted)
> Size:1048576 kB
> Rss:  637292 kB
> Pss:  637292 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  0 kB
> Private_Clean: 0 kB
> Private_Dirty:637292 kB
> Referenced:   637292 kB
> Anonymous:637292 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> {noformat}
> It would be good to exclude these from getSmapBasedRssMemorySize() 
> computation.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST

2016-08-25 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437394#comment-15437394
 ] 

Varun Saxena commented on YARN-5561:


Sorry, not really a full table scan, but a large scan.
[~rohithsharma], do you have any use case for number 4?

> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and 
> entities via REST
> ---
>
> Key: YARN-5561
> URL: https://issues.apache.org/jira/browse/YARN-5561
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>
> ATSv2 model lacks retrieval of {{list-of-all-apps}}, 
> {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}} via 
> REST APIs. It is also necessary to know about all the entities in an 
> application. These URLs are strongly needed for the Web UI.
> The new REST URLs would be:
> # GET {{/ws/v2/timeline/apps}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}.
> # GET 
> {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}}
> # GET {{/ws/v2/timeline/apps/\{app id\}/entities}} should display list of 
> entities that can be queried.  
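
A usage sketch for endpoint #1 (assuming a timeline reader listening on localhost:8188; adjust host and port for a real cluster). Endpoints 2-4 would be called the same way with the longer paths.

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ListTimelineAppsExample {
  public static void main(String[] args) throws Exception {
    // Assumed reader address; not a documented default.
    URL url = new URL("http://localhost:8188/ws/v2/timeline/apps");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestProperty("Accept", "application/json");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream()))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line); // JSON list of TimelineEntity objects
      }
    } finally {
      conn.disconnect();
    }
  }
}
{code}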



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4220) [Storage implementation] Support getEntities with only Application id but no flow and flow run ID

2016-08-25 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437381#comment-15437381
 ] 

Vrushali C commented on YARN-4220:
--

As discussed in today's call, we can close YARN-5562 as a duplicate of this 
jira. [~rohithsharma], could you please also update this jira's title and 
description as suitable?

> [Storage implementation] Support getEntities with only Application id but no 
> flow and flow run ID
> -
>
> Key: YARN-4220
> URL: https://issues.apache.org/jira/browse/YARN-4220
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
>  Labels: YARN-5355
>
> Currently we're enforcing flow and flowrun id to be non-null values on 
> {{getEntities}}. We can actually query the appToFlow table to figure out an 
> application's flow id and flowrun id if they're missing. This will simplify 
> normal queries. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST

2016-08-25 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437380#comment-15437380
 ] 

Varun Saxena commented on YARN-5561:


Number 4, I think, should be avoided.
This would lead to a full table scan, and the EntityTable can grow quite large.

> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and 
> entities via REST
> ---
>
> Key: YARN-5561
> URL: https://issues.apache.org/jira/browse/YARN-5561
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>
> ATSv2 model lacks retrieval of {{list-of-all-apps}}, 
> {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}} via 
> REST APIs. It is also necessary to know about all the entities in an 
> application. These URLs are strongly needed for the Web UI.
> The new REST URLs would be:
> # GET {{/ws/v2/timeline/apps}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}.
> # GET 
> {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}}
> # GET {{/ws/v2/timeline/apps/\{app id\}/entities}} should display list of 
> entities that can be queried.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5551) Ignore deleted file mapping from memory computation when smaps is enabled

2016-08-25 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437379#comment-15437379
 ] 

Gopal V commented on YARN-5551:
---

bq.  I guess where I'm getting hung up is on the deleted part. Unless I'm 
mistaken, the OS isn't going to care whether the file is deleted or not when 
the process still has a mapping to it.

Actually, that's just a safety rail to cut down IO here - when the process 
exits, the deleted file pages just disappear.

bq. So in that sense I don't see why we're special-casing deleted files.

We can apply this patch to all file mappings actually - the special-casing was 
primarily to cut down the impact of the patch and reduce unintended 
consequences.

For non-deleted files, I'd like IO isolation as well (i.e. the IO impact lasts 
past process-death), but that's a harder problem to solve in the 2.7.x branch 
(definitely to be tackled in 3.x, and specifically for a modern cgroups setup).

> Ignore deleted file mapping from memory computation when smaps is enabled
> -
>
> Key: YARN-5551
> URL: https://issues.apache.org/jira/browse/YARN-5551
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: YARN-5551.branch-2.001.patch
>
>
> Currently deleted file mappings are also included in the memory computation 
> when SMAP is enabled. For example:
> {noformat}
> 7f612004a000-7f612004c000 rw-s  00:10 4201507513 
> /dev/shm/HadoopShortCircuitShm_DFSClient_NONMAPREDUCE_-521969216_162_734673185
>  (deleted)
> Size:  8 kB
> Rss:   4 kB
> Pss:   2 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  4 kB
> Private_Clean: 0 kB
> Private_Dirty: 0 kB
> Referenced:4 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> 7f6123f99000-7f6163f99000 rw-p  08:41 211419477  
> /grid/4/hadoop/yarn/local/usercache/root/appcache/application_1466700718395_1249/container_e19_1466700718395_1249_01_03/7389389356021597290.cache
>  (deleted)
> Size:1048576 kB
> Rss:  637292 kB
> Pss:  637292 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  0 kB
> Private_Clean: 0 kB
> Private_Dirty:637292 kB
> Referenced:   637292 kB
> Anonymous:637292 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> {noformat}
> It would be good to exclude these from getSmapBasedRssMemorySize() 
> computation.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4343) Need to support Application History Server on ATSV2

2016-08-25 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437377#comment-15437377
 ] 

Vrushali C commented on YARN-4343:
--


YARN-5561 proposes some new REST endpoints which might help with this jira.

> Need to support Application History Server on ATSV2
> ---
>
> Key: YARN-4343
> URL: https://issues.apache.org/jira/browse/YARN-4343
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>  Labels: YARN-5355
>
> AHS is used by the CLI and Webproxy (REST); if the application-related 
> information is not found in the RM, then it tries to fetch from AHS and show



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST

2016-08-25 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437375#comment-15437375
 ] 

Vrushali C commented on YARN-5561:
--

+1 on adding more REST endpoints. It makes things easier to query and script 
for. In the list above, perhaps 2 and 3 can be combined such that 3 becomes 
query params in 2.

As discussed in today's call, this jira might help with YARN-4343.

> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and 
> entities via REST
> ---
>
> Key: YARN-5561
> URL: https://issues.apache.org/jira/browse/YARN-5561
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>
> ATSv2 model lacks retrieval of {{list-of-all-apps}}, 
> {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}} via 
> REST APIs. It is also necessary to know about all the entities in an 
> application. These URLs are strongly needed for the Web UI.
> The new REST URLs would be:
> # GET {{/ws/v2/timeline/apps}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}.
> # GET 
> {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}}
> # GET {{/ws/v2/timeline/apps/\{app id\}/entities}} should display list of 
> entities that can be queried.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5504) [YARN-3368] Fix YARN UI build pom.xml

2016-08-25 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5504:
--
Summary: [YARN-3368] Fix YARN UI build pom.xml  (was: [YARN-3368] Fix the 
YARN UI build pom.xml)

> [YARN-3368] Fix YARN UI build pom.xml
> -
>
> Key: YARN-5504
> URL: https://issues.apache.org/jira/browse/YARN-5504
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
> Attachments: YARN-5504-YARN-3368-0001.patch, 
> YARN-5504-YARN-3368-0002.patch
>
>
> - Disable tests as we don't have UTs.
> - Disable lint & hint as they are not followed by the current codebase, and 
> are throwing build errors.
> - Disable clearing of UI package on building, so that n/w is required only in 
> the first build.
> - Remove duplicate bower installs.
> -Change the default packaging.type to 'war' as our UI is a Web application- - 
> Will keep it in the profile
> -Final war should just contain the end result of the build and not all files-
> [~wangda] [~vinodkv] [~sunilg] please share your thoughts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5504) [YARN-3368] Fix the YARN UI build pom.xml

2016-08-25 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5504:
--
Summary: [YARN-3368] Fix the YARN UI build pom.xml  (was: [YARN-3368] Fix 
the YARN UI build)

> [YARN-3368] Fix the YARN UI build pom.xml
> -
>
> Key: YARN-5504
> URL: https://issues.apache.org/jira/browse/YARN-5504
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
> Attachments: YARN-5504-YARN-3368-0001.patch, 
> YARN-5504-YARN-3368-0002.patch
>
>
> - Disable tests as we don't have UTs.
> - Disable lint & hint as they are not followed by the current codebase, and 
> are throwing build errors.
> - Disable clearing of UI package on building, so that n/w is required only in 
> the first build.
> - Remove duplicate bower installs.
> -Change the default packaging.type to 'war' as our UI is a Web application- - 
> Will keep it in the profile
> -Final war should just contain the end result of the build and not all files-
> [~wangda] [~vinodkv] [~sunilg] please share your thoughts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5551) Ignore deleted file mapping from memory computation when smaps is enabled

2016-08-25 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437316#comment-15437316
 ] 

Jason Lowe commented on YARN-5551:
--

Special-casing buffer cache pages is one thing, but I guess where I'm getting 
hung up is on the deleted part.  Unless I'm mistaken, the OS isn't going to 
care whether the file is deleted or not when the process still has a mapping to 
it.  Dirty pages will still be flushed to the store, and if the now clean page 
is discarded to make room for something else and the process comes back to 
touch it again, we need that updated stored data to recreate the page.  So in 
that sense I don't see why we're special-casing deleted files.



> Ignore deleted file mapping from memory computation when smaps is enabled
> -
>
> Key: YARN-5551
> URL: https://issues.apache.org/jira/browse/YARN-5551
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: YARN-5551.branch-2.001.patch
>
>
> Currently deleted file mappings are also included in the memory computation 
> when SMAP is enabled. For example:
> {noformat}
> 7f612004a000-7f612004c000 rw-s  00:10 4201507513 
> /dev/shm/HadoopShortCircuitShm_DFSClient_NONMAPREDUCE_-521969216_162_734673185
>  (deleted)
> Size:  8 kB
> Rss:   4 kB
> Pss:   2 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  4 kB
> Private_Clean: 0 kB
> Private_Dirty: 0 kB
> Referenced:4 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> 7f6123f99000-7f6163f99000 rw-p  08:41 211419477  
> /grid/4/hadoop/yarn/local/usercache/root/appcache/application_1466700718395_1249/container_e19_1466700718395_1249_01_03/7389389356021597290.cache
>  (deleted)
> Size:1048576 kB
> Rss:  637292 kB
> Pss:  637292 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  0 kB
> Private_Clean: 0 kB
> Private_Dirty:637292 kB
> Referenced:   637292 kB
> Anonymous:637292 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> {noformat}
> It would be good to exclude these from getSmapBasedRssMemorySize() 
> computation.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5504) [YARN-3368] Fix the YARN UI build

2016-08-25 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15436967#comment-15436967
 ] 

Sunil G edited comment on YARN-5504 at 8/25/16 5:46 PM:


I tested the patch with {{keep-ui-build-cache}}, and it works fine.

However, I am not sure whether we need {{test}}. I think we can keep another 
ticket open for the UTs themselves.
I think we must have some UTs, and I'll help to get them in soon.

+1 for the current patch; I will commit if there are no other objections. 


was (Author: sunilg):
I tested the patch with {{keep-ui-build-cache}}, and it works fine.

However, I am not sure whether we need {{test}}. I think we can keep another 
ticket open for the UTs themselves.
I think we must have some UTs, and I'll help to get them in soon.

+1 for the current patch; I will commit if there are no other objections. I will 
wait for a day.

> [YARN-3368] Fix the YARN UI build
> -
>
> Key: YARN-5504
> URL: https://issues.apache.org/jira/browse/YARN-5504
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
> Attachments: YARN-5504-YARN-3368-0001.patch, 
> YARN-5504-YARN-3368-0002.patch
>
>
> - Disable tests as we don't have UTs.
> - Disable lint & hint as they are not followed by the current codebase, and 
> are throwing build errors.
> - Disable clearing of UI package on building, so that n/w is required only in 
> the first build.
> - Remove duplicate bower installs.
> -Change the default packaging.type to 'war' as our UI is a Web application- - 
> Will keep it in the profile
> -Final war should just contain the end result of the build and not all files-
> [~wangda] [~vinodkv] [~sunilg] please share your thoughts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5503) [YARN-3368] Add missing hidden files in webapp folder

2016-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437277#comment-15437277
 ] 

Hadoop QA commented on YARN-5503:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 32s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
9s {color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 9s 
{color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 39s 
{color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
1s {color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 53s 
{color} | {color:green} YARN-3368 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 47s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 121m 56s 
{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 23s 
{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 184m 56s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:6068a84 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12825476/YARN-5503-YARN-3368.0005.patch
 |
| JIRA Issue | YARN-5503 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux c1bc80840fb1 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3368 / e6afd27 |
| Default Java | 1.8.0_101 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12893/artifact/patchprocess/patch-unit-root.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/12893/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12893/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/12893/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12893/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Add missing hidden files in webapp folder
> -
>
> Key: YARN-5503
> URL: 

[jira] [Updated] (YARN-5549) AMLauncher.createAMContainerLaunchContext() should not log the command to be launched indiscriminately

2016-08-25 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-5549:
---
Summary: AMLauncher.createAMContainerLaunchContext() should not log the 
command to be launched indiscriminately  (was: createAMContainerLaunchContext() 
should not log the command to be launched indiscriminately)

> AMLauncher.createAMContainerLaunchContext() should not log the command to be 
> launched indiscriminately
> --
>
> Key: YARN-5549
> URL: https://issues.apache.org/jira/browse/YARN-5549
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
>
> The command could contain sensitive information, such as keystore passwords 
> or AWS credentials or other.  Instead of logging it as INFO, we should log it 
> as DEBUG and include a property to disable logging it at all.  Logging it to 
> a different logger would also be viable and may create a smaller 
> administrative footprint.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5551) Ignore deleted file mapping from memory computation when smaps is enabled

2016-08-25 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437256#comment-15437256
 ] 

Chris Nauroth commented on YARN-5551:
-

OK, I get it now.  Thanks, [~gopalv].  I'd be fine proceeding with the change.  
I'm not online until after Labor Day, so I can't do a full code review, test 
and commit.  If anyone else wants to do it, please don't wait for me.

> Ignore deleted file mapping from memory computation when smaps is enabled
> -
>
> Key: YARN-5551
> URL: https://issues.apache.org/jira/browse/YARN-5551
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: YARN-5551.branch-2.001.patch
>
>
> Currently deleted file mappings are also included in the memory computation 
> when SMAP is enabled. For example:
> {noformat}
> 7f612004a000-7f612004c000 rw-s  00:10 4201507513 
> /dev/shm/HadoopShortCircuitShm_DFSClient_NONMAPREDUCE_-521969216_162_734673185
>  (deleted)
> Size:  8 kB
> Rss:   4 kB
> Pss:   2 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  4 kB
> Private_Clean: 0 kB
> Private_Dirty: 0 kB
> Referenced:4 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> 7f6123f99000-7f6163f99000 rw-p  08:41 211419477  
> /grid/4/hadoop/yarn/local/usercache/root/appcache/application_1466700718395_1249/container_e19_1466700718395_1249_01_03/7389389356021597290.cache
>  (deleted)
> Size:1048576 kB
> Rss:  637292 kB
> Pss:  637292 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  0 kB
> Private_Clean: 0 kB
> Private_Dirty:637292 kB
> Referenced:   637292 kB
> Anonymous:637292 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> {noformat}
> It would be good to exclude these from getSmapBasedRssMemorySize() 
> computation.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5389) TestYarnClient#testReservationDelete fails

2016-08-25 Thread Sean Po (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437252#comment-15437252
 ] 

Sean Po commented on YARN-5389:
---

Thanks for the review [~jlowe]!

> TestYarnClient#testReservationDelete fails
> --
>
> Key: YARN-5389
> URL: https://issues.apache.org/jira/browse/YARN-5389
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Rohith Sharma K S
>Assignee: Sean Po
>  Labels: test
> Fix For: 2.8.0
>
> Attachments: YARN-5389.v1.patch, YARN-5389.v2.patch, 
> YARN-5389.v3.patch, YARN-5389.v4.patch
>
>
> In build report 
> [report|https://builds.apache.org/job/PreCommit-YARN-Build/12341/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt],
>  below test fails. 
> {noformat}
> Tests run: 28, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 26.066 sec 
> <<< FAILURE! - in org.apache.hadoop.yarn.client.api.impl.TestYarnClient
> testReservationDelete(org.apache.hadoop.yarn.client.api.impl.TestYarnClient)  
> Time elapsed: 2.213 sec  <<< FAILURE!
> java.lang.AssertionError: Exhausted attempts in checking if node capacity was 
> added to the plan
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.setupMiniYARNCluster(TestYarnClient.java:1222)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.testReservationDelete(TestYarnClient.java:1584)
> testListReservationsByInvalidTimeInterval(org.apache.hadoop.yarn.client.api.impl.TestYarnClient)
>   Time elapsed: 2.215 sec  <<< FAILURE!
> java.lang.AssertionError: Exhausted attempts in checking if node capacity was 
> added to the plan
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.setupMiniYARNCluster(TestYarnClient.java:1222)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.testListReservationsByInvalidTimeInterval(TestYarnClient.java:1444)
> testListReservationsByTimeIntervalContainingNoReservations(org.apache.hadoop.yarn.client.api.impl.TestYarnClient)
>   Time elapsed: 2.206 sec  <<< FAILURE!
> java.lang.AssertionError: Exhausted attempts in checking if node capacity was 
> added to the plan
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.setupMiniYARNCluster(TestYarnClient.java:1222)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.testListReservationsByTimeIntervalContainingNoReservations(TestYarnClient.java:1494)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3926) Extend the YARN resource model for easier resource-type management and profiles

2016-08-25 Thread Xiaohua Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437248#comment-15437248
 ] 

Xiaohua Liang commented on YARN-3926:
-

Is there a set of readily available YARN configuration files (etc/hadoop/*.xml) 
that I can use to do some functionality testing on this branch?

> Extend the YARN resource model for easier resource-type management and 
> profiles
> ---
>
> Key: YARN-3926
> URL: https://issues.apache.org/jira/browse/YARN-3926
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: Proposal for modifying resource model and profiles.pdf
>
>
> Currently, there are efforts to add support for various resource-types such 
> as disk (YARN-2139), network (YARN-2140), and HDFS bandwidth (YARN-2681). Each 
> of these efforts aims to add support for a new resource type and is fairly 
> involved. In addition, once support is added, it becomes harder for 
> users to specify the resources they need. All existing jobs have to be 
> modified, or have to use the minimum allocation.
> This ticket is a proposal to extend the YARN resource model to a more 
> flexible model which makes it easier to support additional resource-types. It 
> also considers the related aspect of “resource profiles” which allow users to 
> easily specify the various resources they need for any given container.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5557) Add localize API to the ContainerManagementProtocol

2016-08-25 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437208#comment-15437208
 ] 

Junping Du commented on YARN-5557:
--

v4 patch LGTM. +1. Will commit it tomorrow if no further comments.

> Add localize API to the ContainerManagementProtocol
> ---
>
> Key: YARN-5557
> URL: https://issues.apache.org/jira/browse/YARN-5557
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5557.1.patch, YARN-5557.2.patch, YARN-5557.3.patch, 
> YARN-5557.4.patch
>
>
> A new localize API for localizing new resources. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5551) Ignore deleted file mapping from memory computation when smaps is enabled

2016-08-25 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437166#comment-15437166
 ] 

Gopal V edited comment on YARN-5551 at 8/25/16 4:29 PM:


bq. The storage behind that mapping will not be freed even though the path has 
been deleted because this process still has an active mapping against it.

That's exactly the point - these are actually not memory pages, these are pages 
borrowed from the buffer-cache. Some of them are dirty and some of them are 
clean, which implies that they are not actually memory consumed by the process 
if there's any memory pressure.

The ideal mechanism for YARN to react would be to force a dirty flush for the 
specific process to reduce its memory footprint instead of always killing the 
process when the observed memory footprint is larger - killing a process is not 
the only way to reclaim memory from a process.

Operating purely with kill signals is genuinely overkill. 

This implementation is trying to be more forgiving of a process which has a 
large number of clean pages in memory backed by a disk cache file, which are 
available to the process via .read or .map, but the disk buffer pages used by 
the OS are counted differently by YARN if it uses .map().

The underlying reality is the same even for dirty pages as the writes are being 
buffered into the buffer cache anyway, except the write() syscall moves it out 
of the process space faster than an .mmap + msync.


was (Author: gopalv):
bq. The storage behind that mapping will not be freed even though the path has 
been deleted because this process still has an active mapping against it.

That's exactly the point - these are actually not memory pages, these are pages 
borrowed from the buffer-cache. Some of them are dirty and some of them are 
clean, which implies that they are not actually memory consumed by the process 
if there's any memory pressure.

The ideal mechanism for YARN to react would be to force a dirty flush for the 
specific process to reduce its memory footprint instead of always killing the 
process when the observed memory footprint is larger - killing a process is not 
the only way to reclaim memory from a process.

Operating purely with kill signals is genuinely overkill. 

This implementation is trying to be more forgiving of a process which has a 
large number of clean pages in memory backed by a disk cache file, which are 
available to the process via .read or .map, but the disk buffer pages used by 
the OS are counted differently by YARN if it uses .map().

> Ignore deleted file mapping from memory computation when smaps is enabled
> -
>
> Key: YARN-5551
> URL: https://issues.apache.org/jira/browse/YARN-5551
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: YARN-5551.branch-2.001.patch
>
>
> Currently, deleted file mappings are also included in the memory computation 
> when smaps is enabled. For example:
> {noformat}
> 7f612004a000-7f612004c000 rw-s  00:10 4201507513 
> /dev/shm/HadoopShortCircuitShm_DFSClient_NONMAPREDUCE_-521969216_162_734673185
>  (deleted)
> Size:  8 kB
> Rss:   4 kB
> Pss:   2 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  4 kB
> Private_Clean: 0 kB
> Private_Dirty: 0 kB
> Referenced:4 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> 7f6123f99000-7f6163f99000 rw-p  08:41 211419477  
> /grid/4/hadoop/yarn/local/usercache/root/appcache/application_1466700718395_1249/container_e19_1466700718395_1249_01_03/7389389356021597290.cache
>  (deleted)
> Size:1048576 kB
> Rss:  637292 kB
> Pss:  637292 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  0 kB
> Private_Clean: 0 kB
> Private_Dirty:637292 kB
> Referenced:   637292 kB
> Anonymous:637292 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> {noformat}
> It would be good to exclude these from getSmapBasedRssMemorySize() 
> computation.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5551) Ignore deleted file mapping from memory computation when smaps is enabled

2016-08-25 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437166#comment-15437166
 ] 

Gopal V commented on YARN-5551:
---

bq. The storage behind that mapping will not be freed even though the path has 
been deleted because this process still has an active mapping against it.

That's exactly the point - these are actually not memory pages, these are pages 
borrowed from the buffer-cache. Some of them are dirty and some of them are 
clean, which implies that they are not actually memory consumed by the process 
if there's any memory pressure.

The ideal mechanism for YARN to react would be to force a dirty flush for the 
specific process to reduce its memory footprint instead of always killing the 
process when the observed memory footprint is larger - killing a process is not 
the only way to reclaim memory from a process.

Operating purely with kill signals is genuinely overkill. 

This implementation is trying to be more forgiving of a process which has a 
large number of clean pages in memory backed by a disk cache file, which are 
available to the process via .read or .map, but the disk buffer pages used by 
the OS are counted differently by YARN if it uses .map().
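
To make the suggested direction concrete, here is a hedged sketch of an 
accounting that charges a process only for pages the kernel cannot simply drop 
under memory pressure (dirty and swapped pages), rather than for every resident 
page. The choice of fields is an assumption read off the smaps output quoted 
below, not an agreed formula.
{noformat}
class UnreclaimableMemorySketch {
  /** Counts only dirty and swapped kB from smaps field lines; clean
   *  file-backed pages are left out, since the kernel can reclaim them
   *  from the buffer cache at any time. */
  static long unreclaimableKb(Iterable<String> smapsLines) {
    long kb = 0;
    for (String line : smapsLines) {
      if (line.startsWith("Private_Dirty:") || line.startsWith("Shared_Dirty:")
          || line.startsWith("Swap:")) {
        kb += Long.parseLong(line.split("\\s+")[1]);
      }
    }
    return kb;
  }
}
{noformat}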

> Ignore deleted file mapping from memory computation when smaps is enabled
> -
>
> Key: YARN-5551
> URL: https://issues.apache.org/jira/browse/YARN-5551
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: YARN-5551.branch-2.001.patch
>
>
> Currently, deleted file mappings are also included in the memory computation 
> when smaps is enabled. For example:
> {noformat}
> 7f612004a000-7f612004c000 rw-s  00:10 4201507513 
> /dev/shm/HadoopShortCircuitShm_DFSClient_NONMAPREDUCE_-521969216_162_734673185
>  (deleted)
> Size:  8 kB
> Rss:   4 kB
> Pss:   2 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  4 kB
> Private_Clean: 0 kB
> Private_Dirty: 0 kB
> Referenced:4 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> 7f6123f99000-7f6163f99000 rw-p  08:41 211419477  
> /grid/4/hadoop/yarn/local/usercache/root/appcache/application_1466700718395_1249/container_e19_1466700718395_1249_01_03/7389389356021597290.cache
>  (deleted)
> Size:1048576 kB
> Rss:  637292 kB
> Pss:  637292 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  0 kB
> Private_Clean: 0 kB
> Private_Dirty:637292 kB
> Referenced:   637292 kB
> Anonymous:637292 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> {noformat}
> It would be good to exclude these from getSmapBasedRssMemorySize() 
> computation.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5557) Add localize API to the ContainerManagementProtocol

2016-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437164#comment-15437164
 ] 

Hadoop QA commented on YARN-5557:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 48s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 54s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 46s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 28s 
{color} | {color:red} root: The patch generated 7 new + 271 unchanged - 0 fixed 
= 278 total (was 271) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 17s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 8s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 37m 17s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 28s 
{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 113m 40s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12825479/YARN-5557.4.patch |
| JIRA Issue | YARN-5557 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux f6fd1117302e 3.13.0-92-generic #139-Ubuntu SMP 

[jira] [Commented] (YARN-5486) Update OpportunisticContainerAllocatorAMService::allocate method to handle OPPORTUNISTIC container requests

2016-08-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437135#comment-15437135
 ] 

Hadoop QA commented on YARN-5486:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 15s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 14s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 2m 14s {color} 
| {color:red} hadoop-yarn-project_hadoop-yarn generated 2 new + 35 unchanged - 
0 fixed = 37 total (was 35) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 36s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 30 
new + 86 unchanged - 0 fixed = 116 total (was 86) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 14s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 33m 48s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 28s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 52s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
|   | 
hadoop.yarn.server.resourcemanager.TestOpportunisticContainerAllocatorAMService 
|
|   | hadoop.yarn.client.api.impl.TestYarnClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12825477/YARN-5486.001.patch |
| JIRA Issue | YARN-5486 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 92bd012dc360 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Commented] (YARN-5551) Ignore deleted file mapping from memory computation when smaps is enabled

2016-08-25 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437095#comment-15437095
 ] 

Chris Nauroth commented on YARN-5551:
-

My understanding agrees with Jason's last comment.  The mapping could last well 
past the deletion of the underlying file, maybe even for the whole lifetime of 
the process, so it's correct to include it in the accounting.

> Ignore deleted file mapping from memory computation when smaps is enabled
> -
>
> Key: YARN-5551
> URL: https://issues.apache.org/jira/browse/YARN-5551
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: YARN-5551.branch-2.001.patch
>
>
> Currently, deleted file mappings are also included in the memory computation 
> when smaps is enabled. For example:
> {noformat}
> 7f612004a000-7f612004c000 rw-s  00:10 4201507513 
> /dev/shm/HadoopShortCircuitShm_DFSClient_NONMAPREDUCE_-521969216_162_734673185
>  (deleted)
> Size:  8 kB
> Rss:   4 kB
> Pss:   2 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  4 kB
> Private_Clean: 0 kB
> Private_Dirty: 0 kB
> Referenced:4 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> 7f6123f99000-7f6163f99000 rw-p  08:41 211419477  
> /grid/4/hadoop/yarn/local/usercache/root/appcache/application_1466700718395_1249/container_e19_1466700718395_1249_01_03/7389389356021597290.cache
>  (deleted)
> Size:1048576 kB
> Rss:  637292 kB
> Pss:  637292 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  0 kB
> Private_Clean: 0 kB
> Private_Dirty:637292 kB
> Referenced:   637292 kB
> Anonymous:637292 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> {noformat}
> It would be good to exclude these from getSmapBasedRssMemorySize() 
> computation.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5551) Ignore deleted file mapping from memory computation when smaps is enabled

2016-08-25 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437067#comment-15437067
 ] 

Jason Lowe commented on YARN-5551:
--

The "deleted" here refers to the fact that the file path no longer exists, but 
the mapping is still valid.  Even though the file path no longer exists the 
process really is still using the memory described in that section of the smaps 
output.  Therefore it is correct to account for that memory usage against the 
process.  The storage behind that mapping will _not_ be freed even though the 
path has been deleted because this process still has an active mapping against 
it.

IMHO this should be closed as invalid.
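
A small demo of the point, assuming a Linux JVM: the mapping stays live, and 
the memory stays charged to the process, even after the backing path is 
unlinked.
{noformat}
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;

public class DeletedMappingDemo {
  public static void main(String[] args) throws Exception {
    Path path = Files.createTempFile("mapped", ".cache");
    try (RandomAccessFile raf = new RandomAccessFile(path.toFile(), "rw")) {
      MappedByteBuffer buf =
          raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 4096);
      Files.delete(path);      // region now shows "(deleted)" in smaps
      buf.put(0, (byte) 42);   // the mapping is still live and writable
      System.out.println(buf.get(0));  // prints 42
    }
  }
}
{noformat}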


> Ignore deleted file mapping from memory computation when smaps is enabled
> -
>
> Key: YARN-5551
> URL: https://issues.apache.org/jira/browse/YARN-5551
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: YARN-5551.branch-2.001.patch
>
>
> Currently, deleted file mappings are also included in the memory computation 
> when smaps is enabled. For example:
> {noformat}
> 7f612004a000-7f612004c000 rw-s  00:10 4201507513 
> /dev/shm/HadoopShortCircuitShm_DFSClient_NONMAPREDUCE_-521969216_162_734673185
>  (deleted)
> Size:  8 kB
> Rss:   4 kB
> Pss:   2 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  4 kB
> Private_Clean: 0 kB
> Private_Dirty: 0 kB
> Referenced:4 kB
> Anonymous: 0 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> MMUPageSize:   4 kB
> 7f6123f99000-7f6163f99000 rw-p  08:41 211419477  
> /grid/4/hadoop/yarn/local/usercache/root/appcache/application_1466700718395_1249/container_e19_1466700718395_1249_01_03/7389389356021597290.cache
>  (deleted)
> Size:1048576 kB
> Rss:  637292 kB
> Pss:  637292 kB
> Shared_Clean:  0 kB
> Shared_Dirty:  0 kB
> Private_Clean: 0 kB
> Private_Dirty:637292 kB
> Referenced:   637292 kB
> Anonymous:637292 kB
> AnonHugePages: 0 kB
> Swap:  0 kB
> KernelPageSize:4 kB
> {noformat}
> It would be good to exclude these from getSmapBasedRssMemorySize() 
> computation.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5430) Return container's ip and host from NM ContainerStatus call

2016-08-25 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437050#comment-15437050
 ] 

Billie Rinaldi commented on YARN-5430:
--

+1, this fixes the issues I was seeing.

> Return container's ip and host from NM ContainerStatus call
> ---
>
> Key: YARN-5430
> URL: https://issues.apache.org/jira/browse/YARN-5430
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5430-branch-2.patch, YARN-5430.1.patch, 
> YARN-5430.2.patch, YARN-5430.3.patch, YARN-5430.4.patch, YARN-5430.5.patch, 
> YARN-5430.6.patch, YARN-5430.7.patch, YARN-5430.8.patch, 
> YARN-5430.9.branch-2.patch, YARN-5430.9.patch
>
>
> In YARN-4757, we introduced a DNS mechanism for containers. That's based on 
> the assumption that we can get the container's IP and host information and 
> store it in the registry service. This JIRA aims to get the container's IP 
> and host from the NM, primarily for Docker containers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5389) TestYarnClient#testReservationDelete fails

2016-08-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437033#comment-15437033
 ] 

Hudson commented on YARN-5389:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10345 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10345/])
YARN-5389. TestYarnClient#testReservationDelete fails. Contributed by (jlowe: 
rev 3d86110a5ccfdaff8671fb6ad8f67b4ab66f33da)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestYarnClient.java


> TestYarnClient#testReservationDelete fails
> --
>
> Key: YARN-5389
> URL: https://issues.apache.org/jira/browse/YARN-5389
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Rohith Sharma K S
>Assignee: Sean Po
>  Labels: test
> Attachments: YARN-5389.v1.patch, YARN-5389.v2.patch, 
> YARN-5389.v3.patch, YARN-5389.v4.patch
>
>
> In build report 
> [report|https://builds.apache.org/job/PreCommit-YARN-Build/12341/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt],
>  below test fails. 
> {noformat}
> Tests run: 28, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 26.066 sec 
> <<< FAILURE! - in org.apache.hadoop.yarn.client.api.impl.TestYarnClient
> testReservationDelete(org.apache.hadoop.yarn.client.api.impl.TestYarnClient)  
> Time elapsed: 2.213 sec  <<< FAILURE!
> java.lang.AssertionError: Exhausted attempts in checking if node capacity was 
> added to the plan
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.setupMiniYARNCluster(TestYarnClient.java:1222)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.testReservationDelete(TestYarnClient.java:1584)
> testListReservationsByInvalidTimeInterval(org.apache.hadoop.yarn.client.api.impl.TestYarnClient)
>   Time elapsed: 2.215 sec  <<< FAILURE!
> java.lang.AssertionError: Exhausted attempts in checking if node capacity was 
> added to the plan
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.setupMiniYARNCluster(TestYarnClient.java:1222)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.testListReservationsByInvalidTimeInterval(TestYarnClient.java:1444)
> testListReservationsByTimeIntervalContainingNoReservations(org.apache.hadoop.yarn.client.api.impl.TestYarnClient)
>   Time elapsed: 2.206 sec  <<< FAILURE!
> java.lang.AssertionError: Exhausted attempts in checking if node capacity was 
> added to the plan
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.setupMiniYARNCluster(TestYarnClient.java:1222)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.testListReservationsByTimeIntervalContainingNoReservations(TestYarnClient.java:1494)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4805) Don't go through all schedulers in ParameterizedTestBase

2016-08-25 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437014#comment-15437014
 ] 

Arun Suresh commented on YARN-4805:
---

bq. In hind sight, it likely would have sufficed to automatically create this 
allocations file it did not exist.
Doesn't the {{configureFairScheduler()}} method in the 
_ParametrizedSchedulerTestBase_ do this already ? But yes, I agree specific 
tests might need specific configuration. I see that the 
{{TestReservationSystem}} does a good job of configuring the FS specifically 
for each test case.

bq. MockRM/RM, which might not be scheduler dependent at all..
I agree.

bq. I am okay with reverting this, and instead updating FairScheduler to create 
the allocations file. We could shrink the number of tests that extend 
ParameterizedTests to only those that are scheduler dependent.
Maybe as a first step, you can revert and then remove the Scheduler.FAIR from 
the static parameters. We can then find a way to pass these parameters 
externally.

> Don't go through all schedulers in ParameterizedTestBase
> 
>
> Key: YARN-4805
> URL: https://issues.apache.org/jira/browse/YARN-4805
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Fix For: 2.9.0
>
> Attachments: yarn-4805-1.patch
>
>
> ParameterizedSchedulerTestBase was created to make sure tests that were 
> written with CapacityScheduler in mind don't fail when run against 
> FairScheduler. Before this was introduced, tests would fail because 
> FairScheduler requires an allocation file. 
> However, the tests that extend it take about 10 minutes per scheduler. So, 
> instead of running against both schedulers, we could set up the scheduler 
> appropriately so the tests pass against either one.
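
As a sketch of the idea, here is roughly how a parameterized base class can run 
each test once per scheduler and generate a minimal allocations file for the 
FairScheduler case. It assumes JUnit 4; the class and enum names are 
illustrative, not the actual ParameterizedSchedulerTestBase code.
{noformat}
import java.io.File;
import java.io.PrintWriter;
import java.util.Arrays;
import java.util.Collection;
import org.junit.Before;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

@RunWith(Parameterized.class)
public abstract class SchedulerTestBaseSketch {
  public enum SchedulerType { CAPACITY, FAIR }

  @Parameterized.Parameter
  public SchedulerType schedulerType;

  @Parameterized.Parameters(name = "{0}")
  public static Collection<Object[]> schedulers() {
    return Arrays.asList(new Object[][] {
        {SchedulerType.CAPACITY}, {SchedulerType.FAIR}});
  }

  @Before
  public void setUpScheduler() throws Exception {
    if (schedulerType == SchedulerType.FAIR) {
      // FairScheduler will not start without an allocations file, so write a
      // minimal empty one rather than letting every FairScheduler run fail.
      File alloc = File.createTempFile("fair-scheduler", ".xml");
      try (PrintWriter out = new PrintWriter(alloc, "UTF-8")) {
        out.println("<?xml version=\"1.0\"?>");
        out.println("<allocations/>");
      }
      // Point the scheduler at it, e.g.:
      // conf.set("yarn.scheduler.fair.allocation.file", alloc.getAbsolutePath());
    }
  }
}
{noformat}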



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5389) TestYarnClient#testReservationDelete fails

2016-08-25 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-5389:
-
Summary: TestYarnClient#testReservationDelete fails  (was: 
TestYarnClient#testReservationDelete fails in trunk)

> TestYarnClient#testReservationDelete fails
> --
>
> Key: YARN-5389
> URL: https://issues.apache.org/jira/browse/YARN-5389
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Rohith Sharma K S
>Assignee: Sean Po
>  Labels: test
> Attachments: YARN-5389.v1.patch, YARN-5389.v2.patch, 
> YARN-5389.v3.patch, YARN-5389.v4.patch
>
>
> In build report 
> [report|https://builds.apache.org/job/PreCommit-YARN-Build/12341/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt],
>  below test fails. 
> {noformat}
> Tests run: 28, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 26.066 sec 
> <<< FAILURE! - in org.apache.hadoop.yarn.client.api.impl.TestYarnClient
> testReservationDelete(org.apache.hadoop.yarn.client.api.impl.TestYarnClient)  
> Time elapsed: 2.213 sec  <<< FAILURE!
> java.lang.AssertionError: Exhausted attempts in checking if node capacity was 
> added to the plan
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.setupMiniYARNCluster(TestYarnClient.java:1222)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.testReservationDelete(TestYarnClient.java:1584)
> testListReservationsByInvalidTimeInterval(org.apache.hadoop.yarn.client.api.impl.TestYarnClient)
>   Time elapsed: 2.215 sec  <<< FAILURE!
> java.lang.AssertionError: Exhausted attempts in checking if node capacity was 
> added to the plan
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.setupMiniYARNCluster(TestYarnClient.java:1222)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.testListReservationsByInvalidTimeInterval(TestYarnClient.java:1444)
> testListReservationsByTimeIntervalContainingNoReservations(org.apache.hadoop.yarn.client.api.impl.TestYarnClient)
>   Time elapsed: 2.206 sec  <<< FAILURE!
> java.lang.AssertionError: Exhausted attempts in checking if node capacity was 
> added to the plan
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.setupMiniYARNCluster(TestYarnClient.java:1222)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.testListReservationsByTimeIntervalContainingNoReservations(TestYarnClient.java:1494)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5389) TestYarnClient#testReservationDelete fails in trunk

2016-08-25 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15437008#comment-15437008
 ] 

Jason Lowe commented on YARN-5389:
--

+1 for the latest patch.  Committing this.


> TestYarnClient#testReservationDelete fails in trunk
> ---
>
> Key: YARN-5389
> URL: https://issues.apache.org/jira/browse/YARN-5389
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Rohith Sharma K S
>Assignee: Sean Po
>  Labels: test
> Attachments: YARN-5389.v1.patch, YARN-5389.v2.patch, 
> YARN-5389.v3.patch, YARN-5389.v4.patch
>
>
> In build report 
> [report|https://builds.apache.org/job/PreCommit-YARN-Build/12341/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt],
>  below test fails. 
> {noformat}
> Tests run: 28, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 26.066 sec 
> <<< FAILURE! - in org.apache.hadoop.yarn.client.api.impl.TestYarnClient
> testReservationDelete(org.apache.hadoop.yarn.client.api.impl.TestYarnClient)  
> Time elapsed: 2.213 sec  <<< FAILURE!
> java.lang.AssertionError: Exhausted attempts in checking if node capacity was 
> added to the plan
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.setupMiniYARNCluster(TestYarnClient.java:1222)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.testReservationDelete(TestYarnClient.java:1584)
> testListReservationsByInvalidTimeInterval(org.apache.hadoop.yarn.client.api.impl.TestYarnClient)
>   Time elapsed: 2.215 sec  <<< FAILURE!
> java.lang.AssertionError: Exhausted attempts in checking if node capacity was 
> added to the plan
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.setupMiniYARNCluster(TestYarnClient.java:1222)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.testListReservationsByInvalidTimeInterval(TestYarnClient.java:1444)
> testListReservationsByTimeIntervalContainingNoReservations(org.apache.hadoop.yarn.client.api.impl.TestYarnClient)
>   Time elapsed: 2.206 sec  <<< FAILURE!
> java.lang.AssertionError: Exhausted attempts in checking if node capacity was 
> added to the plan
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.setupMiniYARNCluster(TestYarnClient.java:1222)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestYarnClient.testListReservationsByTimeIntervalContainingNoReservations(TestYarnClient.java:1494)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-1503) Support making additional 'LocalResources' available to running containers

2016-08-25 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15436983#comment-15436983
 ] 

Junping Du commented on YARN-1503:
--

bq. Anyway, these are advanced features and do not conflict with the core 
change. I'll open a separate JIRA and discuss how to implement it when it comes.
Sure. The plan sounds good to me.

Just a note for those watching this JIRA: I am starting to review the first 
patch (YARN-5557) under this umbrella, which should be independent of the 
discussions above. Please let me know if you have any concerns.

> Support making additional 'LocalResources' available to running containers
> --
>
> Key: YARN-1503
> URL: https://issues.apache.org/jira/browse/YARN-1503
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Jian He
> Attachments: Continuous-resource-localization.pdf
>
>
> We have a use case where additional resources (jars, libraries, etc.) need to 
> be made available to an already running container. Ideally, we'd like this to 
> be done via YARN (instead of having potentially multiple containers per node 
> download resources on their own).
> Proposal:
>   NM to support an additional API where a list of resources can be specified. 
> Something like "localizeResource(ContainerId, Map)"
>   NM would also require an additional API to get state for these resources - 
> "getLocalizationState(ContainerId)" - which returns the current state of all 
> local resources for the specified container(s).
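
The angle-bracketed type parameters of the Map appear to have been lost from 
the description above. Purely as a sketch of the proposed shape, with every 
name, parameter, and state below being a guess for discussion rather than what 
was eventually committed:
{noformat}
import java.util.Map;

/** Illustrative shape only; these are not the signatures YARN-5557 adds. */
interface ContainerLocalizationSketch {

  enum ResourceState { PENDING, LOCALIZED, FAILED }

  /** Ask the NM to fetch additional resources for an already-running
   *  container, keyed by the link name the container will see them under. */
  void localizeResource(String containerId, Map<String, String> nameToRemoteUrl);

  /** Current localization state of each resource known for the container. */
  Map<String, ResourceState> getLocalizationState(String containerId);
}
{noformat}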



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5557) Add localize API to the ContainerManagementProtocol

2016-08-25 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-5557:
--
Attachment: YARN-5557.4.patch

The latest patch removes an unused import.

> Add localize API to the ContainerManagementProtocol
> ---
>
> Key: YARN-5557
> URL: https://issues.apache.org/jira/browse/YARN-5557
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5557.1.patch, YARN-5557.2.patch, YARN-5557.3.patch, 
> YARN-5557.4.patch
>
>
> A new localize API for localizing new resources. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5557) Add localize API to the ContainerManagementProtocol

2016-08-25 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15436970#comment-15436970
 ] 

Junping Du commented on YARN-5557:
--

+1 after fix this warning.

> Add localize API to the ContainerManagementProtocol
> ---
>
> Key: YARN-5557
> URL: https://issues.apache.org/jira/browse/YARN-5557
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5557.1.patch, YARN-5557.2.patch, YARN-5557.3.patch
>
>
> A new localize API for localizing new resources. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5486) Update OpportunisticContainerAllocatorAMService::allocate method to handle OPPORTUNISTIC container requests

2016-08-25 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5486:
--
Attachment: YARN-5486.001.patch

Attaching an initial patch, based on an internal prototype by [~kkaranasos] and 
integrated with YARN-5457.

Feedback welcome.

> Update OpportunisticContainerAllocatorAMService::allocate method to handle 
> OPPORTUNISTIC container requests
> ---
>
> Key: YARN-5486
> URL: https://issues.apache.org/jira/browse/YARN-5486
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5486.001.patch
>
>
> YARN-5457 refactors the Distributed Scheduling framework to move the 
> container allocator to yarn-server-common.
> This JIRA proposes to update the allocate method in the new AM service to use 
> the OpportunisticContainerAllocator to allocate opportunistic containers.
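
A hedged sketch of the partitioning step such an allocate method would need, 
assuming the trunk records API of the time 
(ResourceRequest#getExecutionTypeRequest); this is not the attached patch.
{noformat}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest;
import org.apache.hadoop.yarn.api.records.ExecutionType;
import org.apache.hadoop.yarn.api.records.ExecutionTypeRequest;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

class OpportunisticAskSketch {
  /** Pulls the OPPORTUNISTIC asks out of an allocate call; everything else
   *  continues down the normal (guaranteed) scheduling path. */
  static List<ResourceRequest> opportunisticAsks(AllocateRequest request) {
    List<ResourceRequest> opportunistic = new ArrayList<>();
    for (ResourceRequest ask : request.getAskList()) {
      ExecutionTypeRequest et = ask.getExecutionTypeRequest();
      if (et != null && et.getExecutionType() == ExecutionType.OPPORTUNISTIC) {
        opportunistic.add(ask);
      }
    }
    return opportunistic;
  }
}
{noformat}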



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5504) [YARN-3368] Fix the YARN UI build

2016-08-25 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15436967#comment-15436967
 ] 

Sunil G commented on YARN-5504:
---

I tested the patch with {{keep-ui-build-cache}}, and it works fine.

However, I am not sure whether we need {{test}}; I think we can keep a separate 
ticket open for the UTs themselves. We should have some UTs, and I'll help get 
them in soon.

+1 for the current patch; I will commit if there are no other objections. I 
will wait for a day.

> [YARN-3368] Fix the YARN UI build
> -
>
> Key: YARN-5504
> URL: https://issues.apache.org/jira/browse/YARN-5504
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
> Attachments: YARN-5504-YARN-3368-0001.patch, 
> YARN-5504-YARN-3368-0002.patch
>
>
> - Disable tests as we don't have UTs.
> - Disable lint & hint as they are not followed by the current codebase, and 
> are throwing build errors.
> - Disable clearing of UI package on building, so that n/w is required only in 
> the first build.
> - Remove duplicate bower installs.
> -Change the default packaging.type to 'war' as our UI is a Web application- - 
> Will keep it in the profile
> -Final war should just contain the end result of the build and not all files-
> [~wangda] [~vinodkv] [~sunilg] please share your thoughts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5557) Add localize API to the ContainerManagementProtocol

2016-08-25 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15436946#comment-15436946
 ] 

Junping Du commented on YARN-5557:
--

I just checked the checkstyle warnings. I agree most of them are noise. 
However, we should at least fix the warning below, shouldn't we?
{noformat}
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/impl/pb/service/ContainerManagementProtocolPBServiceImpl.java:45:import
 org.apache.hadoop.yarn.proto.YarnServiceProtos;:8: Unused import - 
org.apache.hadoop.yarn.proto.YarnServiceProtos
{noformat}

> Add localize API to the ContainerManagementProtocol
> ---
>
> Key: YARN-5557
> URL: https://issues.apache.org/jira/browse/YARN-5557
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5557.1.patch, YARN-5557.2.patch, YARN-5557.3.patch
>
>
> A new localize API for localizing new resources. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


