[jira] [Updated] (YARN-8201) Skip stacktrace of ApplicationNotFoundException at server side

2018-05-03 Thread Bilwa S T (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-8201:

Attachment: YARN-8201-003.patch

> Skip stacktrace of ApplicationNotFoundException at server side
> --
>
> Key: YARN-8201
> URL: https://issues.apache.org/jira/browse/YARN-8201
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Bibin A Chundatt
>Assignee: Bilwa S T
>Priority: Minor
> Attachments: YARN-8201-001.patch, YARN-8201-002.patch, 
> YARN-8201-003.patch
>
>
> Currently the full stack trace of exceptions like
> ApplicationNotFoundException, ApplicationAttemptNotFoundException, etc. is
> logged on the server side. Wrong client operations could inflate the server
> logs. {{Server.addTerseExceptions}} could be used to reduce server-side
> logging.
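
For illustration, a minimal sketch (not the attached patch) of how such
exceptions could be registered as "terse" on a Hadoop RPC server so that only
the exception message is logged rather than the full stack trace; the wrapper
class and method below are hypothetical.

{code:java}
import org.apache.hadoop.ipc.Server;
import org.apache.hadoop.yarn.exceptions.ApplicationAttemptNotFoundException;
import org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException;

final class TerseExceptionSketch {
  // Illustrative only: once registered, the RPC server logs these exception
  // types without their stack traces.
  static void registerTerseExceptions(Server rpcServer) {
    rpcServer.addTerseExceptions(
        ApplicationNotFoundException.class,
        ApplicationAttemptNotFoundException.class);
  }
}
{code}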



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8080) YARN native service should support component restart policy

2018-05-03 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463097#comment-16463097
 ] 

Suma Shivaprasad edited comment on YARN-8080 at 5/4/18 5:04 AM:


Thanks [~gsaha] for the reviews and offline discussions on the patch. While
testing the flex scenarios suggested by [~gsaha] with the patch, I ran into
the following issues.


What does flexing up/down a component with "restart_policy": NEVER or
"restart_policy": ON_FAILURE mean?

Consider the following scenario: a component has 4 instances configured
and restart_policy="NEVER". Assume that 2 of these containers have exited
successfully after execution and 2 are still running.

1. Flex up
If the user now flexes the number of containers to 3, should we even support
flexing up of containers in this case? For example, it could be a Tensorflow
DAG (YARN-8135) in which flexing up may or may not make sense unless the
Tensorflow client needs more resources and is able to make use of the newly
allocated containers (like the dynamic allocation use case in SPARK).
[~leftnoteasy] could comment on this. We could add support for a flag in the
YARN service spec to disallow/allow flexing for services, and the user can
choose to disallow this for specific apps.

2. Flex down
Flex down for such services also needs to consider the current number of
running containers (instead of the configured number of containers, which is
the current behaviour) and scale them down accordingly. For example, if the
component instance count during flex is set to 1, bring down the number of
running containers to 1 (see the sketch below).
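
A trivial illustration (assumed behavior, not part of the patch) of the
flex-down computation described above, counting only currently running
instances toward the target:

{code:java}
final class FlexDownSketch {
  // Hypothetical helper, for illustration only: how many running instances
  // would need to be stopped to honor a flex-down target when finished
  // (Succeeded/Failed) instances no longer count toward that target.
  static int instancesToStop(int runningInstances, int flexTarget) {
    return Math.max(0, runningInstances - flexTarget);
  }
}
{code}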

[~billie.rinaldi] [~leftnoteasy] [~gsaha] [~eyang] Thoughts?










was (Author: suma.shivaprasad):
Thanks [~gsaha] for reviews and offline discussions on the patch.  While 
testing the flex scenarios as suggested by [~gsaha] with the patch, ran into 
the following issues.


What does flexing  up/down a component with "restart_policy" : NEVER/ 
"restart_policy: ON_FAILURE mean?

Consider the following scenario where a component has 4 instances configured 
and restart_policy="NEVER". Assume that 2 of these containers have exited 
successfully after execution and 2 are still running.

1. Flex up
Now if the user , flexes the number of containers to 3, should we even support 
flexing up of containers in this case? For eg: It could be a Tensorflow DAG - 
YARN-8135 in which flexing up may or may not make sense unless the Tensorflow 
client needs more resources is able to make use of the newly allocated 
containers  (like the dynamic allocation usecase in SPARK ).  [~leftnoteasy] 
could comment on this. We could add support for a flag in the YARN service spec 
to disallow/allow flexing for services and user can choose to disallow this for 
specific apps.

2. Flex down
Also flex down for such services needs to consider the current number of 
running containers (instead of configured number of containers which is the 
behaviour currently) and scale them down accordingly. For eg: if component 
instance during flex is set to 1, bring down the number of running containers 
to 1.

[~billie.rinaldi] [~leftnoteasy] [~gsaha] [~eyang] Thoughts?









> YARN native service should support component restart policy
> ---
>
> Key: YARN-8080
> URL: https://issues.apache.org/jira/browse/YARN-8080
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
>Priority: Critical
> Attachments: YARN-8080.001.patch, YARN-8080.002.patch, 
> YARN-8080.003.patch, YARN-8080.005.patch, YARN-8080.006.patch, 
> YARN-8080.007.patch
>
>
> Existing native service assumes the service is long running and never
> finishes. Containers will be restarted even if exit code == 0.
> To support broader use cases, we need to allow the restart policy of a
> component to be specified by users. Propose to have the following policies:
> 1) Always: containers are always restarted by the framework regardless of
> container exit status. This is the existing/default behavior.
> 2) Never: do not restart containers in any case after a container finishes,
> to support job-like workloads (for example a Tensorflow training job). If a
> task exits with code == 0, we should not restart the task. This can be used
> by services which are not restartable/recoverable.
> 3) On-failure: similar to the above, only restart a task with exit code != 0.
> Behaviors after a component *instance* finalizes (Succeeded or Failed when
> restart_policy != ALWAYS):
> 1) For a single component, single instance: complete the service.
> 2) For a single component, multiple instances: other running instances from
> the same component won't be affected by the finalized component instance. The
> service will be terminated once all instances have finalized.
> 3) For multiple components: Service will be 

[jira] [Commented] (YARN-8234) Improve RM system metrics publisher's performance by pushing events to timeline server in batch

2018-05-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463364#comment-16463364
 ] 

genericqa commented on YARN-8234:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
59s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2.8.3 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
18s{color} | {color:green} branch-2.8.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m  
6s{color} | {color:green} branch-2.8.3 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} branch-2.8.3 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
0s{color} | {color:green} branch-2.8.3 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
0s{color} | {color:green} branch-2.8.3 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} branch-2.8.3 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m  
7s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 59s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 9 new + 218 unchanged - 3 fixed = 227 total (was 221) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
21s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 49s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 36m 45s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
22s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Synchronization performed on java.util.concurrent.LinkedBlockingQueue in 
org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsPublisher.putEntity(TimelineEntity)
  At 
SystemMetricsPublisher.java:org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsPublisher.putEntity(TimelineEntity)
  At SystemMetricsPublisher.java:[line 599] |
|  |  Possible null pointer dereference of response in 
org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsPublisher$SendEntity.run()
 on exception path  Dereferenced at SystemMetricsPublisher.java:response in 
org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsPublisher$SendEntity.run()
 on exception path  Dereferenced at SystemMetricsPublisher.java:[line 631] 

[jira] [Commented] (YARN-8234) Improve RM system metrics publisher's performance by pushing events to timeline server in batch

2018-05-03 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463331#comment-16463331
 ] 

Wangda Tan commented on YARN-8234:
--

[~ziqian hu], would you mind checking the Jenkins report?

> Improve RM system metrics publisher's performance by pushing events to 
> timeline server in batch
> ---
>
> Key: YARN-8234
> URL: https://issues.apache.org/jira/browse/YARN-8234
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager, timelineserver
>Affects Versions: 2.8.3
>Reporter: Hu Ziqian
>Assignee: Hu Ziqian
>Priority: Major
> Attachments: YARN-8234-branch-2.8.3.001.patch
>
>
> When the system metrics publisher is enabled, RM pushes events to the
> timeline server via a RESTful API. If the cluster load is heavy, many events
> are sent to the timeline server and the timeline server's event handler
> thread gets locked. YARN-7266 discusses the details of this problem. Because
> of the lock, the timeline server can't receive events as fast as they are
> generated in RM, and lots of timeline events stay in RM's memory. Eventually,
> those events consume all of RM's memory and RM starts a full GC (which causes
> a JVM stop-the-world pause and a timeout from RM to ZooKeeper) or even hits
> an OOM.
> The main problem here is that the timeline server can't receive events as
> fast as they are generated. Currently, the RM system metrics publisher puts
> only one event in a request, and most of the time is spent on handling the
> HTTP header or the network connection on the timeline side. Only a small
> fraction of the time is spent on handling the timeline event itself, which is
> the truly valuable work.
> In this issue, we add a buffer in the system metrics publisher and let the
> publisher send events to the timeline server in batches via one request. With
> the batch size set to 1000, in our experiments the speed at which the
> timeline server receives events improves by 100x. We have implemented this
> function in our production environment, which accepts 2 apps in one hour, and
> it works fine.
> We add the following configuration:
>  * yarn.resourcemanager.system-metrics-publisher.batch-size: the number of
> events the system metrics publisher sends in one request. The default value
> is 1000.
>  * yarn.resourcemanager.system-metrics-publisher.buffer-size: the size of the
> event buffer in the system metrics publisher.
>  * yarn.resourcemanager.system-metrics-publisher.interval-seconds: when batch
> publishing is enabled, we must avoid the publisher waiting for a batch to
> fill up and holding events in the buffer for a long time, so we add another
> thread which sends the events in the buffer periodically. This config sets
> the interval of that periodic sending thread. The default value is 60s.
>  
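
As a rough, generic sketch of the batching approach outlined above (class and
method names are illustrative, not the attached patch): events are buffered,
flushed as a single request once the batch size is reached, and also flushed
by a periodic thread so they are never held in the buffer for too long.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative sketch only; not the code in the attached patch.
class BatchingPublisher<E> {
  private final LinkedBlockingQueue<E> buffer;
  private final int batchSize;
  private final ScheduledExecutorService flusher =
      Executors.newSingleThreadScheduledExecutor();

  BatchingPublisher(int bufferSize, int batchSize, long intervalSeconds) {
    this.buffer = new LinkedBlockingQueue<>(bufferSize);
    this.batchSize = batchSize;
    // Periodic flush so events are not held indefinitely while waiting
    // for a full batch.
    flusher.scheduleAtFixedRate(this::flush, intervalSeconds, intervalSeconds,
        TimeUnit.SECONDS);
  }

  void publish(E event) throws InterruptedException {
    buffer.put(event);
    if (buffer.size() >= batchSize) {
      flush();
    }
  }

  private synchronized void flush() {
    List<E> batch = new ArrayList<>();
    buffer.drainTo(batch, batchSize);
    if (!batch.isEmpty()) {
      sendBatch(batch); // one request carrying the whole batch
    }
  }

  // Placeholder for the single request that ships a batch to the timeline server.
  protected void sendBatch(List<E> batch) {
    // no-op in this sketch
  }
}
{code}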



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8232) RMContainer lost queue name when RM HA happens

2018-05-03 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463329#comment-16463329
 ] 

Wangda Tan commented on YARN-8232:
--

+1, thanks [~ziqian hu], will commit tomorrow if no objections.

> RMContainer lost queue name when RM HA happens
> --
>
> Key: YARN-8232
> URL: https://issues.apache.org/jira/browse/YARN-8232
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.3
>Reporter: Hu Ziqian
>Assignee: Hu Ziqian
>Priority: Major
> Attachments: YARN-8232-branch-2.8.3.001.patch, YARN-8232.001.patch, 
> YARN-8232.002.patch, YARN-8232.003.patch
>
>
> RMContainer has a member variable queuename that stores which queue the
> container belongs to. When RM HA happens and RMContainers are recovered by
> the scheduler based on NM reports, the queue name isn't recovered and is
> always null.
> This situation causes some problems. Here is a case in preemption: preemption
> uses the container's queue name to deduct preemptable resources when we use
> more than one preemption selector (for example, when intra-queue preemption
> is enabled). The detail is in
> {code:java}
> CapacitySchedulerPreemptionUtils.deductPreemptableResourcesBasedSelectedCandidates(){code}
> If the container's queue name is null, this function throws a
> YarnRuntimeException because it tries to get the container's
> TempQueuePerPartition, and the preemption fails.
> Our patch solves this problem by setting the container's queue name when
> recovering containers. The patch is based on branch-2.8.3.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4606) CapacityScheduler: applications could get starved because computation of #activeUsers considers pending apps

2018-05-03 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463324#comment-16463324
 ] 

Wangda Tan commented on YARN-4606:
--

Thanks [~maniraj...@gmail.com],
Some questions:

1) Does this patch handle the case where one user has multiple pending apps?
(Since it doesn't store user-to-apps information.)

2)
{code}
abstractUsersManager.decrNumActiveUsersOfPendingApps();
{code}
Should we call this inside
{{SchedulerApplicationAttempt#pullNewlyUpdatedContainers}}?
I think we should remove the active user from pending apps once the AM
container gets allocated.

3)
{code}
Resources.lessThan(rc, cr,
    metrics.getUsedAMResources(), metrics.getMaxAMResources())
{code}
Instead of using metrics, it might be better to use
{{SchedulerApplicationAttempt#getAppAttemptResourceUsage}}.

> CapacityScheduler: applications could get starved because computation of 
> #activeUsers considers pending apps 
> -
>
> Key: YARN-4606
> URL: https://issues.apache.org/jira/browse/YARN-4606
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler
>Affects Versions: 2.8.0, 2.7.1
>Reporter: Karam Singh
>Assignee: Manikandan R
>Priority: Critical
> Attachments: YARN-4606.001.patch, YARN-4606.1.poc.patch, 
> YARN-4606.POC.2.patch, YARN-4606.POC.patch
>
>
> Currently, if all applications belonging to the same user in a LeafQueue are
> pending (caused by max-am-percent, etc.), ActiveUsersManager still considers
> the user an active user. This could lead to starvation of active
> applications, for example:
> - App1 (belongs to user1)/app2 (belongs to user2) are active, app3 (belongs
> to user3)/app4 (belongs to user4) are pending
> - ActiveUsersManager returns #active-users=4
> - However, only two users (user1/user2) are able to allocate new resources,
> so the computed user-limit-resource could be lower than expected.
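
For illustration only, a simplification of the user-limit computation (the
real CapacityScheduler formula has more factors): if each counted "active"
user gets roughly an equal share of the queue, counting users that only have
pending apps shrinks the share of the users that can actually run.

{code:java}
final class UserLimitSketch {
  // Simplified illustration, not the real CapacityScheduler formula:
  // with 4 counted users each gets ~25% of the queue, but only user1/user2
  // can actually use their share, so ~50% of the queue may sit idle.
  static long approxUserLimit(long queueResource, int countedActiveUsers) {
    return queueResource / countedActiveUsers;
  }
}
{code}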



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8223) ClassNotFoundException when auxiliary service is loaded from HDFS

2018-05-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463311#comment-16463311
 ] 

genericqa commented on YARN-8223:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
55s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 47s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
3s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}151m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8223 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12921866/YARN-8223.002.patch |
| Optional Tests |  asflicense  mvnsite  compile  javac  javadoc  mvninstall  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 465bca99a466 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | 

[jira] [Commented] (YARN-4599) Set OOM control for memory cgroups

2018-05-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463303#comment-16463303
 ] 

genericqa commented on YARN-4599:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 35m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 20m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
34m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 28m 28s{color} | 
{color:red} root generated 1 new + 11 unchanged - 0 fixed = 12 total (was 11) 
{color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 35s{color} | {color:orange} root: The patch generated 17 new + 212 unchanged 
- 1 fixed = 229 total (was 213) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 24m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site . {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
20s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 14s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}237m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 

[jira] [Commented] (YARN-8113) Update placement constraints doc with application namespaces and inter-app constraints

2018-05-03 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463288#comment-16463288
 ] 

Weiwei Yang commented on YARN-8113:
---

Thank you [~kkaranasos] for helping with the review and polishing the text!
Appreciate it!

> Update placement constraints doc with application namespaces and inter-app 
> constraints
> --
>
> Key: YARN-8113
> URL: https://issues.apache.org/jira/browse/YARN-8113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: AllocationTag_Namespace.png, DS_update.png, 
> YARN-8113.001.patch, YARN-8113.002.patch, YARN-8113.003.patch, 
> YARN-8113.004.patch
>
>
> Once YARN-8013 is done, we will support all application namespace types for
> inter-app constraints; accordingly, we need to update the doc. Also make sure
> the API doc is updated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8079) Support specify files to be downloaded (localized) before containers launched by YARN

2018-05-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463279#comment-16463279
 ] 

genericqa commented on YARN-8079:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
56s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8079 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12921869/YARN-8079.010.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  

[jira] [Commented] (YARN-7715) Update CPU and Memory cgroups params on container update as well.

2018-05-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463276#comment-16463276
 ] 

genericqa commented on YARN-7715:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 28s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m  
5s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-7715 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12921872/YARN-7715.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9f035c4eaedc 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a3b416f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20591/testReport/ |
| Max. process+thread count | 408 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20591/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update CPU and Memory cgroups params on container update as 

[jira] [Commented] (YARN-7894) Improve ATS response for DS_CONTAINER when container launch fails

2018-05-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463272#comment-16463272
 ] 

genericqa commented on YARN-7894:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell:
 The patch generated 1 new + 44 unchanged - 0 fixed = 45 total (was 44) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 10s{color} 
| {color:red} hadoop-yarn-applications-distributedshell in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-7894 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12921868/YARN-7894.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 30ee45e00c48 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a3b416f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/20589/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
 |
| unit | 

[jira] [Comment Edited] (YARN-8207) Docker container launch use popen have risk of shell expansion

2018-05-03 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463266#comment-16463266
 ] 

Eric Yang edited comment on YARN-8207 at 5/4/18 1:34 AM:
-

[~jlowe] Thank you for the review.  A couple of comments:

Args is an array of strings.  A null terminator is not required for the array
when we have the length of the array.  Hence, checking length >
DOCKER_ARGS_MAX is fine.  Malloc without + 1 for a null terminator for char**
is okay.  If someone writes a for loop without using the index (length)
variable, it could cause problems.  Having said that, I will change the code
to:

{code}
typedef struct {
    int index;
    char *out[DOCKER_ARG_MAX];
} args;
{code}

This makes it easier for other developers to figure out the length of the
actual array.  Container-executor is a one-time execution per exec.  Args is
not reused, hence the leak does not happen in practice.  Args is only reused
in test cases.  I plan to change reset_args to release the pointed-to strings
and assign NULL to each pointer rather than freeing the pointers; free(args)
would do the actual release of the args structure.

With the above change, I will also change get_docker_*_command to leave args
in a partial state, and let the caller decide to call reset_args if the return
value is not 0.


was (Author: eyang):
[~jlowe] Thank you for the review.  A couple comments:

Args is array of strings.  Null terminator is not required for array when we 
have length of the array.  Hence, checking length > DOCKER_ARGS_MAX is fine.  
Malloc without + 1 for null terminator for char** is okay.  If someone write a 
for loop without using index (length) variable for loop, it could cause 
problems.  Having said that, I will change the code to:

{code}
struct args {
int length;
char *out[DOCKER_ARG_MAX];
};
{code}

This can be easier to figure out the length of the actual array for other 
developers.  Container-executor is one time execution per exec.  Args is not 
reused, hence, the leak is not happening in practice.  Args is only reused in 
test cases.  I plan to change reset_args to release the pointed strings and 
assign NULL to each pointer rather than freeing the pointers.  free(args); 
would do the actual release of the args structure.

With the above change, I will also change get_docker_*_command to leave args in 
partial state, and let caller decide to reset_args if return value is not 0.

> Docker container launch use popen have risk of shell expansion
> --
>
> Key: YARN-8207
> URL: https://issues.apache.org/jira/browse/YARN-8207
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.0.0, 3.1.0, 3.0.1, 3.0.2
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8207.001.patch, YARN-8207.002.patch, 
> YARN-8207.003.patch, YARN-8207.004.patch, YARN-8207.005.patch
>
>
> Container-executor code utilizes a string buffer to construct the docker run
> command and passes the string buffer to popen for execution.  Popen spawns a
> shell to run the command.  Some arguments for docker run are still vulnerable
> to shell expansion.  The possible solution is to convert from a char * buffer
> to a string array for execv to avoid shell expansion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8207) Docker container launch use popen have risk of shell expansion

2018-05-03 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463266#comment-16463266
 ] 

Eric Yang commented on YARN-8207:
-

[~jlowe] Thank you for the review.  A couple comments:

Args is array of strings.  Null terminator is not required for array when we 
have length of the array.  Hence, checking length > DOCKER_ARGS_MAX is fine.  
Malloc without + 1 for null terminator for char** is okay.  If someone write a 
for loop without using index (length) variable for loop, it could cause 
problems.  Having said that, I will change the code to:

{code}
struct args {
int length;
char *out[DOCKER_ARG_MAX];
};
{code}

This can be easier to figure out the length of the actual array for other 
developers.  Container-executor is one time execution per exec.  Args is not 
reused, hence, the leak is not happening in practice.  Args is only reused in 
test cases.  I plan to change reset_args to release the pointed strings and 
assign NULL to each pointer rather than freeing the pointers.  free(args); 
would do the actual release of the args structure.

With the above change, I will also change get_docker_*_command to leave args in 
partial state, and let caller decide to reset_args if return value is not 0.

> Docker container launch use popen have risk of shell expansion
> --
>
> Key: YARN-8207
> URL: https://issues.apache.org/jira/browse/YARN-8207
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.0.0, 3.1.0, 3.0.1, 3.0.2
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8207.001.patch, YARN-8207.002.patch, 
> YARN-8207.003.patch, YARN-8207.004.patch, YARN-8207.005.patch
>
>
> Container-executor code utilizes a string buffer to construct the docker run
> command and passes the string buffer to popen for execution.  Popen spawns a
> shell to run the command.  Some arguments for docker run are still vulnerable
> to shell expansion.  The possible solution is to convert from a char * buffer
> to a string array for execv to avoid shell expansion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7715) Update CPU and Memory cgroups params on container update as well.

2018-05-03 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-7715:
-
Attachment: YARN-7715.001.patch

> Update CPU and Memory cgroups params on container update as well.
> -
>
> Key: YARN-7715
> URL: https://issues.apache.org/jira/browse/YARN-7715
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-7715.000.patch, YARN-7715.001.patch
>
>
> In YARN-6673 and YARN-6674, the cgroups resource handlers update the cgroups
> params for the containers, based on whether they are opportunistic or
> guaranteed, in the *preStart* method.
> Now that YARN-5085 is in, the container executionType (as well as the cpu,
> memory and any other resources) can be updated after the container has
> started. This means we need the ability to change cgroups params after
> container start.
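
As a loose illustration of what changing cgroups params after container start
amounts to (the directory layout and helper below are hypothetical, not the
resource-handler API): the container's cgroup parameter file is simply
rewritten with the new limit.

{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

final class CgroupUpdateSketch {
  // Hypothetical sketch only: update a running container's memory limit by
  // rewriting the cgroup v1 parameter file under its memory cgroup directory.
  static void setMemoryLimit(String containerMemoryCgroupDir, long memoryBytes)
      throws IOException {
    Files.write(
        Paths.get(containerMemoryCgroupDir, "memory.limit_in_bytes"),
        Long.toString(memoryBytes).getBytes(StandardCharsets.UTF_8));
  }
}
{code}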



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8244) ContainersLauncher.ContainerLaunch can throw ConcurrentModificationException

2018-05-03 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463247#comment-16463247
 ] 

Miklos Szegedi commented on YARN-8244:
--

This happens with {{TestContainerSchedulerQueuing.testStartMultipleContainers}}.

> ContainersLauncher.ContainerLaunch can throw ConcurrentModificationException
> 
>
> Key: YARN-8244
> URL: https://issues.apache.org/jira/browse/YARN-8244
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Priority: Major
>
> {code:java}
> 2018-05-03 17:31:35,028 WARN [ContainersLauncher #1] launcher.ContainerLaunch 
> (ContainerLaunch.java:call(329)) - Failed to launch container.
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextNode(HashMap.java:1437)
> at java.util.HashMap$EntryIterator.next(HashMap.java:1471)
> at java.util.HashMap$EntryIterator.next(HashMap.java:1469)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch$ShellScriptBuilder.orderEnvByDependencies(ContainerLaunch.java:1311)
> at 
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(ContainerExecutor.java:388)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:290)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:101)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> 2{code}
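
For context, a generic illustration (not the NodeManager code) of how this
exception arises: structurally modifying a HashMap while iterating over its
entry set causes the iterator's next step to fail.

{code:java}
import java.util.HashMap;
import java.util.Map;

public class CmeDemo {
  public static void main(String[] args) {
    Map<String, String> env = new HashMap<>();
    env.put("A", "1");
    env.put("B", "2");
    // Adding a new entry during iteration is a structural modification; the
    // iterator throws ConcurrentModificationException on its next step.
    for (Map.Entry<String, String> entry : env.entrySet()) {
      env.put("C", "3");
    }
  }
}
{code}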



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8244) ContainersLauncher.ContainerLaunch can throw ConcurrentModificationException

2018-05-03 Thread Miklos Szegedi (JIRA)
Miklos Szegedi created YARN-8244:


 Summary: ContainersLauncher.ContainerLaunch can throw 
ConcurrentModificationException
 Key: YARN-8244
 URL: https://issues.apache.org/jira/browse/YARN-8244
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Miklos Szegedi


{code:java}
2018-05-03 17:31:35,028 WARN [ContainersLauncher #1] launcher.ContainerLaunch 
(ContainerLaunch.java:call(329)) - Failed to launch container.
java.util.ConcurrentModificationException
at java.util.HashMap$HashIterator.nextNode(HashMap.java:1437)
at java.util.HashMap$EntryIterator.next(HashMap.java:1471)
at java.util.HashMap$EntryIterator.next(HashMap.java:1469)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch$ShellScriptBuilder.orderEnvByDependencies(ContainerLaunch.java:1311)
at 
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(ContainerExecutor.java:388)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:290)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:101)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8079) Support specify files to be downloaded (localized) before containers launched by YARN

2018-05-03 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463238#comment-16463238
 ] 

Suma Shivaprasad commented on YARN-8079:


Thanks [~billie.rinaldi] [~gsaha] [~eyang] for your suggestions. After discussing 
offline with [~billie.rinaldi] [~gsaha] [~eyang], STATIC seems to be the 
agreed-upon name, since the contents of the file are not modified by the YARN 
service, and it is also a well-known term in web terminology for files that are 
served as-is, without modification, by the server.
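
As a quick illustration, a component spec using the new type might look like the following. This is only a sketch for discussion: the field names (type, src_file, dest_file) are assumed to follow the existing files entries in the service spec, and the service/component names and paths are hypothetical.

{code:java}
{
  "name": "model-serving",
  "version": "1",
  "components": [
    {
      "name": "server",
      "number_of_containers": 1,
      "launch_command": "./serve.sh",
      "resource": { "cpus": 1, "memory": "2048" },
      "configuration": {
        "files": [
          {
            "type": "STATIC",
            "src_file": "hdfs:///models/resnet50/saved_model.pb",
            "dest_file": "saved_model.pb"
          }
        ]
      }
    }
  ]
}
{code}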

> Support specify files to be downloaded (localized) before containers launched 
> by YARN
> -
>
> Key: YARN-8079
> URL: https://issues.apache.org/jira/browse/YARN-8079
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
>Priority: Critical
> Attachments: YARN-8079.001.patch, YARN-8079.002.patch, 
> YARN-8079.003.patch, YARN-8079.004.patch, YARN-8079.005.patch, 
> YARN-8079.006.patch, YARN-8079.007.patch, YARN-8079.008.patch, 
> YARN-8079.009.patch, YARN-8079.010.patch
>
>
> Currently, {{srcFile}} is not respected. {{ProviderUtils}} doesn't properly 
> read srcFile; instead it always constructs {{remoteFile}} using 
> componentDir and the fileName of {{destFile}}:
> {code}
> Path remoteFile = new Path(compInstanceDir, fileName);
> {code}
> To me it is a common use case where services have files that already exist in 
> HDFS and need to be localized when components get launched. (For example, if 
> we want to serve a Tensorflow model, we need to localize the Tensorflow model 
> (typically not huge, less than a GB) to local disk. Otherwise the launched 
> docker container has to access HDFS.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8079) Support specify files to be downloaded (localized) before containers launched by YARN

2018-05-03 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-8079:
---
Attachment: YARN-8079.010.patch

> Support specify files to be downloaded (localized) before containers launched 
> by YARN
> -
>
> Key: YARN-8079
> URL: https://issues.apache.org/jira/browse/YARN-8079
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
>Priority: Critical
> Attachments: YARN-8079.001.patch, YARN-8079.002.patch, 
> YARN-8079.003.patch, YARN-8079.004.patch, YARN-8079.005.patch, 
> YARN-8079.006.patch, YARN-8079.007.patch, YARN-8079.008.patch, 
> YARN-8079.009.patch, YARN-8079.010.patch
>
>
> Currently, {{srcFile}} is not respected. {{ProviderUtils}} doesn't properly 
> read srcFile; instead it always constructs {{remoteFile}} using 
> componentDir and the fileName of {{destFile}}:
> {code}
> Path remoteFile = new Path(compInstanceDir, fileName);
> {code}
> To me it is a common use case where services have files that already exist in 
> HDFS and need to be localized when components get launched. (For example, if 
> we want to serve a Tensorflow model, we need to localize the Tensorflow model 
> (typically not huge, less than a GB) to local disk. Otherwise the launched 
> docker container has to access HDFS.)
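
For reference, a rough sketch of the behaviour the quoted description is asking for: prefer the user-provided source path when one is given, otherwise keep the current default. This is an illustration only, not the committed patch; the helper name and the way the source path is obtained are assumptions.

{code:java}
import org.apache.hadoop.fs.Path;

final class RemoteFileResolver {
  // Hypothetical helper: decide where the file should be read from.
  static Path resolveRemoteFile(String srcFile, Path compInstanceDir, String fileName) {
    if (srcFile != null && !srcFile.isEmpty()) {
      return new Path(srcFile);                 // explicit location, e.g. on HDFS
    }
    return new Path(compInstanceDir, fileName); // existing default behaviour
  }
}
{code}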



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7894) Improve ATS response for DS_CONTAINER when container launch fails

2018-05-03 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-7894:

Attachment: YARN-7894.001.patch

> Improve ATS response for DS_CONTAINER when container launch fails
> -
>
> Key: YARN-7894
> URL: https://issues.apache.org/jira/browse/YARN-7894
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Reporter: Charan Hebri
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-7894.001.patch
>
>
> When a distributed shell application starts running and a container launch 
> fails, the web service call to the API,
> {noformat}
> http://<timeline server address>/ws/v1/timeline/DS_CONTAINER/{noformat}
> returns a "Not Found". The message returned in this case should be improved to 
> signify that a container launch failed.
>  
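
For reference, the call in question looks roughly like this; the host, port and entity id below are placeholders (8188 is the default ATS v1 web port).

{noformat}
# Placeholder host/port and DS container entity id
curl "http://ats.example.com:8188/ws/v1/timeline/DS_CONTAINER/container_1234567890123_0001_01_000002"
{noformat}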



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8223) ClassNotFoundException when auxiliary service is loaded from HDFS

2018-05-03 Thread Zian Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463229#comment-16463229
 ] 

Zian Chen commented on YARN-8223:
-

Updated patch 002, adding documentation. Hi [~eyang], could you help review the 
patch? Thanks!

> ClassNotFoundException when auxiliary service is loaded from HDFS
> -
>
> Key: YARN-8223
> URL: https://issues.apache.org/jira/browse/YARN-8223
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Charan Hebri
>Assignee: Zian Chen
>Priority: Blocker
> Attachments: YARN-8223.001.patch, YARN-8223.002.patch
>
>
> Loading an auxiliary jar from a local location on a node manager works as 
> expected,
> {noformat}
> 2018-04-26 15:09:26,179 INFO  util.ApplicationClassLoader 
> (ApplicationClassLoader.java:(98)) - classpath: 
> [file:/grid/0/hadoop/yarn/local/aux-service-local.jar]
> 2018-04-26 15:09:26,179 INFO  util.ApplicationClassLoader 
> (ApplicationClassLoader.java:(99)) - system classes: [java., 
> javax.accessibility., javax.activation., javax.activity., javax.annotation., 
> javax.annotation.processing., javax.crypto., javax.imageio., javax.jws., 
> javax.lang.model., -javax.management.j2ee., javax.management., javax.naming., 
> javax.net., javax.print., javax.rmi., javax.script., 
> -javax.security.auth.message., javax.security.auth., javax.security.cert., 
> javax.security.sasl., javax.sound., javax.sql., javax.swing., javax.tools., 
> javax.transaction., -javax.xml.registry., -javax.xml.rpc., javax.xml., 
> org.w3c.dom., org.xml.sax., org.apache.commons.logging., org.apache.log4j., 
> -org.apache.hadoop.hbase., org.apache.hadoop., core-default.xml, 
> hdfs-default.xml, mapred-default.xml, yarn-default.xml]
> 2018-04-26 15:09:26,181 INFO  containermanager.AuxServices 
> (AuxServices.java:serviceInit(252)) - The aux service:test_aux_local are 
> using the custom classloader
> 2018-04-26 15:09:26,182 WARN  containermanager.AuxServices 
> (AuxServices.java:serviceInit(268)) - The Auxiliary Service named 
> 'test_aux_local' in the configuration is for class 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxiliaryServiceWithCustomClassLoader
>  which has a name of 'org.apache.auxtest.AuxServiceFromLocal with custom 
> class loader'. Because these are not the same tools trying to send 
> ServiceData and read Service Meta Data may have issues unless the refer to 
> the name in the config.
> 2018-04-26 15:09:26,182 INFO  containermanager.AuxServices 
> (AuxServices.java:addService(103)) - Adding auxiliary service 
> org.apache.auxtest.AuxServiceFromLocal with custom class loader, 
> "test_aux_local"{noformat}
> But loading the same jar from a location on HDFS fails with a 
> ClassNotFoundException.
> {noformat}
> 018-04-26 15:14:39,683 INFO  util.ApplicationClassLoader 
> (ApplicationClassLoader.java:(98)) - classpath: []
> 2018-04-26 15:14:39,683 INFO  util.ApplicationClassLoader 
> (ApplicationClassLoader.java:(99)) - system classes: [java., 
> javax.accessibility., javax.activation., javax.activity., javax.annotation., 
> javax.annotation.processing., javax.crypto., javax.imageio., javax.jws., 
> javax.lang.model., -javax.management.j2ee., javax.management., javax.naming., 
> javax.net., javax.print., javax.rmi., javax.script., 
> -javax.security.auth.message., javax.security.auth., javax.security.cert., 
> javax.security.sasl., javax.sound., javax.sql., javax.swing., javax.tools., 
> javax.transaction., -javax.xml.registry., -javax.xml.rpc., javax.xml., 
> org.w3c.dom., org.xml.sax., org.apache.commons.logging., org.apache.log4j., 
> -org.apache.hadoop.hbase., org.apache.hadoop., core-default.xml, 
> hdfs-default.xml, mapred-default.xml, yarn-default.xml]
> 2018-04-26 15:14:39,687 INFO  service.AbstractService 
> (AbstractService.java:noteFailure(267)) - Service 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices failed 
> in state INITED
> java.lang.ClassNotFoundException: org.apache.auxtest.AuxServiceFromLocal
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at 
> org.apache.hadoop.util.ApplicationClassLoader.loadClass(ApplicationClassLoader.java:189)
>   at 
> org.apache.hadoop.util.ApplicationClassLoader.loadClass(ApplicationClassLoader.java:157)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxiliaryServiceWithCustomClassLoader.getInstance(AuxiliaryServiceWithCustomClassLoader.java:169)
>   at 
> 

[jira] [Updated] (YARN-8223) ClassNotFoundException when auxiliary service is loaded from HDFS

2018-05-03 Thread Zian Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zian Chen updated YARN-8223:

Attachment: YARN-8223.002.patch

> ClassNotFoundException when auxiliary service is loaded from HDFS
> -
>
> Key: YARN-8223
> URL: https://issues.apache.org/jira/browse/YARN-8223
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Charan Hebri
>Assignee: Zian Chen
>Priority: Blocker
> Attachments: YARN-8223.001.patch, YARN-8223.002.patch
>
>
> Loading an auxiliary jar from a local location on a node manager works as 
> expected,
> {noformat}
> 2018-04-26 15:09:26,179 INFO  util.ApplicationClassLoader 
> (ApplicationClassLoader.java:(98)) - classpath: 
> [file:/grid/0/hadoop/yarn/local/aux-service-local.jar]
> 2018-04-26 15:09:26,179 INFO  util.ApplicationClassLoader 
> (ApplicationClassLoader.java:(99)) - system classes: [java., 
> javax.accessibility., javax.activation., javax.activity., javax.annotation., 
> javax.annotation.processing., javax.crypto., javax.imageio., javax.jws., 
> javax.lang.model., -javax.management.j2ee., javax.management., javax.naming., 
> javax.net., javax.print., javax.rmi., javax.script., 
> -javax.security.auth.message., javax.security.auth., javax.security.cert., 
> javax.security.sasl., javax.sound., javax.sql., javax.swing., javax.tools., 
> javax.transaction., -javax.xml.registry., -javax.xml.rpc., javax.xml., 
> org.w3c.dom., org.xml.sax., org.apache.commons.logging., org.apache.log4j., 
> -org.apache.hadoop.hbase., org.apache.hadoop., core-default.xml, 
> hdfs-default.xml, mapred-default.xml, yarn-default.xml]
> 2018-04-26 15:09:26,181 INFO  containermanager.AuxServices 
> (AuxServices.java:serviceInit(252)) - The aux service:test_aux_local are 
> using the custom classloader
> 2018-04-26 15:09:26,182 WARN  containermanager.AuxServices 
> (AuxServices.java:serviceInit(268)) - The Auxiliary Service named 
> 'test_aux_local' in the configuration is for class 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxiliaryServiceWithCustomClassLoader
>  which has a name of 'org.apache.auxtest.AuxServiceFromLocal with custom 
> class loader'. Because these are not the same tools trying to send 
> ServiceData and read Service Meta Data may have issues unless the refer to 
> the name in the config.
> 2018-04-26 15:09:26,182 INFO  containermanager.AuxServices 
> (AuxServices.java:addService(103)) - Adding auxiliary service 
> org.apache.auxtest.AuxServiceFromLocal with custom class loader, 
> "test_aux_local"{noformat}
> But loading the same jar from a location on HDFS fails with a 
> ClassNotFoundException.
> {noformat}
> 018-04-26 15:14:39,683 INFO  util.ApplicationClassLoader 
> (ApplicationClassLoader.java:(98)) - classpath: []
> 2018-04-26 15:14:39,683 INFO  util.ApplicationClassLoader 
> (ApplicationClassLoader.java:(99)) - system classes: [java., 
> javax.accessibility., javax.activation., javax.activity., javax.annotation., 
> javax.annotation.processing., javax.crypto., javax.imageio., javax.jws., 
> javax.lang.model., -javax.management.j2ee., javax.management., javax.naming., 
> javax.net., javax.print., javax.rmi., javax.script., 
> -javax.security.auth.message., javax.security.auth., javax.security.cert., 
> javax.security.sasl., javax.sound., javax.sql., javax.swing., javax.tools., 
> javax.transaction., -javax.xml.registry., -javax.xml.rpc., javax.xml., 
> org.w3c.dom., org.xml.sax., org.apache.commons.logging., org.apache.log4j., 
> -org.apache.hadoop.hbase., org.apache.hadoop., core-default.xml, 
> hdfs-default.xml, mapred-default.xml, yarn-default.xml]
> 2018-04-26 15:14:39,687 INFO  service.AbstractService 
> (AbstractService.java:noteFailure(267)) - Service 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices failed 
> in state INITED
> java.lang.ClassNotFoundException: org.apache.auxtest.AuxServiceFromLocal
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at 
> org.apache.hadoop.util.ApplicationClassLoader.loadClass(ApplicationClassLoader.java:189)
>   at 
> org.apache.hadoop.util.ApplicationClassLoader.loadClass(ApplicationClassLoader.java:157)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxiliaryServiceWithCustomClassLoader.getInstance(AuxiliaryServiceWithCustomClassLoader.java:169)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices.serviceInit(AuxServices.java:249)
>   at 
> 

[jira] [Commented] (YARN-7818) Remove privileged operation warnings during container launch for DefaultLinuxContainerRuntime

2018-05-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463206#comment-16463206
 ] 

genericqa commented on YARN-7818:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
40s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-7818 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12921848/YARN-7818.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6ff580f97b00 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a3b416f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20586/testReport/ |
| Max. process+thread count | 311 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20586/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   

[jira] [Commented] (YARN-7715) Update CPU and Memory cgroups params on container update as well.

2018-05-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463202#comment-16463202
 ] 

genericqa commented on YARN-7715:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 60 unchanged - 0 fixed = 61 total (was 60) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m  5s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-7715 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12921663/YARN-7715.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c93995e0f7f7 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a3b416f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/20585/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/20585/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20585/testReport/ |
| Max. 

[jira] [Commented] (YARN-8080) YARN native service should support component restart policy

2018-05-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463195#comment-16463195
 ] 

genericqa commented on YARN-8080:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 21s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 61 new + 115 unchanged - 0 fixed = 176 total (was 115) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 44s{color} 
| {color:red} hadoop-yarn-services-core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.service.TestServiceAM |
|   | hadoop.yarn.service.monitor.TestServiceMonitor |
|   | hadoop.yarn.service.TestYarnNativeServices |
\\
\\
|| Subsystem || Report/Notes ||

[jira] [Updated] (YARN-4677) RMNodeResourceUpdateEvent update from scheduler can lead to race condition

2018-05-03 Thread Greg Phillips (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Phillips updated YARN-4677:

Attachment: (was: YARN-4677-branch-2.001.patch)

> RMNodeResourceUpdateEvent update from scheduler can lead to race condition
> --
>
> Key: YARN-4677
> URL: https://issues.apache.org/jira/browse/YARN-4677
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful, resourcemanager, scheduler
>Affects Versions: 2.7.1
>Reporter: Brook Zhou
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-4677.01.patch
>
>
> When a node is in the decommissioning state, there is a time window between 
> completedContainer() and the RMNodeResourceUpdateEvent being handled in 
> scheduler.nodeUpdate (YARN-3223). 
> So if a scheduling effort happens within this window, a new container could 
> still get allocated on this node. An even worse case is if the scheduling effort 
> happens after the RMNodeResourceUpdateEvent is sent out but before it is 
> propagated to the SchedulerNode - then the total resource is lower than the 
> used resource and the available resource becomes negative. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4677) RMNodeResourceUpdateEvent update from scheduler can lead to race condition

2018-05-03 Thread Greg Phillips (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Phillips updated YARN-4677:

Attachment: YARN-4677-branch-2.001.patch

> RMNodeResourceUpdateEvent update from scheduler can lead to race condition
> --
>
> Key: YARN-4677
> URL: https://issues.apache.org/jira/browse/YARN-4677
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful, resourcemanager, scheduler
>Affects Versions: 2.7.1
>Reporter: Brook Zhou
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-4677-branch-2.001.patch, YARN-4677.01.patch
>
>
> When a node is in the decommissioning state, there is a time window between 
> completedContainer() and the RMNodeResourceUpdateEvent being handled in 
> scheduler.nodeUpdate (YARN-3223). 
> So if a scheduling effort happens within this window, a new container could 
> still get allocated on this node. An even worse case is if the scheduling effort 
> happens after the RMNodeResourceUpdateEvent is sent out but before it is 
> propagated to the SchedulerNode - then the total resource is lower than the 
> used resource and the available resource becomes negative. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8223) ClassNotFoundException when auxiliary service is loaded from HDFS

2018-05-03 Thread Zian Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463114#comment-16463114
 ] 

Zian Chen edited comment on YARN-8223 at 5/3/18 10:42 PM:
--

Really appreciate [~eyang]'s help with testing the patch. To make this easier to 
use in the future, let's add the info to the doc.


was (Author: zian chen):
Really appreciate [~eyang]'s help with testing the patch. To make this easier to 
use in the future, let's open a linked documentation JIRA to document how to 
configure the remote path properly.

> ClassNotFoundException when auxiliary service is loaded from HDFS
> -
>
> Key: YARN-8223
> URL: https://issues.apache.org/jira/browse/YARN-8223
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Charan Hebri
>Assignee: Zian Chen
>Priority: Blocker
> Attachments: YARN-8223.001.patch
>
>
> Loading an auxiliary jar from a local location on a node manager works as 
> expected,
> {noformat}
> 2018-04-26 15:09:26,179 INFO  util.ApplicationClassLoader 
> (ApplicationClassLoader.java:(98)) - classpath: 
> [file:/grid/0/hadoop/yarn/local/aux-service-local.jar]
> 2018-04-26 15:09:26,179 INFO  util.ApplicationClassLoader 
> (ApplicationClassLoader.java:(99)) - system classes: [java., 
> javax.accessibility., javax.activation., javax.activity., javax.annotation., 
> javax.annotation.processing., javax.crypto., javax.imageio., javax.jws., 
> javax.lang.model., -javax.management.j2ee., javax.management., javax.naming., 
> javax.net., javax.print., javax.rmi., javax.script., 
> -javax.security.auth.message., javax.security.auth., javax.security.cert., 
> javax.security.sasl., javax.sound., javax.sql., javax.swing., javax.tools., 
> javax.transaction., -javax.xml.registry., -javax.xml.rpc., javax.xml., 
> org.w3c.dom., org.xml.sax., org.apache.commons.logging., org.apache.log4j., 
> -org.apache.hadoop.hbase., org.apache.hadoop., core-default.xml, 
> hdfs-default.xml, mapred-default.xml, yarn-default.xml]
> 2018-04-26 15:09:26,181 INFO  containermanager.AuxServices 
> (AuxServices.java:serviceInit(252)) - The aux service:test_aux_local are 
> using the custom classloader
> 2018-04-26 15:09:26,182 WARN  containermanager.AuxServices 
> (AuxServices.java:serviceInit(268)) - The Auxiliary Service named 
> 'test_aux_local' in the configuration is for class 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxiliaryServiceWithCustomClassLoader
>  which has a name of 'org.apache.auxtest.AuxServiceFromLocal with custom 
> class loader'. Because these are not the same tools trying to send 
> ServiceData and read Service Meta Data may have issues unless the refer to 
> the name in the config.
> 2018-04-26 15:09:26,182 INFO  containermanager.AuxServices 
> (AuxServices.java:addService(103)) - Adding auxiliary service 
> org.apache.auxtest.AuxServiceFromLocal with custom class loader, 
> "test_aux_local"{noformat}
> But loading the same jar from a location on HDFS fails with a 
> ClassNotFoundException.
> {noformat}
> 018-04-26 15:14:39,683 INFO  util.ApplicationClassLoader 
> (ApplicationClassLoader.java:(98)) - classpath: []
> 2018-04-26 15:14:39,683 INFO  util.ApplicationClassLoader 
> (ApplicationClassLoader.java:(99)) - system classes: [java., 
> javax.accessibility., javax.activation., javax.activity., javax.annotation., 
> javax.annotation.processing., javax.crypto., javax.imageio., javax.jws., 
> javax.lang.model., -javax.management.j2ee., javax.management., javax.naming., 
> javax.net., javax.print., javax.rmi., javax.script., 
> -javax.security.auth.message., javax.security.auth., javax.security.cert., 
> javax.security.sasl., javax.sound., javax.sql., javax.swing., javax.tools., 
> javax.transaction., -javax.xml.registry., -javax.xml.rpc., javax.xml., 
> org.w3c.dom., org.xml.sax., org.apache.commons.logging., org.apache.log4j., 
> -org.apache.hadoop.hbase., org.apache.hadoop., core-default.xml, 
> hdfs-default.xml, mapred-default.xml, yarn-default.xml]
> 2018-04-26 15:14:39,687 INFO  service.AbstractService 
> (AbstractService.java:noteFailure(267)) - Service 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices failed 
> in state INITED
> java.lang.ClassNotFoundException: org.apache.auxtest.AuxServiceFromLocal
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at 
> org.apache.hadoop.util.ApplicationClassLoader.loadClass(ApplicationClassLoader.java:189)
>   at 
> org.apache.hadoop.util.ApplicationClassLoader.loadClass(ApplicationClassLoader.java:157)
>   at 

[jira] [Commented] (YARN-7818) Remove privileged operation warnings during container launch for DefaultLinuxContainerRuntime

2018-05-03 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463145#comment-16463145
 ] 

Shane Kumpf commented on YARN-7818:
---

Thanks for the review, [~billie.rinaldi]. I agree that these should be 
consistent. I've attached a new patch to address that change.

> Remove privileged operation warnings during container launch for 
> DefaultLinuxContainerRuntime
> -
>
> Key: YARN-7818
> URL: https://issues.apache.org/jira/browse/YARN-7818
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yesha Vora
>Assignee: Shane Kumpf
>Priority: Major
> Attachments: YARN-7818.001.patch, YARN-7818.002.patch
>
>
> steps:
>  1) Run Dshell Application
> {code:java}
> yarn  org.apache.hadoop.yarn.applications.distributedshell.Client -jar 
> /usr/hdp/3.0.0.0-751/hadoop-yarn/hadoop-yarn-applications-distributedshell-*.jar
>  -keep_containers_across_application_attempts -timeout 90 -shell_command 
> "sleep 110" -num_containers 4{code}
> 2) Find out host where AM is running. 
>  3) Find Containers launched by application
>  4) Restart NM where AM is running
>  5) Validate that new attempt is not started and containers launched before 
> restart are in RUNNING state.
> In this test, step#5 fails because containers failed to launch with error 143
> {code:java}
> 2018-01-24 09:48:30,547 INFO  container.ContainerImpl 
> (ContainerImpl.java:handle(2108)) - Container 
> container_e04_1516787230461_0001_01_03 transitioned from RUNNING to 
> KILLING
> 2018-01-24 09:48:30,547 INFO  launcher.ContainerLaunch 
> (ContainerLaunch.java:cleanupContainer(668)) - Cleaning up container 
> container_e04_1516787230461_0001_01_03
> 2018-01-24 09:48:30,552 WARN  privileged.PrivilegedOperationExecutor 
> (PrivilegedOperationExecutor.java:executePrivilegedOperation(174)) - Shell 
> execution returned exit code: 143. Privileged Execution Operation Stderr:
> Stdout: main : command provided 1
> main : run as user is hrt_qa
> main : requested yarn user is hrt_qa
> Getting exit code file...
> Creating script paths...
> Writing pid file...
> Writing to tmp file 
> /grid/0/hadoop/yarn/local/nmPrivate/application_1516787230461_0001/container_e04_1516787230461_0001_01_03/container_e04_1516787230461_0001_01_03.pid.tmp
> Writing to cgroup task files...
> Creating local dirs...
> Launching container...
> Getting exit code file...
> Creating script paths...
> Full command array for failed execution:
> [/usr/hdp/3.0.0.0-751/hadoop-yarn/bin/container-executor, hrt_qa, hrt_qa, 1, 
> application_1516787230461_0001, container_e04_1516787230461_0001_01_03, 
> /grid/0/hadoop/yarn/local/usercache/hrt_qa/appcache/application_1516787230461_0001/container_e04_1516787230461_0001_01_03,
>  
> /grid/0/hadoop/yarn/local/nmPrivate/application_1516787230461_0001/container_e04_1516787230461_0001_01_03/launch_container.sh,
>  
> /grid/0/hadoop/yarn/local/nmPrivate/application_1516787230461_0001/container_e04_1516787230461_0001_01_03/container_e04_1516787230461_0001_01_03.tokens,
>  
> /grid/0/hadoop/yarn/local/nmPrivate/application_1516787230461_0001/container_e04_1516787230461_0001_01_03/container_e04_1516787230461_0001_01_03.pid,
>  /grid/0/hadoop/yarn/local, /grid/0/hadoop/yarn/log, cgroups=none]
> 2018-01-24 09:48:30,553 WARN  runtime.DefaultLinuxContainerRuntime 
> (DefaultLinuxContainerRuntime.java:launchContainer(127)) - Launch container 
> failed. Exception:
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException:
>  ExitCodeException exitCode=143:
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:180)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime.launchContainer(DefaultLinuxContainerRuntime.java:124)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DelegatingLinuxContainerRuntime.launchContainer(DelegatingLinuxContainerRuntime.java:152)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:549)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.launchContainer(ContainerLaunch.java:465)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:285)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:95)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> 

[jira] [Updated] (YARN-7818) Remove privileged operation warnings during container launch for DefaultLinuxContainerRuntime

2018-05-03 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-7818:
--
Attachment: YARN-7818.002.patch

> Remove privileged operation warnings during container launch for 
> DefaultLinuxContainerRuntime
> -
>
> Key: YARN-7818
> URL: https://issues.apache.org/jira/browse/YARN-7818
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yesha Vora
>Assignee: Shane Kumpf
>Priority: Major
> Attachments: YARN-7818.001.patch, YARN-7818.002.patch
>
>
> steps:
>  1) Run Dshell Application
> {code:java}
> yarn  org.apache.hadoop.yarn.applications.distributedshell.Client -jar 
> /usr/hdp/3.0.0.0-751/hadoop-yarn/hadoop-yarn-applications-distributedshell-*.jar
>  -keep_containers_across_application_attempts -timeout 90 -shell_command 
> "sleep 110" -num_containers 4{code}
> 2) Find out host where AM is running. 
>  3) Find Containers launched by application
>  4) Restart NM where AM is running
>  5) Validate that new attempt is not started and containers launched before 
> restart are in RUNNING state.
> In this test, step#5 fails because containers failed to launch with error 143
> {code:java}
> 2018-01-24 09:48:30,547 INFO  container.ContainerImpl 
> (ContainerImpl.java:handle(2108)) - Container 
> container_e04_1516787230461_0001_01_03 transitioned from RUNNING to 
> KILLING
> 2018-01-24 09:48:30,547 INFO  launcher.ContainerLaunch 
> (ContainerLaunch.java:cleanupContainer(668)) - Cleaning up container 
> container_e04_1516787230461_0001_01_03
> 2018-01-24 09:48:30,552 WARN  privileged.PrivilegedOperationExecutor 
> (PrivilegedOperationExecutor.java:executePrivilegedOperation(174)) - Shell 
> execution returned exit code: 143. Privileged Execution Operation Stderr:
> Stdout: main : command provided 1
> main : run as user is hrt_qa
> main : requested yarn user is hrt_qa
> Getting exit code file...
> Creating script paths...
> Writing pid file...
> Writing to tmp file 
> /grid/0/hadoop/yarn/local/nmPrivate/application_1516787230461_0001/container_e04_1516787230461_0001_01_03/container_e04_1516787230461_0001_01_03.pid.tmp
> Writing to cgroup task files...
> Creating local dirs...
> Launching container...
> Getting exit code file...
> Creating script paths...
> Full command array for failed execution:
> [/usr/hdp/3.0.0.0-751/hadoop-yarn/bin/container-executor, hrt_qa, hrt_qa, 1, 
> application_1516787230461_0001, container_e04_1516787230461_0001_01_03, 
> /grid/0/hadoop/yarn/local/usercache/hrt_qa/appcache/application_1516787230461_0001/container_e04_1516787230461_0001_01_03,
>  
> /grid/0/hadoop/yarn/local/nmPrivate/application_1516787230461_0001/container_e04_1516787230461_0001_01_03/launch_container.sh,
>  
> /grid/0/hadoop/yarn/local/nmPrivate/application_1516787230461_0001/container_e04_1516787230461_0001_01_03/container_e04_1516787230461_0001_01_03.tokens,
>  
> /grid/0/hadoop/yarn/local/nmPrivate/application_1516787230461_0001/container_e04_1516787230461_0001_01_03/container_e04_1516787230461_0001_01_03.pid,
>  /grid/0/hadoop/yarn/local, /grid/0/hadoop/yarn/log, cgroups=none]
> 2018-01-24 09:48:30,553 WARN  runtime.DefaultLinuxContainerRuntime 
> (DefaultLinuxContainerRuntime.java:launchContainer(127)) - Launch container 
> failed. Exception:
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException:
>  ExitCodeException exitCode=143:
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:180)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime.launchContainer(DefaultLinuxContainerRuntime.java:124)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DelegatingLinuxContainerRuntime.launchContainer(DelegatingLinuxContainerRuntime.java:152)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:549)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.launchContainer(ContainerLaunch.java:465)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:285)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:95)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> 

[jira] [Updated] (YARN-8243) Flex down should first remove pending container requests (if any) and then kill running containers

2018-05-03 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-8243:


I am continuing to debug this and other issues around flex. Assigning this to 
myself and will work on a patch to fix these. 

> Flex down should first remove pending container requests (if any) and then 
> kill running containers
> --
>
> Key: YARN-8243
> URL: https://issues.apache.org/jira/browse/YARN-8243
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Gour Saha
>Priority: Major
>
> This is easy to test on a service with an anti-affinity component, to simulate 
> pending container requests. It can also be simulated by other means (no 
> resources left in the cluster, etc.).
> Service yarnfile used to test this -
> {code:java}
> {
>   "name": "sleeper-service",
>   "version": "1",
>   "components" :
>   [
> {
>   "name": "ping",
>   "number_of_containers": 2,
>   "resource": {
> "cpus": 1,
> "memory": "256"
>   },
>   "launch_command": "sleep 9000",
>   "placement_policy": {
> "constraints": [
>   {
> "type": "ANTI_AFFINITY",
> "scope": "NODE",
> "target_tags": [
>   "ping"
> ]
>   }
> ]
>   }
> }
>   ]
> }
> {code}
> Launch a service with the above yarnfile as below -
> {code:java}
> yarn app -launch simple-aa-1 simple_AA.json
> {code}
> Let's assume there are only 5 nodes in this cluster. Now, flex the above 
> service to one container more than the number of nodes (6 in my case).
> {code:java}
> yarn app -flex simple-aa-1 -component ping 6
> {code}
> Only 5 containers will be allocated and running for simple-aa-1. At this 
> point, flex it down to 5 containers -
> {code:java}
> yarn app -flex simple-aa-1 -component ping 5
> {code}
> This is what is seen in the serviceam log at this point -
> {noformat}
> 2018-05-03 20:17:38,469 [IPC Server handler 0 on 38124] INFO  
> service.ClientAMService - Flexing component ping to 5
> 2018-05-03 20:17:38,469 [Component  dispatcher] INFO  component.Component - 
> [FLEX DOWN COMPONENT ping]: scaling down from 6 to 5
> 2018-05-03 20:17:38,470 [Component  dispatcher] INFO  
> instance.ComponentInstance - [COMPINSTANCE ping-4 : 
> container_1525297086734_0013_01_06]: Flexed down by user, destroying.
> 2018-05-03 20:17:38,473 [Component  dispatcher] INFO  component.Component - 
> [COMPONENT ping] Transitioned from FLEXING to STABLE on FLEX event.
> 2018-05-03 20:17:38,474 [pool-5-thread-8] INFO  
> registry.YarnRegistryViewForProviders - [COMPINSTANCE ping-4 : 
> container_1525297086734_0013_01_06]: Deleting registry path 
> /users/root/services/yarn-service/simple-aa-1/components/ctr-1525297086734-0013-01-06
> 2018-05-03 20:17:38,476 [Component  dispatcher] ERROR component.Component - 
> [COMPONENT ping]: Invalid event CHECK_STABLE at STABLE
> org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: 
> CHECK_STABLE at STABLE
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:388)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
>   at 
> org.apache.hadoop.yarn.service.component.Component.handle(Component.java:913)
>   at 
> org.apache.hadoop.yarn.service.ServiceScheduler$ComponentEventHandler.handle(ServiceScheduler.java:574)
>   at 
> org.apache.hadoop.yarn.service.ServiceScheduler$ComponentEventHandler.handle(ServiceScheduler.java:563)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
>   at java.lang.Thread.run(Thread.java:745)
> 2018-05-03 20:17:38,480 [Component  dispatcher] ERROR component.Component - 
> [COMPONENT ping]: Invalid event CHECK_STABLE at STABLE
> org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: 
> CHECK_STABLE at STABLE
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:388)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
>   at 
> 

[jira] [Assigned] (YARN-8243) Flex down should first remove pending container requests (if any) and then kill running containers

2018-05-03 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha reassigned YARN-8243:
---

Assignee: Gour Saha

> Flex down should first remove pending container requests (if any) and then 
> kill running containers
> --
>
> Key: YARN-8243
> URL: https://issues.apache.org/jira/browse/YARN-8243
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
>
> This is easy to test on a service with an anti-affinity component, to simulate 
> pending container requests. It can also be simulated by other means (no 
> resources left in the cluster, etc.).
> Service yarnfile used to test this -
> {code:java}
> {
>   "name": "sleeper-service",
>   "version": "1",
>   "components" :
>   [
> {
>   "name": "ping",
>   "number_of_containers": 2,
>   "resource": {
> "cpus": 1,
> "memory": "256"
>   },
>   "launch_command": "sleep 9000",
>   "placement_policy": {
> "constraints": [
>   {
> "type": "ANTI_AFFINITY",
> "scope": "NODE",
> "target_tags": [
>   "ping"
> ]
>   }
> ]
>   }
> }
>   ]
> }
> {code}
> Launch a service with the above yarnfile as below -
> {code:java}
> yarn app -launch simple-aa-1 simple_AA.json
> {code}
> Let's assume there are only 5 nodes in this cluster. Now, flex the above 
> service to one container more than the number of nodes (6 in my case).
> {code:java}
> yarn app -flex simple-aa-1 -component ping 6
> {code}
> Only 5 containers will be allocated and running for simple-aa-1. At this 
> point, flex it down to 5 containers -
> {code:java}
> yarn app -flex simple-aa-1 -component ping 5
> {code}
> This is what is seen in the serviceam log at this point -
> {noformat}
> 2018-05-03 20:17:38,469 [IPC Server handler 0 on 38124] INFO  
> service.ClientAMService - Flexing component ping to 5
> 2018-05-03 20:17:38,469 [Component  dispatcher] INFO  component.Component - 
> [FLEX DOWN COMPONENT ping]: scaling down from 6 to 5
> 2018-05-03 20:17:38,470 [Component  dispatcher] INFO  
> instance.ComponentInstance - [COMPINSTANCE ping-4 : 
> container_1525297086734_0013_01_06]: Flexed down by user, destroying.
> 2018-05-03 20:17:38,473 [Component  dispatcher] INFO  component.Component - 
> [COMPONENT ping] Transitioned from FLEXING to STABLE on FLEX event.
> 2018-05-03 20:17:38,474 [pool-5-thread-8] INFO  
> registry.YarnRegistryViewForProviders - [COMPINSTANCE ping-4 : 
> container_1525297086734_0013_01_06]: Deleting registry path 
> /users/root/services/yarn-service/simple-aa-1/components/ctr-1525297086734-0013-01-06
> 2018-05-03 20:17:38,476 [Component  dispatcher] ERROR component.Component - 
> [COMPONENT ping]: Invalid event CHECK_STABLE at STABLE
> org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: 
> CHECK_STABLE at STABLE
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:388)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
>   at 
> org.apache.hadoop.yarn.service.component.Component.handle(Component.java:913)
>   at 
> org.apache.hadoop.yarn.service.ServiceScheduler$ComponentEventHandler.handle(ServiceScheduler.java:574)
>   at 
> org.apache.hadoop.yarn.service.ServiceScheduler$ComponentEventHandler.handle(ServiceScheduler.java:563)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
>   at java.lang.Thread.run(Thread.java:745)
> 2018-05-03 20:17:38,480 [Component  dispatcher] ERROR component.Component - 
> [COMPONENT ping]: Invalid event CHECK_STABLE at STABLE
> org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: 
> CHECK_STABLE at STABLE
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:388)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
>   at 
> 

[jira] [Commented] (YARN-8223) ClassNotFoundException when auxiliary service is loaded from HDFS

2018-05-03 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463120#comment-16463120
 ] 

Eric Yang commented on YARN-8223:
-

[~Zian Chen] Thank you for the patch.  Patch 001 works.  At this time, there is 
no documentation on how to load an aux service jar file from HDFS or from the 
local filesystem.  Could you add some information to:

hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/PluggableShuffleAndPluggableSort.md

Example of loading jar file from HDFS:
{code:java}
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle,AuxServiceFromHDFS</value>
    </property>

    <property>
        <name>yarn.nodemanager.aux-services.AuxServiceFromHDFS.remote-classpath</name>
        <value>/aux/test/aux-service-hdfs.jar</value>
    </property>

    <property>
        <name>yarn.nodemanager.aux-services.AuxServiceFromHDFS.class</name>
        <value>org.apache.auxtest.AuxServiceFromHDFS2</value>
    </property>{code}
Example of loading a jar file from the local file system:
{code:java}
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle,AuxServiceFromHDFS</value>
    </property>

    <property>
        <name>yarn.nodemanager.aux-services.AuxServiceFromHDFS.classpath</name>
        <value>/aux/test/aux-service-hdfs.jar</value>
    </property>

    <property>
        <name>yarn.nodemanager.aux-services.AuxServiceFromHDFS.class</name>
        <value>org.apache.auxtest.AuxServiceFromHDFS2</value>
    </property>{code}
This helps end users understand how to use these configurations.

> ClassNotFoundException when auxiliary service is loaded from HDFS
> -
>
> Key: YARN-8223
> URL: https://issues.apache.org/jira/browse/YARN-8223
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Charan Hebri
>Assignee: Zian Chen
>Priority: Blocker
> Attachments: YARN-8223.001.patch
>
>
> Loading an auxiliary jar from a local location on a node manager works as 
> expected,
> {noformat}
> 2018-04-26 15:09:26,179 INFO  util.ApplicationClassLoader 
> (ApplicationClassLoader.java:(98)) - classpath: 
> [file:/grid/0/hadoop/yarn/local/aux-service-local.jar]
> 2018-04-26 15:09:26,179 INFO  util.ApplicationClassLoader 
> (ApplicationClassLoader.java:(99)) - system classes: [java., 
> javax.accessibility., javax.activation., javax.activity., javax.annotation., 
> javax.annotation.processing., javax.crypto., javax.imageio., javax.jws., 
> javax.lang.model., -javax.management.j2ee., javax.management., javax.naming., 
> javax.net., javax.print., javax.rmi., javax.script., 
> -javax.security.auth.message., javax.security.auth., javax.security.cert., 
> javax.security.sasl., javax.sound., javax.sql., javax.swing., javax.tools., 
> javax.transaction., -javax.xml.registry., -javax.xml.rpc., javax.xml., 
> org.w3c.dom., org.xml.sax., org.apache.commons.logging., org.apache.log4j., 
> -org.apache.hadoop.hbase., org.apache.hadoop., core-default.xml, 
> hdfs-default.xml, mapred-default.xml, yarn-default.xml]
> 2018-04-26 15:09:26,181 INFO  containermanager.AuxServices 
> (AuxServices.java:serviceInit(252)) - The aux service:test_aux_local are 
> using the custom classloader
> 2018-04-26 15:09:26,182 WARN  containermanager.AuxServices 
> (AuxServices.java:serviceInit(268)) - The Auxiliary Service named 
> 'test_aux_local' in the configuration is for class 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxiliaryServiceWithCustomClassLoader
>  which has a name of 'org.apache.auxtest.AuxServiceFromLocal with custom 
> class loader'. Because these are not the same tools trying to send 
> ServiceData and read Service Meta Data may have issues unless the refer to 
> the name in the config.
> 2018-04-26 15:09:26,182 INFO  containermanager.AuxServices 
> (AuxServices.java:addService(103)) - Adding auxiliary service 
> org.apache.auxtest.AuxServiceFromLocal with custom class loader, 
> "test_aux_local"{noformat}
> But loading the same jar from a location on HDFS fails with a 
> ClassNotFoundException.
> {noformat}
> 018-04-26 15:14:39,683 INFO  util.ApplicationClassLoader 
> (ApplicationClassLoader.java:(98)) - classpath: []
> 2018-04-26 15:14:39,683 INFO  util.ApplicationClassLoader 
> (ApplicationClassLoader.java:(99)) - system classes: [java., 
> javax.accessibility., javax.activation., javax.activity., javax.annotation., 
> javax.annotation.processing., javax.crypto., javax.imageio., javax.jws., 
> javax.lang.model., -javax.management.j2ee., javax.management., javax.naming., 
> javax.net., javax.print., javax.rmi., javax.script., 
> -javax.security.auth.message., javax.security.auth., javax.security.cert., 
> javax.security.sasl., javax.sound., javax.sql., javax.swing., javax.tools., 
> javax.transaction., -javax.xml.registry., -javax.xml.rpc., javax.xml., 
> org.w3c.dom., org.xml.sax., org.apache.commons.logging., org.apache.log4j., 
> -org.apache.hadoop.hbase., org.apache.hadoop., core-default.xml, 
> hdfs-default.xml, mapred-default.xml, yarn-default.xml]
> 

[jira] [Commented] (YARN-8163) Add support for Node Labels in opportunistic scheduling.

2018-05-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/YARN-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463115#comment-16463115
 ] 

Íñigo Goiri commented on YARN-8163:
---

I'm not sure if the unit tests in [^YARN-8163.003.patch] cover the PB 
conversion.
Can we double check that the unit test is going all the way to the 
serialization side?
I cannot find explicit unit tests for testing the PB side in trunk so we should 
cover them here.

> Add support for Node Labels in opportunistic scheduling.
> 
>
> Key: YARN-8163
> URL: https://issues.apache.org/jira/browse/YARN-8163
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-8163.002.patch, YARN-8163.003.patch, YARN-8163.patch
>
>
> Currently, the Opportunistic Scheduler doesn't honor node label constraints 
> and schedules containers based only on locality and load constraints. This Jira is 
> to add support in opportunistic scheduling to honor node labels in resource 
> requests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8223) ClassNotFoundException when auxiliary service is loaded from HDFS

2018-05-03 Thread Zian Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463114#comment-16463114
 ] 

Zian Chen commented on YARN-8223:
-

Really appreciate [~eyang] helping with testing the patch. To make this feature 
easier to use in the future, let's open a linked documentation Jira to document 
how to configure the remote path properly.

> ClassNotFoundException when auxiliary service is loaded from HDFS
> -
>
> Key: YARN-8223
> URL: https://issues.apache.org/jira/browse/YARN-8223
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Charan Hebri
>Assignee: Zian Chen
>Priority: Blocker
> Attachments: YARN-8223.001.patch
>
>
> Loading an auxiliary jar from a local location on a node manager works as 
> expected,
> {noformat}
> 2018-04-26 15:09:26,179 INFO  util.ApplicationClassLoader 
> (ApplicationClassLoader.java:(98)) - classpath: 
> [file:/grid/0/hadoop/yarn/local/aux-service-local.jar]
> 2018-04-26 15:09:26,179 INFO  util.ApplicationClassLoader 
> (ApplicationClassLoader.java:(99)) - system classes: [java., 
> javax.accessibility., javax.activation., javax.activity., javax.annotation., 
> javax.annotation.processing., javax.crypto., javax.imageio., javax.jws., 
> javax.lang.model., -javax.management.j2ee., javax.management., javax.naming., 
> javax.net., javax.print., javax.rmi., javax.script., 
> -javax.security.auth.message., javax.security.auth., javax.security.cert., 
> javax.security.sasl., javax.sound., javax.sql., javax.swing., javax.tools., 
> javax.transaction., -javax.xml.registry., -javax.xml.rpc., javax.xml., 
> org.w3c.dom., org.xml.sax., org.apache.commons.logging., org.apache.log4j., 
> -org.apache.hadoop.hbase., org.apache.hadoop., core-default.xml, 
> hdfs-default.xml, mapred-default.xml, yarn-default.xml]
> 2018-04-26 15:09:26,181 INFO  containermanager.AuxServices 
> (AuxServices.java:serviceInit(252)) - The aux service:test_aux_local are 
> using the custom classloader
> 2018-04-26 15:09:26,182 WARN  containermanager.AuxServices 
> (AuxServices.java:serviceInit(268)) - The Auxiliary Service named 
> 'test_aux_local' in the configuration is for class 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxiliaryServiceWithCustomClassLoader
>  which has a name of 'org.apache.auxtest.AuxServiceFromLocal with custom 
> class loader'. Because these are not the same tools trying to send 
> ServiceData and read Service Meta Data may have issues unless the refer to 
> the name in the config.
> 2018-04-26 15:09:26,182 INFO  containermanager.AuxServices 
> (AuxServices.java:addService(103)) - Adding auxiliary service 
> org.apache.auxtest.AuxServiceFromLocal with custom class loader, 
> "test_aux_local"{noformat}
> But loading the same jar from a location on HDFS fails with a 
> ClassNotFoundException.
> {noformat}
> 018-04-26 15:14:39,683 INFO  util.ApplicationClassLoader 
> (ApplicationClassLoader.java:(98)) - classpath: []
> 2018-04-26 15:14:39,683 INFO  util.ApplicationClassLoader 
> (ApplicationClassLoader.java:(99)) - system classes: [java., 
> javax.accessibility., javax.activation., javax.activity., javax.annotation., 
> javax.annotation.processing., javax.crypto., javax.imageio., javax.jws., 
> javax.lang.model., -javax.management.j2ee., javax.management., javax.naming., 
> javax.net., javax.print., javax.rmi., javax.script., 
> -javax.security.auth.message., javax.security.auth., javax.security.cert., 
> javax.security.sasl., javax.sound., javax.sql., javax.swing., javax.tools., 
> javax.transaction., -javax.xml.registry., -javax.xml.rpc., javax.xml., 
> org.w3c.dom., org.xml.sax., org.apache.commons.logging., org.apache.log4j., 
> -org.apache.hadoop.hbase., org.apache.hadoop., core-default.xml, 
> hdfs-default.xml, mapred-default.xml, yarn-default.xml]
> 2018-04-26 15:14:39,687 INFO  service.AbstractService 
> (AbstractService.java:noteFailure(267)) - Service 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices failed 
> in state INITED
> java.lang.ClassNotFoundException: org.apache.auxtest.AuxServiceFromLocal
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at 
> org.apache.hadoop.util.ApplicationClassLoader.loadClass(ApplicationClassLoader.java:189)
>   at 
> org.apache.hadoop.util.ApplicationClassLoader.loadClass(ApplicationClassLoader.java:157)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at 
> 

[jira] [Commented] (YARN-8080) YARN native service should support component restart policy

2018-05-03 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463104#comment-16463104
 ] 

Suma Shivaprasad commented on YARN-8080:


Attached updated patch with review comments for flex scenarios pending

> YARN native service should support component restart policy
> ---
>
> Key: YARN-8080
> URL: https://issues.apache.org/jira/browse/YARN-8080
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
>Priority: Critical
> Attachments: YARN-8080.001.patch, YARN-8080.002.patch, 
> YARN-8080.003.patch, YARN-8080.005.patch, YARN-8080.006.patch, 
> YARN-8080.007.patch
>
>
> Existing native service assumes the service is long running and never 
> finishes. Containers will be restarted even if exit code == 0. 
> To support broader use cases, we need to allow the restart policy of a component 
> to be specified by users. Propose to have the following policies:
> 1) Always: containers are always restarted by the framework regardless of container 
> exit status. This is the existing/default behavior.
> 2) Never: Do not restart containers in any case after the container finishes: To 
> support job-like workloads (for example a Tensorflow training job). If a task 
> exits with code == 0, we should not restart the task. This can be used by 
> services which are not restartable/recoverable.
> 3) On-failure: Similar to above, only restart task with exitcode != 0. 
> Behaviors after component *instance* finalize (Succeeded or Failed when 
> restart_policy != ALWAYS): 
> 1) For single component, single instance: complete service.
> 2) For single component, multiple instance: other running instances from the 
> same component won't be affected by the finalized component instance. Service 
> will be terminated once all instances finalized. 
> 3) For multiple components: Service will be terminated once all components 
> finalized.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8080) YARN native service should support component restart policy

2018-05-03 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-8080:
---
Attachment: YARN-8080.007.patch

> YARN native service should support component restart policy
> ---
>
> Key: YARN-8080
> URL: https://issues.apache.org/jira/browse/YARN-8080
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
>Priority: Critical
> Attachments: YARN-8080.001.patch, YARN-8080.002.patch, 
> YARN-8080.003.patch, YARN-8080.005.patch, YARN-8080.006.patch, 
> YARN-8080.007.patch
>
>
> Existing native service assumes the service is long running and never 
> finishes. Containers will be restarted even if exit code == 0. 
> To support broader use cases, we need to allow the restart policy of a component 
> to be specified by users. Propose to have the following policies:
> 1) Always: containers are always restarted by the framework regardless of container 
> exit status. This is the existing/default behavior.
> 2) Never: Do not restart containers in any case after the container finishes: To 
> support job-like workloads (for example a Tensorflow training job). If a task 
> exits with code == 0, we should not restart the task. This can be used by 
> services which are not restartable/recoverable.
> 3) On-failure: Similar to above, only restart task with exitcode != 0. 
> Behaviors after component *instance* finalize (Succeeded or Failed when 
> restart_policy != ALWAYS): 
> 1) For single component, single instance: complete service.
> 2) For single component, multiple instance: other running instances from the 
> same component won't be affected by the finalized component instance. Service 
> will be terminated once all instances finalized. 
> 3) For multiple components: Service will be terminated once all components 
> finalized.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8080) YARN native service should support component restart policy

2018-05-03 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463097#comment-16463097
 ] 

Suma Shivaprasad commented on YARN-8080:


Thanks [~gsaha] for reviews and offline discussions on the patch.  While 
testing the flex scenarios as suggested by [~gsaha] with the patch, I ran into 
the following issues.


What does flexing up/down a component with "restart_policy": NEVER / 
"restart_policy": ON_FAILURE mean?

Consider the following scenario where a component has 4 instances configured 
and restart_policy="NEVER". Assume that 2 of these containers have exited 
successfully after execution and 2 are still running.

1. Flex up
Now if the user flexes the number of containers to 3, should we even support 
flexing up of containers in this case? For example, it could be a Tensorflow DAG 
(YARN-8135) in which flexing up may or may not make sense unless the Tensorflow 
client needs more resources and is able to make use of the newly allocated 
containers (like the dynamic allocation use case in Spark). [~leftnoteasy] 
could comment on this. We could add support for a flag in the YARN service spec 
to allow/disallow flexing for services, and the user can choose to disallow this 
for specific apps.

2. Flex down
Also, flex down for such services needs to consider the current number of 
running containers (instead of the configured number of containers, which is the 
current behaviour) and scale them down accordingly. For example, if the component 
instance count during flex is set to 1, bring down the number of running containers 
to 1.

[~billie.rinaldi] [~leftnoteasy] [~gsaha] [~eyang] Thoughts?









> YARN native service should support component restart policy
> ---
>
> Key: YARN-8080
> URL: https://issues.apache.org/jira/browse/YARN-8080
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
>Priority: Critical
> Attachments: YARN-8080.001.patch, YARN-8080.002.patch, 
> YARN-8080.003.patch, YARN-8080.005.patch, YARN-8080.006.patch
>
>
> Existing native service assumes the service is long running and never 
> finishes. Containers will be restarted even if exit code == 0. 
> To support broader use cases, we need to allow the restart policy of a component 
> to be specified by users. Propose to have the following policies:
> 1) Always: containers are always restarted by the framework regardless of container 
> exit status. This is the existing/default behavior.
> 2) Never: Do not restart containers in any case after the container finishes: To 
> support job-like workloads (for example a Tensorflow training job). If a task 
> exits with code == 0, we should not restart the task. This can be used by 
> services which are not restartable/recoverable.
> 3) On-failure: Similar to above, only restart task with exitcode != 0. 
> Behaviors after component *instance* finalize (Succeeded or Failed when 
> restart_policy != ALWAYS): 
> 1) For single component, single instance: complete service.
> 2) For single component, multiple instance: other running instances from the 
> same component won't be affected by the finalized component instance. Service 
> will be terminated once all instances finalized. 
> 3) For multiple components: Service will be terminated once all components 
> finalized.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8207) Docker container launch use popen have risk of shell expansion

2018-05-03 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463048#comment-16463048
 ] 

Jason Lowe commented on YARN-8207:
--

Thanks for updating the patch!  It's looking much better.  I didn't finish 
getting through the patch, but here's what I have so far.

MAX_RETRIES is unused and should be removed.

chosen_container_log_dir and init_log_dir are not used and should be removed.  
In doing so we'll need to go back to freeing container_log_dir directly.

Nit: It would improve readability to use a typedef for the struct so we don't 
have to keep putting "struct" everywhere it's used, e.g.:
{code}
typedef struct args {
  [...]
} args_t;
{code}
or
{code}
typedef struct args args_t;
{code}

Nit: "out" is an odd name for a field in the args struct for the array of 
arguments.  Maybe just "args"?  Similarly maybe "index" should be something 
like "num_args" since that's what it is representing in the structure.

run_docker frees the vector of arguments but not the arguments themselves.

The following comment was updated but the permissions still appear to be 0700 
in practice (and should be all that is required)?
{noformat}
-  // Copy script file with permissions 700
+  // Copy script file with permissions 751
{noformat}

If the {{fork}} fails in launch_docker_container_as_user it would be good to 
print strerror(errno) to the error file so there's a clue as to the nature of 
the error.

Is there a reason not to use stpcpy in flatten?  It would simplify it quite a 
bit and eliminate the pointer arithmetic.
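
For illustration, a rough sketch of what flatten could look like with stpcpy (the 
signature and the join-with-spaces behavior are assumptions for this sketch, not 
taken from the patch):
{code}
#include <stdlib.h>
#include <string.h>

// Join a NULL-terminated argument vector into one space-separated string.
// stpcpy returns a pointer to the '\0' it wrote, so appending needs no
// manual pointer arithmetic.
static char *flatten(char **args) {
  size_t total = 1;                    // room for the trailing '\0'
  for (int i = 0; args[i] != NULL; i++) {
    total += strlen(args[i]) + 1;      // argument plus a separating space
  }
  char *buffer = malloc(total);
  if (buffer == NULL) {
    return NULL;
  }
  char *cur = buffer;
  *cur = '\0';                         // handles the empty-vector case
  for (int i = 0; args[i] != NULL; i++) {
    if (i > 0) {
      cur = stpcpy(cur, " ");
    }
    cur = stpcpy(cur, args[i]);
  }
  return buffer;
}
{code}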

util.c has only whitespace changes.

docker-util.c added limits.h, but I don't think it was necessary.

add_to_args should be checking >= DOCKER_ARGS_MAX otherwise it will allow one 
more arg than the buffer can hold.

add_to_args silently ignores (and leaks) the cloned argument if args->out is 
NULL.  args->out should not be null in practice.  If a null check is deemed 
useful then it should be at the beginning before work is done and return an 
error to indicate the arg was not added.   In any case it should only increment 
the arg count when an argument was added.

free_args should set the argument count back to zero in case someone wants to 
reuse the structure to build another argument list.

free_args should null out each argument pointer after it frees it.  The 
{{args->out[i] = NULL}} statement should be inside the {{for}} loop or it just 
nulls out the element after the last which isn't very useful.
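
Putting the add_to_args/free_args points above together, a rough sketch of the 
intended behavior (args_t, the field names, and DOCKER_ARG_MAX are placeholders for 
whatever the patch actually uses):
{code}
#include <stdlib.h>
#include <string.h>

#define DOCKER_ARG_MAX 1024            // placeholder for the patch's constant

typedef struct args {
  int num_args;
  char **out;                          // vector of DOCKER_ARG_MAX + 1 slots
} args_t;

// Returns 0 on success, non-zero when nothing was added.
static int add_to_args(args_t *args, const char *value) {
  if (args == NULL || args->out == NULL || value == NULL) {
    return 1;                          // check up front, before any work is done
  }
  if (args->num_args >= DOCKER_ARG_MAX) {
    return 1;                          // >= so one extra arg can never slip in
  }
  char *copy = strdup(value);
  if (copy == NULL) {
    return 1;
  }
  args->out[args->num_args++] = copy;  // count the arg only once it is stored
  return 0;
}

static void free_args(args_t *args) {
  for (int i = 0; i < args->num_args; i++) {
    free(args->out[i]);
    args->out[i] = NULL;               // null each pointer as it is freed
  }
  args->num_args = 0;                  // the structure can be reused afterwards
}
{code}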

This previous comment still applies.  Even though we are adjusting the index 
the arg is not freed and nulled so it will end up being sent as an argument if 
the args buffer is subsequently used (unless another argument is appended and 
smashes it).
bq. add_param_to_command_if_allowed is trying to reset the index on errors, but 
it fails to re-NULL out the written index values (if there were any). Either we 
should assume all errors are fatal and therefore the buffer doesn't need to be 
reset or the reset logic needs to be fixed.

Why do many of the get_docker_*_command functions smash the args count to zero? 
 IMHO the caller should be responsible for initializing the args structure.  At 
best the get_docker_*_command functions should be calling free_args rather than 
smashing the arg count, otherwise they risk leaking any arguments that were 
filled in.

get_docker_volume_command cleans up the arg structure on error but many other 
get_docker_*_command functions simply return with the args partially 
initialized on error.  This should be consistent.

get_docker_load_command should unconditionally call {{free(docker)}} then check 
the return code for error since both code paths always call {{free(docker)}}.  
Similar comment for get_docker_rm_command, get_docker_stop_command, 
get_docker_kill_command, get_docker_run_command, etc.

get_docker_kill_command allocates a 40-byte buffer to signal_buffer then 
immediately leaks it.

Why does add_mounts cast string literals to (char*)?  It compiles for me with 
ro_suffix remaining const char*.  If for some reason they need to be char* to 
call make_string then it would be simpler to cast it at the call point rather 
than at each initialization.

Nit: The end of get_mount_source should be simplified to just {{return 
strndup(mount, len);}}

reset_args should call free_args or old arguments will be leaked.

reset_args does not clear the memory that was allocated, so when we try to use 
args->out as an array terminated by a NULL pointer when we call exec it may not 
actually be properly terminated.  It should call calloc instead of malloc.

reset_args needs to allocate DOCKER_ARG_MAX + 1 pointers in order to hold 
DOCKER_ARG_MAX arguments and still leave room for the NULL pointer terminator.

make_string does not check for vsnprintf returning an error.
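
A sketch of those reset_args/make_string points, using the same placeholder args_t 
as above (redeclared here so the snippet stands alone; names are assumptions, not 
the patch's actual ones):
{code}
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

#define DOCKER_ARG_MAX 1024            // placeholder for the patch's constant

typedef struct args {
  int num_args;
  char **out;
} args_t;

// Free any old arguments first, then hand back a zeroed vector with one extra
// slot so the argv passed to exec stays NULL-terminated even at full capacity.
static int reset_args(args_t *args) {
  if (args->out != NULL) {
    for (int i = 0; i < args->num_args; i++) {
      free(args->out[i]);              // the free_args behavior: no leaks
    }
    free(args->out);
  }
  args->num_args = 0;
  args->out = calloc(DOCKER_ARG_MAX + 1, sizeof(char *));
  return args->out == NULL ? 1 : 0;
}

// Check vsnprintf for a negative (error) return before trusting its length.
static char *make_string(const char *fmt, ...) {
  va_list vargs;
  va_start(vargs, fmt);
  int len = vsnprintf(NULL, 0, fmt, vargs);
  va_end(vargs);
  if (len < 0) {
    return NULL;
  }
  char *buf = malloc((size_t) len + 1);
  if (buf == NULL) {
    return NULL;
  }
  va_start(vargs, fmt);
  vsnprintf(buf, (size_t) len + 1, fmt, vargs);
  va_end(vargs);
  return buf;
}
{code}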


> Docker container launch use popen have risk of shell expansion
> 

[jira] [Commented] (YARN-8179) Preemption does not happen due to natural_termination_factor when DRF is used

2018-05-03 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463041#comment-16463041
 ] 

Eric Payne commented on YARN-8179:
--

[~sunilg], if there is no objection, I'll commit this tomorrow.

> Preemption does not happen due to natural_termination_factor when DRF is used
> -
>
> Key: YARN-8179
> URL: https://issues.apache.org/jira/browse/YARN-8179
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: kyungwan nam
>Assignee: kyungwan nam
>Priority: Major
> Attachments: YARN-8179.001.patch, YARN-8179.002.patch, 
> YARN-8179.003.patch
>
>
> cluster
> * DominantResourceCalculator
> * QueueA : 50 (capacity) ~ 100 (max capacity)
> * QueueB : 50 (capacity) ~ 50 (max capacity)
> all of resources have been allocated to QueueA. (all Vcores are allocated to 
> QueueA)
> if App1 is submitted to QueueB, over-utilized QueueA should be preempted.
> but I've met a problem where preemption does not happen. It caused the App1 AM 
> to not be allocated.
> when App1 is submitted, pending resources for asking App1 AM would be 
> 
> so, the Vcores which need to be preempted for QueueB should be 1.
> but, it can be 0 due to natural_termination_factor (default is 0.2)
> we should guarantee that the resources to be preempted do not become 0 even when 
> applying natural_termination_factor



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8243) Flex down should first remove pending container requests (if any) and then kill running containers

2018-05-03 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-8243:

Description: 
This is easy to test on a service with an anti-affinity component, to simulate 
pending container requests. It can also be simulated by other means (no 
resources left in the cluster, etc.).

Service yarnfile used to test this -
{code:java}
{
  "name": "sleeper-service",
  "version": "1",
  "components" :
  [
{
  "name": "ping",
  "number_of_containers": 2,
  "resource": {
"cpus": 1,
"memory": "256"
  },
  "launch_command": "sleep 9000",
  "placement_policy": {
"constraints": [
  {
"type": "ANTI_AFFINITY",
"scope": "NODE",
"target_tags": [
  "ping"
]
  }
]
  }
}
  ]
}
{code}
Launch a service with the above yarnfile as below -
{code:java}
yarn app -launch simple-aa-1 simple_AA.json
{code}
Let's assume there are only 5 nodes in this cluster. Now, flex the above 
service to 1 more container than the number of nodes (6 in my case).
{code:java}
yarn app -flex simple-aa-1 -component ping 6
{code}
Only 5 containers will be allocated and running for simple-aa-1. At this point, 
flex it down to 5 containers -
{code:java}
yarn app -flex simple-aa-1 -component ping 5
{code}
This is what is seen in the service AM log at this point -
{noformat}
2018-05-03 20:17:38,469 [IPC Server handler 0 on 38124] INFO  
service.ClientAMService - Flexing component ping to 5
2018-05-03 20:17:38,469 [Component  dispatcher] INFO  component.Component - 
[FLEX DOWN COMPONENT ping]: scaling down from 6 to 5
2018-05-03 20:17:38,470 [Component  dispatcher] INFO  
instance.ComponentInstance - [COMPINSTANCE ping-4 : 
container_1525297086734_0013_01_06]: Flexed down by user, destroying.
2018-05-03 20:17:38,473 [Component  dispatcher] INFO  component.Component - 
[COMPONENT ping] Transitioned from FLEXING to STABLE on FLEX event.
2018-05-03 20:17:38,474 [pool-5-thread-8] INFO  
registry.YarnRegistryViewForProviders - [COMPINSTANCE ping-4 : 
container_1525297086734_0013_01_06]: Deleting registry path 
/users/root/services/yarn-service/simple-aa-1/components/ctr-1525297086734-0013-01-06
2018-05-03 20:17:38,476 [Component  dispatcher] ERROR component.Component - 
[COMPONENT ping]: Invalid event CHECK_STABLE at STABLE
org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: 
CHECK_STABLE at STABLE
at 
org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:388)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
at 
org.apache.hadoop.yarn.service.component.Component.handle(Component.java:913)
at 
org.apache.hadoop.yarn.service.ServiceScheduler$ComponentEventHandler.handle(ServiceScheduler.java:574)
at 
org.apache.hadoop.yarn.service.ServiceScheduler$ComponentEventHandler.handle(ServiceScheduler.java:563)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
at java.lang.Thread.run(Thread.java:745)
2018-05-03 20:17:38,480 [Component  dispatcher] ERROR component.Component - 
[COMPONENT ping]: Invalid event CHECK_STABLE at STABLE
org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: 
CHECK_STABLE at STABLE
at 
org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:388)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
at 
org.apache.hadoop.yarn.service.component.Component.handle(Component.java:913)
at 
org.apache.hadoop.yarn.service.ServiceScheduler$ComponentEventHandler.handle(ServiceScheduler.java:574)
at 
org.apache.hadoop.yarn.service.ServiceScheduler$ComponentEventHandler.handle(ServiceScheduler.java:563)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
at java.lang.Thread.run(Thread.java:745)
2018-05-03 20:17:38,578 [pool-5-thread-8] INFO  instance.ComponentInstance - 
[COMPINSTANCE ping-4 : container_1525297086734_0013_01_06]: Deleted 
component instance dir: 

[jira] [Created] (YARN-8243) Flex down should first remove pending container requests (if any) and then kill running containers

2018-05-03 Thread Gour Saha (JIRA)
Gour Saha created YARN-8243:
---

 Summary: Flex down should first remove pending container requests 
(if any) and then kill running containers
 Key: YARN-8243
 URL: https://issues.apache.org/jira/browse/YARN-8243
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: yarn-native-services
Affects Versions: 3.1.0
Reporter: Gour Saha


This is easy to test on a service with an anti-affinity component, to simulate 
pending container requests. It can also be simulated by other means (no 
resources left in the cluster, etc.).

Service yarnfile used to test this -
{code:java}
{
  "name": "sleeper-service",
  "version": "1",
  "components" :
  [
{
  "name": "ping",
  "number_of_containers": 2,
  "resource": {
"cpus": 1,
"memory": "256"
  },
  "launch_command": "sleep 9000",
  "placement_policy": {
"constraints": [
  {
"type": "ANTI_AFFINITY",
"scope": "NODE",
"target_tags": [
  "ping"
]
  }
]
  }
}
  ]
}
{code}
Launch a service with the above yarnfile as below -
{code:java}
yarn app -launch simple-aa-1 simple_AA.json
{code}
Let's assume there are only 5 nodes in this cluster. Now, flex the above 
service to 1 more container than the number of nodes (6 in my case).
{code:java}
yarn app -flex simple-aa-1 -component ping 6
{code}
Only 5 containers will be allocated and running for simple-aa-1. At this point, 
flex it down to 5 containers -
{code:java}
yarn app -flex simple-aa-1 -component ping 5
{code}
This is what is seen in the service AM log at this point -
{code:java}
2018-05-03 20:17:38,469 [IPC Server handler 0 on 38124] INFO  
service.ClientAMService - Flexing component ping to 5
2018-05-03 20:17:38,469 [Component  dispatcher] INFO  component.Component - 
[FLEX DOWN COMPONENT ping]: scaling down from 6 to 5
2018-05-03 20:17:38,470 [Component  dispatcher] INFO  
instance.ComponentInstance - [COMPINSTANCE ping-4 : 
container_1525297086734_0013_01_06]: Flexed down by user, destroying.
2018-05-03 20:17:38,473 [Component  dispatcher] INFO  component.Component - 
[COMPONENT ping] Transitioned from FLEXING to STABLE on FLEX event.
2018-05-03 20:17:38,474 [pool-5-thread-8] INFO  
registry.YarnRegistryViewForProviders - [COMPINSTANCE ping-4 : 
container_1525297086734_0013_01_06]: Deleting registry path 
/users/root/services/yarn-service/simple-aa-1/components/ctr-1525297086734-0013-01-06
2018-05-03 20:17:38,476 [Component  dispatcher] ERROR component.Component - 
[COMPONENT ping]: Invalid event CHECK_STABLE at STABLE
org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: 
CHECK_STABLE at STABLE
at 
org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:388)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
at 
org.apache.hadoop.yarn.service.component.Component.handle(Component.java:913)
at 
org.apache.hadoop.yarn.service.ServiceScheduler$ComponentEventHandler.handle(ServiceScheduler.java:574)
at 
org.apache.hadoop.yarn.service.ServiceScheduler$ComponentEventHandler.handle(ServiceScheduler.java:563)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
at java.lang.Thread.run(Thread.java:745)
2018-05-03 20:17:38,480 [Component  dispatcher] ERROR component.Component - 
[COMPONENT ping]: Invalid event CHECK_STABLE at STABLE
org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: 
CHECK_STABLE at STABLE
at 
org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:388)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
at 
org.apache.hadoop.yarn.service.component.Component.handle(Component.java:913)
at 
org.apache.hadoop.yarn.service.ServiceScheduler$ComponentEventHandler.handle(ServiceScheduler.java:574)
at 
org.apache.hadoop.yarn.service.ServiceScheduler$ComponentEventHandler.handle(ServiceScheduler.java:563)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
at 

[jira] [Commented] (YARN-8079) Support specify files to be downloaded (localized) before containers launched by YARN

2018-05-03 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462973#comment-16462973
 ] 

Gour Saha commented on YARN-8079:
-

Or maybe call it SIMPLE

> Support specify files to be downloaded (localized) before containers launched 
> by YARN
> -
>
> Key: YARN-8079
> URL: https://issues.apache.org/jira/browse/YARN-8079
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
>Priority: Critical
> Attachments: YARN-8079.001.patch, YARN-8079.002.patch, 
> YARN-8079.003.patch, YARN-8079.004.patch, YARN-8079.005.patch, 
> YARN-8079.006.patch, YARN-8079.007.patch, YARN-8079.008.patch, 
> YARN-8079.009.patch
>
>
> Currently, {{srcFile}} is not respected. {{ProviderUtils}} doesn't properly 
> read srcFile, instead it always constructs {{remoteFile}} by using 
> componentDir and fileName of {{destFile}}:
> {code}
> Path remoteFile = new Path(compInstanceDir, fileName);
> {code} 
> To me it is a common use case where services have some files that already exist 
> in HDFS and need to be localized when components get launched. (For example, if we 
> want to serve a Tensorflow model, we need to localize the Tensorflow model 
> (typically not huge, less than a GB) to local disk. Otherwise the launched docker 
> container has to access HDFS.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8163) Add support for Node Labels in opportunistic scheduling.

2018-05-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462972#comment-16462972
 ] 

genericqa commented on YARN-8163:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
10s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 68m 
49s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}131m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8163 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12921799/YARN-8163.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 86ba22d1112a 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7fe3214 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 

[jira] [Commented] (YARN-8079) Support specify files to be downloaded (localized) before containers launched by YARN

2018-05-03 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462928#comment-16462928
 ] 

Eric Yang commented on YARN-8079:
-

[~gsaha] PLAIN is usually used as a reference to a text file, but this feature 
applies to files in general.  I am ok to go with FILE, if most people don't 
find "files": \{ "type": "FILE", ...} wordy or confusing.  Maybe we could default 
type to FILE if "type" has not been explicitly defined.

> Support specify files to be downloaded (localized) before containers launched 
> by YARN
> -
>
> Key: YARN-8079
> URL: https://issues.apache.org/jira/browse/YARN-8079
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
>Priority: Critical
> Attachments: YARN-8079.001.patch, YARN-8079.002.patch, 
> YARN-8079.003.patch, YARN-8079.004.patch, YARN-8079.005.patch, 
> YARN-8079.006.patch, YARN-8079.007.patch, YARN-8079.008.patch, 
> YARN-8079.009.patch
>
>
> Currently, {{srcFile}} is not respected. {{ProviderUtils}} doesn't properly 
> read srcFile, instead it always constructs {{remoteFile}} by using 
> componentDir and fileName of {{destFile}}:
> {code}
> Path remoteFile = new Path(compInstanceDir, fileName);
> {code} 
> To me it is a common use case where services have some files that already exist 
> in HDFS and need to be localized when components get launched. (For example, if we 
> want to serve a Tensorflow model, we need to localize the Tensorflow model 
> (typically not huge, less than a GB) to local disk. Otherwise the launched docker 
> container has to access HDFS.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8231) Dshell application fails when one of the docker container gets killed

2018-05-03 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-8231:
--
Component/s: (was: yarn-native-services)

> Dshell application fails when one of the docker container gets killed
> -
>
> Key: YARN-8231
> URL: https://issues.apache.org/jira/browse/YARN-8231
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yesha Vora
>Priority: Critical
>
> 1) Launch dshell application
> {code}
> yarn  jar hadoop-yarn-applications-distributedshell-*.jar  -shell_command 
> "sleep 300" -num_containers 2 -shell_env YARN_CONTAINER_RUNTIME_TYPE=docker 
> -shell_env YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=centos/httpd-24-centos7:latest 
> -keep_containers_across_application_attempts -jar 
> /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell-*.jar{code}
> 2) Kill container_1524681858728_0012_01_02
> Expected behavior:
> Application should start a new instance and finish successfully.
> Actual behavior:
> Application failed as soon as the container was killed.
> {code:title=AM log}
> 18/04/27 23:05:12 INFO distributedshell.ApplicationMaster: Got response from 
> RM for container ask, completedCnt=1
> 18/04/27 23:05:12 INFO distributedshell.ApplicationMaster: 
> appattempt_1524681858728_0012_01 got container status for 
> containerID=container_1524681858728_0012_01_02, state=COMPLETE, 
> exitStatus=137, diagnostics=[2018-04-27 23:05:09.310]Container killed on 
> request. Exit code is 137
> [2018-04-27 23:05:09.331]Container exited with a non-zero exit code 137. 
> [2018-04-27 23:05:09.332]Killed by external signal
> 18/04/27 23:08:46 INFO distributedshell.ApplicationMaster: Got response from 
> RM for container ask, completedCnt=1
> 18/04/27 23:08:46 INFO distributedshell.ApplicationMaster: 
> appattempt_1524681858728_0012_01 got container status for 
> containerID=container_1524681858728_0012_01_03, state=COMPLETE, 
> exitStatus=0, diagnostics=
> 18/04/27 23:08:46 INFO distributedshell.ApplicationMaster: Container 
> completed successfully., containerId=container_1524681858728_0012_01_03
> 18/04/27 23:08:46 INFO distributedshell.ApplicationMaster: Application 
> completed. Stopping running containers
> 18/04/27 23:08:46 INFO distributedshell.ApplicationMaster: Application 
> completed. Signalling finish to RM
> 18/04/27 23:08:46 INFO distributedshell.ApplicationMaster: Diagnostics., 
> total=2, completed=2, allocated=2, failed=1
> 18/04/27 23:08:46 INFO impl.AMRMClientImpl: Waiting for application to be 
> successfully unregistered.{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8226) Improve anti-affinity section description in YARN Service API doc

2018-05-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462884#comment-16462884
 ] 

Hudson commented on YARN-8226:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14120 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14120/])
YARN-8226. Improved anti-affinity description in YARN Service doc.   
(eyang: rev 7698737207b01e80b1be2b4df60363f952a1c2b4)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/YarnServiceAPI.md
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/resources/definition/YARN-Services-Examples.md


> Improve anti-affinity section description in YARN Service API doc
> -
>
> Key: YARN-8226
> URL: https://issues.apache.org/jira/browse/YARN-8226
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: docs, documentation
>Reporter: Charan Hebri
>Assignee: Gour Saha
>Priority: Major
> Attachments: YARN-8226.01.patch
>
>
> The anti-affinity section in the YARN services doc says,
> {noformat}
> Note, that the 3 containers will come up on 3 different nodes. If there are 
> less than 3 NMs running in the cluster, then all 3 container requests will 
> not be fulfilled and the service will be in non-STABLE state.{noformat}
> Based on the description, the expected behavior for the case where the number of 
> NMs is less than the number of containers requested isn't very obvious. Opening 
> this issue to improve the comment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8163) Add support for Node Labels in opportunistic scheduling.

2018-05-03 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462881#comment-16462881
 ] 

Giovanni Matteo Fumarola edited comment on YARN-8163 at 5/3/18 6:08 PM:


+1 on v3. Waiting on Jenkins.


was (Author: giovanni.fumarola):
+1 on v3. Waiting on Jerkins.

> Add support for Node Labels in opportunistic scheduling.
> 
>
> Key: YARN-8163
> URL: https://issues.apache.org/jira/browse/YARN-8163
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-8163.002.patch, YARN-8163.003.patch, YARN-8163.patch
>
>
> Currently, the Opportunistic Scheduler doesn't honor node label constraints 
> and schedules containers based only on locality and load constraints. This Jira is 
> to add support in opportunistic scheduling to honor node labels in resource 
> requests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8163) Add support for Node Labels in opportunistic scheduling.

2018-05-03 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462881#comment-16462881
 ] 

Giovanni Matteo Fumarola commented on YARN-8163:


+1 on v3. Waiting on Jerkins.

> Add support for Node Labels in opportunistic scheduling.
> 
>
> Key: YARN-8163
> URL: https://issues.apache.org/jira/browse/YARN-8163
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-8163.002.patch, YARN-8163.003.patch, YARN-8163.patch
>
>
> Currently, the Opportunistic Scheduler doesn't honor node label constraints 
> and schedules containers based only on locality and load constraints. This Jira is 
> to add support in opportunistic scheduling to honor node labels in resource 
> requests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8079) Support specify files to be downloaded (localized) before containers launched by YARN

2018-05-03 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462876#comment-16462876
 ] 

Gour Saha commented on YARN-8079:
-

I agree that STATIC is not fitting well. I like FILE. Does PLAIN fit the 
profile better?

[~suma.shivaprasad] few minor comments -
1. Please remove the unnecessary import and new line changes in Component.java 
and TestYarnNativeServices.java.
2. In the description of type for ConfigFile in YarnServiceAPI.md make 
"STATIC/ARCHIVE" lowercase to be in-line with the others. Also put a period at 
the end of the sentence.

> Support specify files to be downloaded (localized) before containers launched 
> by YARN
> -
>
> Key: YARN-8079
> URL: https://issues.apache.org/jira/browse/YARN-8079
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
>Priority: Critical
> Attachments: YARN-8079.001.patch, YARN-8079.002.patch, 
> YARN-8079.003.patch, YARN-8079.004.patch, YARN-8079.005.patch, 
> YARN-8079.006.patch, YARN-8079.007.patch, YARN-8079.008.patch, 
> YARN-8079.009.patch
>
>
> Currently, {{srcFile}} is not respected. {{ProviderUtils}} doesn't properly 
> read srcFile, instead it always constructs {{remoteFile}} by using 
> componentDir and fileName of {{destFile}}:
> {code}
> Path remoteFile = new Path(compInstanceDir, fileName);
> {code} 
> To me it is a common use case where services have some files that already exist 
> in HDFS and need to be localized when components get launched. (For example, if we 
> want to serve a Tensorflow model, we need to localize the Tensorflow model 
> (typically not huge, less than a GB) to local disk. Otherwise the launched docker 
> container has to access HDFS.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8163) Add support for Node Labels in opportunistic scheduling.

2018-05-03 Thread Abhishek Modi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462874#comment-16462874
 ] 

Abhishek Modi commented on YARN-8163:
-

Submitted v3 patch fixing CR comments. [~giovanni.fumarola] regarding 
{quote}In _OpportunisticContainerAllocatorAMService#allocate_ 
_partitionedAsks.getOpportunistic()_ can be null. Please add null pointer check.
{quote}
I think it should never be null as we are always initializing opportunistic 
with an empty ArrayList.

> Add support for Node Labels in opportunistic scheduling.
> 
>
> Key: YARN-8163
> URL: https://issues.apache.org/jira/browse/YARN-8163
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-8163.002.patch, YARN-8163.003.patch, YARN-8163.patch
>
>
> Currently, the Opportunistic Scheduler doesn't honor node label constraints 
> and schedules containers based only on locality and load constraints. This Jira is 
> to add support in opportunistic scheduling to honor node labels in resource 
> requests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8079) Support specify files to be downloaded (localized) before containers launched by YARN

2018-05-03 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462872#comment-16462872
 ] 

Eric Yang commented on YARN-8079:
-

[~billie.rinaldi] STATIC works better in my opinion.  It helps to differentiate 
between a template file and a static file.  It also helps to have a slightly 
different word from "files" because the upper level closure is already using the 
keyword "files".

> Support specify files to be downloaded (localized) before containers launched 
> by YARN
> -
>
> Key: YARN-8079
> URL: https://issues.apache.org/jira/browse/YARN-8079
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
>Priority: Critical
> Attachments: YARN-8079.001.patch, YARN-8079.002.patch, 
> YARN-8079.003.patch, YARN-8079.004.patch, YARN-8079.005.patch, 
> YARN-8079.006.patch, YARN-8079.007.patch, YARN-8079.008.patch, 
> YARN-8079.009.patch
>
>
> Currently, {{srcFile}} is not respected. {{ProviderUtils}} doesn't properly 
> read srcFile, instead it always constructs {{remoteFile}} by using 
> componentDir and fileName of {{destFile}}:
> {code}
> Path remoteFile = new Path(compInstanceDir, fileName);
> {code} 
> To me it is a common use case where services have some files that already exist 
> in HDFS and need to be localized when components get launched. (For example, if we 
> want to serve a Tensorflow model, we need to localize the Tensorflow model 
> (typically not huge, less than a GB) to local disk. Otherwise the launched docker 
> container has to access HDFS.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8079) Support specify files to be downloaded (localized) before containers launched by YARN

2018-05-03 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462859#comment-16462859
 ] 

Billie Rinaldi commented on YARN-8079:
--

I tested patch 9 and it is working fine, so I think this is about ready for 
commit. One last question for everyone, is STATIC a good name for the file type 
that is localized without changes, or should we consider a different name (e.g. 
FILE to match the local resource type)?

> Support specify files to be downloaded (localized) before containers launched 
> by YARN
> -
>
> Key: YARN-8079
> URL: https://issues.apache.org/jira/browse/YARN-8079
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
>Priority: Critical
> Attachments: YARN-8079.001.patch, YARN-8079.002.patch, 
> YARN-8079.003.patch, YARN-8079.004.patch, YARN-8079.005.patch, 
> YARN-8079.006.patch, YARN-8079.007.patch, YARN-8079.008.patch, 
> YARN-8079.009.patch
>
>
> Currently, {{srcFile}} is not respected. {{ProviderUtils}} doesn't properly 
> read srcFile; instead it always constructs {{remoteFile}} by using 
> componentDir and fileName of {{destFile}}:
> {code}
> Path remoteFile = new Path(compInstanceDir, fileName);
> {code} 
> To me it is a common use case where services have files that already exist in 
> HDFS and need to be localized when components get launched. (For example, if we 
> want to serve a Tensorflow model, we need to localize the Tensorflow model 
> (typically not huge, less than a GB) to local disk. Otherwise the launched docker 
> container has to access HDFS.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7961) Improve status response when yarn application is destroyed

2018-05-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462857#comment-16462857
 ] 

Hudson commented on YARN-7961:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14119 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14119/])
YARN-7961. Improve status message for YARN service.(eyang: rev 
7fe3214d4bb810c0da18dd936875b4e2588ba518)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/webapp/ApiServer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/ServiceClientTest.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/TestApiServer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/client/TestApiServiceClient.java


> Improve status response when yarn application is destroyed
> --
>
> Key: YARN-7961
> URL: https://issues.apache.org/jira/browse/YARN-7961
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Gour Saha
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-7961.01.patch
>
>
> Yarn should provide some way to figure out whether a yarn service has been destroyed.
> If a yarn service application is stopped, "yarn app -status " shows 
> that the service is Stopped. 
> After destroying a yarn service, "yarn app -status " returns 404
> {code}
> [hdpuser@cn005 sleeper]$ yarn app -status yesha-sleeper
> WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of 
> YARN_LOG_DIR.
> WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of 
> YARN_LOGFILE.
> WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of 
> YARN_PID_DIR.
> WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
> 18/02/16 11:02:30 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 18/02/16 11:02:31 INFO client.RMProxy: Connecting to ResourceManager at 
> xxx/xx.xx.xx.xx:8050
> 18/02/16 11:02:31 INFO client.AHSProxy: Connecting to Application History 
> server at xxx/xx.xx.xx.x:10200
> 18/02/16 11:02:31 INFO client.RMProxy: Connecting to ResourceManager at 
> xxx/xx.xx.xx.x:8050
> 18/02/16 11:02:31 INFO client.AHSProxy: Connecting to Application History 
> server at xxx/xx.xx.xx.x:10200
> 18/02/16 11:02:31 INFO util.log: Logging initialized @2075ms
> yesha-sleeper Failed : HTTP error code : 404
> {code}
> Yarn should be able to tell the user whether a certain app was destroyed or 
> never created. The HTTP 404 error does not explicitly provide that information.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8201) Skip stacktrace of ApplicationNotFoundException at server side

2018-05-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462850#comment-16462850
 ] 

genericqa commented on YARN-8201:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 68m 
10s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}126m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8201 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12921786/YARN-8201-002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1719e55e1df1 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ee2ce92 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/20582/artifact/out/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20582/testReport/ |
| Max. process+thread count | 887 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Resolved] (YARN-8231) Dshell application fails when one of the docker container gets killed

2018-05-03 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh resolved YARN-8231.
-
Resolution: Invalid

# The distributed shell application doesn't re-launch containers when it gets a 
container-completed event from the Node Manager.
# To enable the NM to retry failed containers, additional configs need to be 
provided, e.g. {{container_retry_policy}} and {{container_max_retries}} (a sketch 
follows below).
# Force-killing a container (exit code 137) will not trigger a retry:
{code}
  @Override
  public boolean shouldRetry(int errorCode) {
if (errorCode == ExitCode.SUCCESS.getExitCode()
|| errorCode == ExitCode.FORCE_KILLED.getExitCode()
|| errorCode == ExitCode.TERMINATED.getExitCode()) {
  return false;
}
return retryPolicy.shouldRetry(windowRetryContext, errorCode);
  }
{code}
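
To make point 2 concrete, a minimal, hedged sketch (not from this JIRA) of how an AM 
could attach a retry context to a container's launch context; the policy, retry count 
and interval are illustrative values, and {{containerLaunchContext}} is an assumed 
variable:

{code:java}
// Illustrative values only. The NM will still skip retries for the exit codes
// filtered by shouldRetry() above (SUCCESS, FORCE_KILLED, TERMINATED).
ContainerRetryContext retryContext = ContainerRetryContext.newInstance(
    ContainerRetryPolicy.RETRY_ON_ALL_ERRORS, // or RETRY_ON_SPECIFIC_ERROR_CODES
    null,      // error codes, only used with RETRY_ON_SPECIFIC_ERROR_CODES
    3,         // maximum number of retries
    10000);    // retry interval in ms
containerLaunchContext.setContainerRetryContext(retryContext);
{code}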

> Dshell application fails when one of the docker container gets killed
> -
>
> Key: YARN-8231
> URL: https://issues.apache.org/jira/browse/YARN-8231
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Priority: Critical
>
> 1) Launch dshell application
> {code}
> yarn  jar hadoop-yarn-applications-distributedshell-*.jar  -shell_command 
> "sleep 300" -num_containers 2 -shell_env YARN_CONTAINER_RUNTIME_TYPE=docker 
> -shell_env YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=centos/httpd-24-centos7:latest 
> -keep_containers_across_application_attempts -jar 
> /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell-*.jar{code}
> 2) Kill container_1524681858728_0012_01_02
> Expected behavior:
> The application should start a new instance and finish successfully.
> Actual behavior:
> The application failed as soon as the container was killed.
> {code:title=AM log}
> 18/04/27 23:05:12 INFO distributedshell.ApplicationMaster: Got response from 
> RM for container ask, completedCnt=1
> 18/04/27 23:05:12 INFO distributedshell.ApplicationMaster: 
> appattempt_1524681858728_0012_01 got container status for 
> containerID=container_1524681858728_0012_01_02, state=COMPLETE, 
> exitStatus=137, diagnostics=[2018-04-27 23:05:09.310]Container killed on 
> request. Exit code is 137
> [2018-04-27 23:05:09.331]Container exited with a non-zero exit code 137. 
> [2018-04-27 23:05:09.332]Killed by external signal
> 18/04/27 23:08:46 INFO distributedshell.ApplicationMaster: Got response from 
> RM for container ask, completedCnt=1
> 18/04/27 23:08:46 INFO distributedshell.ApplicationMaster: 
> appattempt_1524681858728_0012_01 got container status for 
> containerID=container_1524681858728_0012_01_03, state=COMPLETE, 
> exitStatus=0, diagnostics=
> 18/04/27 23:08:46 INFO distributedshell.ApplicationMaster: Container 
> completed successfully., containerId=container_1524681858728_0012_01_03
> 18/04/27 23:08:46 INFO distributedshell.ApplicationMaster: Application 
> completed. Stopping running containers
> 18/04/27 23:08:46 INFO distributedshell.ApplicationMaster: Application 
> completed. Signalling finish to RM
> 18/04/27 23:08:46 INFO distributedshell.ApplicationMaster: Diagnostics., 
> total=2, completed=2, allocated=2, failed=1
> 18/04/27 23:08:46 INFO impl.AMRMClientImpl: Waiting for application to be 
> successfully unregistered.{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7961) Improve status response when yarn application is destroyed

2018-05-03 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7961:

Fix Version/s: 3.1.1
   3.2.0

> Improve status response when yarn application is destroyed
> --
>
> Key: YARN-7961
> URL: https://issues.apache.org/jira/browse/YARN-7961
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Gour Saha
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-7961.01.patch
>
>
> Yarn should provide some way to figure out whether a yarn service has been destroyed.
> If a yarn service application is stopped, "yarn app -status " shows 
> that the service is Stopped. 
> After destroying a yarn service, "yarn app -status " returns 404
> {code}
> [hdpuser@cn005 sleeper]$ yarn app -status yesha-sleeper
> WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of 
> YARN_LOG_DIR.
> WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of 
> YARN_LOGFILE.
> WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of 
> YARN_PID_DIR.
> WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
> 18/02/16 11:02:30 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 18/02/16 11:02:31 INFO client.RMProxy: Connecting to ResourceManager at 
> xxx/xx.xx.xx.xx:8050
> 18/02/16 11:02:31 INFO client.AHSProxy: Connecting to Application History 
> server at xxx/xx.xx.xx.x:10200
> 18/02/16 11:02:31 INFO client.RMProxy: Connecting to ResourceManager at 
> xxx/xx.xx.xx.x:8050
> 18/02/16 11:02:31 INFO client.AHSProxy: Connecting to Application History 
> server at xxx/xx.xx.xx.x:10200
> 18/02/16 11:02:31 INFO util.log: Logging initialized @2075ms
> yesha-sleeper Failed : HTTP error code : 404
> {code}
> Yarn should be able to tell the user whether a certain app was destroyed or 
> never created. The HTTP 404 error does not explicitly provide that information.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8163) Add support for Node Labels in opportunistic scheduling.

2018-05-03 Thread Abhishek Modi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated YARN-8163:

Attachment: YARN-8163.003.patch

> Add support for Node Labels in opportunistic scheduling.
> 
>
> Key: YARN-8163
> URL: https://issues.apache.org/jira/browse/YARN-8163
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-8163.002.patch, YARN-8163.003.patch, YARN-8163.patch
>
>
> Currently, the Opportunistic Scheduler doesn't honor node label constraints 
> and schedules containers based only on locality and load constraints. This Jira 
> is to add support in opportunistic scheduling to honor node labels in resource 
> requests.
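
For context, a hedged sketch of an opportunistic ask that also carries a node label 
expression; the label name, priority and resource sizes are assumptions, not taken 
from the patch:

{code:java}
// "gpu" is a made-up node label expression; sizes and priority are arbitrary.
ResourceRequest ask = ResourceRequest.newInstance(
    Priority.newInstance(1), ResourceRequest.ANY,
    Resource.newInstance(1024, 1), 1, true, "gpu",
    ExecutionTypeRequest.newInstance(ExecutionType.OPPORTUNISTIC, true));
{code}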



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7933) [atsv2 read acls] Add TimelineWriter#writeDomain

2018-05-03 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462813#comment-16462813
 ] 

Rohith Sharma K S commented on YARN-7933:
-

{quote} Does the patch require a domain id for every timelineEntity? Or what 
happens if no domain id set?
{quote}
This patch is only for storing the domain entity into its table. Storing the 
domain in TimelineEntity should be discussed separately, because we have not yet 
decided whether to store TimelineDomain in denormalized form or just the domain 
id in the entity table.
{quote}The /domain endpoint takes appId as a query param. Is it the same as the 
app id in the TimelineCollector context?
{quote}
Yes, it should be the same as the TimelineCollector context. 
TimelineV2Client#putDomain is still within the scope of an appId.
{quote}Does it make sense in this patch to add in memory cache of domain in 
TimelineCollector? Given the patch deals with the write path, probably so.
{quote}
As I mentioned in the first reply, we need to discuss and reach consensus on how 
we are going to store the TimelineDomain in the entity table. There are both pros 
and cons. So I restricted this patch to only storing the TimelineDomain. I believe 
there is a separate JIRA for the cache implementation.
{quote}In HBaseTimelineWriterImpl, null-checking is done solely for clusterId. 
Do we need to check for domainId as well?
{quote}
This could be done. I assumed that domainId will not be empty, similar to 
entityId in the TimelineEntity class.
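
For readers following along, a rough sketch of the kind of writer API being discussed; 
the exact signature here is an assumption, not taken from the attached patches:

{code:java}
// Assumed shape only; the actual YARN-7933 method may differ.
public interface TimelineWriter {
  TimelineWriteResponse writeDomain(TimelineCollectorContext context,
      TimelineDomain domain) throws IOException;
}
{code}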

> [atsv2 read acls] Add TimelineWriter#writeDomain 
> -
>
> Key: YARN-7933
> URL: https://issues.apache.org/jira/browse/YARN-7933
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-7933.01.patch, YARN-7933.02.patch, 
> YARN-7933.03.patch, YARN-7933.04.patch
>
>
>  
> Add an API TimelineWriter#writeDomain for writing the domain info 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8194) Exception when reinitializing a container using LinuxContainerExecutor

2018-05-03 Thread Chandni Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462776#comment-16462776
 ] 

Chandni Singh commented on YARN-8194:
-

Thanks [~shaneku...@gmail.com] and [~eyang] for reviewing and merging.

> Exception when reinitializing a container using LinuxContainerExecutor
> --
>
> Key: YARN-8194
> URL: https://issues.apache.org/jira/browse/YARN-8194
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Blocker
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8194.001.patch
>
>
> When a component instance is upgraded and the container executor is set to 
> {{org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor}}, then 
> the following exception is seen in the nodemanager:
> {code}
> Writing to cgroup task files...
> Creating local dirs...
> Can't open 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh
>  for output - File exists
> Getting exit code file...
> Creating script paths...
> Full command array for failed execution:
> [/usr/local/hadoop-3.2.0-SNAPSHOT/bin/container-executor, hbase, hbase, 1, 
> application_1524242413029_0001, container_1524242413029_0001_01_02, 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02,
>  
> /tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh,
>  
> /tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524242413029_0001/container_1524242413029_0001_01_02/container_1524242413029_0001_01_02.tokens,
>  
> /tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524242413029_0001/container_1524242413029_0001_01_02/container_1524242413029_0001_01_02.pid,
>  /tmp/hadoop-yarn/nm-local-dir, 
> /usr/local/hadoop-3.2.0-SNAPSHOT/logs/userlogs, cgroups=none]
> 2018-04-20 16:50:16,641 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime:
>  Launch container failed. Exception:
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException:
>  ExitCodeException exitCode=33: Could not create copy file 3 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh
> Could not create local files and directories
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:180)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime.launchContainer(DefaultLinuxContainerRuntime.java:118)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DelegatingLinuxContainerRuntime.launchContainer(DelegatingLinuxContainerRuntime.java:141)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.handleLaunchForLaunchType(LinuxContainerExecutor.java:562)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:477)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.launchContainer(ContainerLaunch.java:492)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:304)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:101)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: ExitCodeException exitCode=33: Could not create copy file 3 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh
> Could not create local files and directories
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:1009)
> at org.apache.hadoop.util.Shell.run(Shell.java:902)
> at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1227)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:152)
> ... 11 more
> 

[jira] [Commented] (YARN-7973) Support ContainerRelaunch for Docker containers

2018-05-03 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462772#comment-16462772
 ] 

Eric Yang commented on YARN-7973:
-

Addendum patch 001 committed to branch-3.1. 

> Support ContainerRelaunch for Docker containers
> ---
>
> Key: YARN-7973
> URL: https://issues.apache.org/jira/browse/YARN-7973
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>  Labels: Docker
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-7973-branch-3.1.addendum.001.patch, 
> YARN-7973.001.patch, YARN-7973.002.patch, YARN-7973.003.patch, 
> YARN-7973.004.patch
>
>
> Prior to YARN-5366, {{container-executor}} would remove the Docker container 
> when it exited. The removal is now handled by the 
> {{DockerLinuxContainerRuntime}}. {{ContainerRelaunch}} is intended to reuse 
> the workdir from the previous attempt, and does not call {{cleanupContainer}} 
> prior to {{launchContainer}}. The container ID is reused as well. As a 
> result, the previous Docker container still exists, resulting in an error 
> from Docker indicating that a container by that name already exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7973) Support ContainerRelaunch for Docker containers

2018-05-03 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7973:

Attachment: YARN-7973-branch-3.1.addendum.001.patch

> Support ContainerRelaunch for Docker containers
> ---
>
> Key: YARN-7973
> URL: https://issues.apache.org/jira/browse/YARN-7973
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>  Labels: Docker
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-7973-branch-3.1.addendum.001.patch, 
> YARN-7973.001.patch, YARN-7973.002.patch, YARN-7973.003.patch, 
> YARN-7973.004.patch
>
>
> Prior to YARN-5366, {{container-executor}} would remove the Docker container 
> when it exited. The removal is now handled by the 
> {{DockerLinuxContainerRuntime}}. {{ContainerRelaunch}} is intended to reuse 
> the workdir from the previous attempt, and does not call {{cleanupContainer}} 
> prior to {{launchContainer}}. The container ID is reused as well. As a 
> result, the previous Docker container still exists, resulting in an error 
> from Docker indicating that a container by that name already exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7973) Support ContainerRelaunch for Docker containers

2018-05-03 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462741#comment-16462741
 ] 

Eric Yang commented on YARN-7973:
-

[~jlowe] Yes, I am working on fixing this with an addendum patch.  Sorry about 
the breakage.

> Support ContainerRelaunch for Docker containers
> ---
>
> Key: YARN-7973
> URL: https://issues.apache.org/jira/browse/YARN-7973
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>  Labels: Docker
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-7973.001.patch, YARN-7973.002.patch, 
> YARN-7973.003.patch, YARN-7973.004.patch
>
>
> Prior to YARN-5366, {{container-executor}} would remove the Docker container 
> when it exited. The removal is now handled by the 
> {{DockerLinuxContainerRuntime}}. {{ContainerRelaunch}} is intended to reuse 
> the workdir from the previous attempt, and does not call {{cleanupContainer}} 
> prior to {{launchContainer}}. The container ID is reused as well. As a 
> result, the previous Docker container still exists, resulting in an error 
> from Docker indicating that a container by that name already exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7973) Support ContainerRelaunch for Docker containers

2018-05-03 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462709#comment-16462709
 ] 

Jason Lowe commented on YARN-7973:
--

branch-3.1 is no longer building after this was committed:
{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on 
project hadoop-yarn-server-nodemanager: Compilation failure
[ERROR] 
/home/jlowe/hadoop/apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java:[931,30]
 method getContainerStatus in class 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker.DockerCommandExecutor
 cannot be applied to given types;
[ERROR] required: 
java.lang.String,org.apache.hadoop.conf.Configuration,org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor,org.apache.hadoop.yarn.server.nodemanager.Context
[ERROR] found: 
java.lang.String,org.apache.hadoop.conf.Configuration,org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor
[ERROR] reason: actual and formal argument lists differ in length
{noformat}
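
From the compiler output alone, the cherry-picked call site is missing the trailing 
Context argument that branch-3.1 expects; roughly (variable names are placeholders, 
not from the actual fix):

{code:java}
// Required on branch-3.1 per the error above:
// (String, Configuration, PrivilegedOperationExecutor, Context)
DockerCommandExecutor.getContainerStatus(
    containerId, conf, privilegedOperationExecutor, nmContext);
{code}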

> Support ContainerRelaunch for Docker containers
> ---
>
> Key: YARN-7973
> URL: https://issues.apache.org/jira/browse/YARN-7973
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>  Labels: Docker
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-7973.001.patch, YARN-7973.002.patch, 
> YARN-7973.003.patch, YARN-7973.004.patch
>
>
> Prior to YARN-5366, {{container-executor}} would remove the Docker container 
> when it exited. The removal is now handled by the 
> {{DockerLinuxContainerRuntime}}. {{ContainerRelaunch}} is intended to reuse 
> the workdir from the previous attempt, and does not call {{cleanupContainer}} 
> prior to {{launchContainer}}. The container ID is reused as well. As a 
> result, the previous Docker container still exists, resulting in an error 
> from Docker indicating that a container by that name already exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8194) Exception when reinitializing a container using LinuxContainerExecutor

2018-05-03 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462668#comment-16462668
 ] 

Shane Kumpf commented on YARN-8194:
---

Thanks [~eyang]!

> Exception when reinitializing a container using LinuxContainerExecutor
> --
>
> Key: YARN-8194
> URL: https://issues.apache.org/jira/browse/YARN-8194
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Blocker
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8194.001.patch
>
>
> When a component instance is upgraded and the container executor is set to 
> {{org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor}}, then 
> the following exception is seen in the nodemanager:
> {code}
> Writing to cgroup task files...
> Creating local dirs...
> Can't open 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh
>  for output - File exists
> Getting exit code file...
> Creating script paths...
> Full command array for failed execution:
> [/usr/local/hadoop-3.2.0-SNAPSHOT/bin/container-executor, hbase, hbase, 1, 
> application_1524242413029_0001, container_1524242413029_0001_01_02, 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02,
>  
> /tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh,
>  
> /tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524242413029_0001/container_1524242413029_0001_01_02/container_1524242413029_0001_01_02.tokens,
>  
> /tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524242413029_0001/container_1524242413029_0001_01_02/container_1524242413029_0001_01_02.pid,
>  /tmp/hadoop-yarn/nm-local-dir, 
> /usr/local/hadoop-3.2.0-SNAPSHOT/logs/userlogs, cgroups=none]
> 2018-04-20 16:50:16,641 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime:
>  Launch container failed. Exception:
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException:
>  ExitCodeException exitCode=33: Could not create copy file 3 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh
> Could not create local files and directories
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:180)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime.launchContainer(DefaultLinuxContainerRuntime.java:118)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DelegatingLinuxContainerRuntime.launchContainer(DelegatingLinuxContainerRuntime.java:141)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.handleLaunchForLaunchType(LinuxContainerExecutor.java:562)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:477)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.launchContainer(ContainerLaunch.java:492)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:304)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:101)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: ExitCodeException exitCode=33: Could not create copy file 3 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh
> Could not create local files and directories
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:1009)
> at org.apache.hadoop.util.Shell.run(Shell.java:902)
> at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1227)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:152)
> ... 11 more
> 2018-04-20 16:50:16,642 WARN 
> 

[jira] [Commented] (YARN-8209) NPE in DeletionService

2018-05-03 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462664#comment-16462664
 ] 

Eric Yang commented on YARN-8209:
-

With the recent backport of YARN-7973 and YARN-8194, this patch no longer 
applies cleanly to branch-3.1.  I will reapply the same patch from trunk to 
branch-3.1 for correctness.

> NPE in DeletionService
> --
>
> Key: YARN-8209
> URL: https://issues.apache.org/jira/browse/YARN-8209
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Eric Badger
>Priority: Critical
>  Labels: Docker
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8209.001.patch, YARN-8209.002.patch, 
> YARN-8209.003.patch, YARN-8209.004.patch, YARN-8209.005.patch
>
>
> {code:java}
> 2018-04-25 23:38:41,039 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(63)) - Caught exception in 
> thread DeletionService #1:
> java.lang.NullPointerException
>         at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker.DockerClient.writeCommandToTempFile(DockerClient.java:109)
>         at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker.DockerCommandExecutor.executeDockerCommand(DockerCommandExecutor.java:85)
>         at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker.DockerCommandExecutor.executeStatusCommand(DockerCommandExecutor.java:192)
>         at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker.DockerCommandExecutor.getContainerStatus(DockerCommandExecutor.java:128)
>         at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.removeDockerContainer(LinuxContainerExecutor.java:935)
>         at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.deletion.task.DockerContainerDeletionTask.run(DockerContainerDeletionTask.java:61)
>         at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>         at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8194) Exception when reinitializing a container using LinuxContainerExecutor

2018-05-03 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462658#comment-16462658
 ] 

Eric Yang commented on YARN-8194:
-

[~shaneku...@gmail.com] I agree with your assessment on this issue, and have 
cherry-picked it to branch-3.1 for the 3.1.1 release.

> Exception when reinitializing a container using LinuxContainerExecutor
> --
>
> Key: YARN-8194
> URL: https://issues.apache.org/jira/browse/YARN-8194
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Blocker
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8194.001.patch
>
>
> When a component instance is upgraded and the container executor is set to 
> {{org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor}}, then 
> the following exception is seen in the nodemanager:
> {code}
> Writing to cgroup task files...
> Creating local dirs...
> Can't open 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh
>  for output - File exists
> Getting exit code file...
> Creating script paths...
> Full command array for failed execution:
> [/usr/local/hadoop-3.2.0-SNAPSHOT/bin/container-executor, hbase, hbase, 1, 
> application_1524242413029_0001, container_1524242413029_0001_01_02, 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02,
>  
> /tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh,
>  
> /tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524242413029_0001/container_1524242413029_0001_01_02/container_1524242413029_0001_01_02.tokens,
>  
> /tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524242413029_0001/container_1524242413029_0001_01_02/container_1524242413029_0001_01_02.pid,
>  /tmp/hadoop-yarn/nm-local-dir, 
> /usr/local/hadoop-3.2.0-SNAPSHOT/logs/userlogs, cgroups=none]
> 2018-04-20 16:50:16,641 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime:
>  Launch container failed. Exception:
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException:
>  ExitCodeException exitCode=33: Could not create copy file 3 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh
> Could not create local files and directories
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:180)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime.launchContainer(DefaultLinuxContainerRuntime.java:118)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DelegatingLinuxContainerRuntime.launchContainer(DelegatingLinuxContainerRuntime.java:141)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.handleLaunchForLaunchType(LinuxContainerExecutor.java:562)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:477)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.launchContainer(ContainerLaunch.java:492)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:304)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:101)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: ExitCodeException exitCode=33: Could not create copy file 3 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh
> Could not create local files and directories
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:1009)
> at org.apache.hadoop.util.Shell.run(Shell.java:902)
> at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1227)
> at 
> 

[jira] [Updated] (YARN-8194) Exception when reinitializing a container using LinuxContainerExecutor

2018-05-03 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8194:

Fix Version/s: 3.1.1

> Exception when reinitializing a container using LinuxContainerExecutor
> --
>
> Key: YARN-8194
> URL: https://issues.apache.org/jira/browse/YARN-8194
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Blocker
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8194.001.patch
>
>
> When a component instance is upgraded and the container executor is set to 
> {{org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor}}, then 
> the following exception is seen in the nodemanager:
> {code}
> Writing to cgroup task files...
> Creating local dirs...
> Can't open 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh
>  for output - File exists
> Getting exit code file...
> Creating script paths...
> Full command array for failed execution:
> [/usr/local/hadoop-3.2.0-SNAPSHOT/bin/container-executor, hbase, hbase, 1, 
> application_1524242413029_0001, container_1524242413029_0001_01_02, 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02,
>  
> /tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh,
>  
> /tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524242413029_0001/container_1524242413029_0001_01_02/container_1524242413029_0001_01_02.tokens,
>  
> /tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524242413029_0001/container_1524242413029_0001_01_02/container_1524242413029_0001_01_02.pid,
>  /tmp/hadoop-yarn/nm-local-dir, 
> /usr/local/hadoop-3.2.0-SNAPSHOT/logs/userlogs, cgroups=none]
> 2018-04-20 16:50:16,641 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime:
>  Launch container failed. Exception:
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException:
>  ExitCodeException exitCode=33: Could not create copy file 3 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh
> Could not create local files and directories
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:180)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime.launchContainer(DefaultLinuxContainerRuntime.java:118)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DelegatingLinuxContainerRuntime.launchContainer(DelegatingLinuxContainerRuntime.java:141)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.handleLaunchForLaunchType(LinuxContainerExecutor.java:562)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:477)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.launchContainer(ContainerLaunch.java:492)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:304)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:101)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: ExitCodeException exitCode=33: Could not create copy file 3 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh
> Could not create local files and directories
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:1009)
> at org.apache.hadoop.util.Shell.run(Shell.java:902)
> at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1227)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:152)
> ... 11 more
> 2018-04-20 16:50:16,642 WARN 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Exit code 

[jira] [Commented] (YARN-7892) Revisit NodeAttribute class structure

2018-05-03 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462653#comment-16462653
 ] 

Bibin A Chundatt commented on YARN-7892:


[~Naganarasimha]


{quote}
For both getClusterAttributes and getAttributes to node we are iterating over 
all the attribute ids two times. Can we optimize the same ?
{quote}
# ClientRMService#getAttributesToNodes -> 
NodeAttributeManager#getAttributesToNodes(): for all the unique node attributes 
in the cluster we iterate once in ClientRMService and once in 
NodeAttributeManager.
# For the YarnClient#getAttributesToNodes API change, is it required to return 
attribute values along with the hostname? Since we already have the node to 
NodeAttribute mapping, is this really needed? Can you share a case where we 
would be able to use this? The method also needs to be renamed to match the 
current functionality.
# Java doc update for GetAttributesToNodesRequest#setNodeAttributes
# Java doc update for GetAttributesToNodesRequest#getNodeAttributes
# Java doc of GetAttributesToNodesResponse#getAttributesToNodes: specify what 
the fields of the map correspond to
# Java doc of GetAttributesToNodesResponse at class level
# GetClusterNodeAttributesResponse#setNodeAttributes(Set<NodeAttribute> 
attributes): java doc mismatch
# For the updated YarnClient methods, the java doc needs to be updated, e.g.:
{code}
  /**
   * Given a attribute set, return what all Nodes have attribute mapped to it.
   * If the attributes set is null or empty, all attributes mapping are
   * returned.
   *
   * @return a Map of attributes to set of hostnames.
   */
  public abstract Map<NodeAttribute, Set<String>> getAttributesToNodes(
      Set<NodeAttribute> attributes);
{code}
# NodeAttributeManager java doc update
{noformat}
../patch/YARN-7892-YARN-3409.005.patch:1651: trailing whitespace.
for (Map.Entry> attrib : 
../patch/YARN-7892-YARN-3409.005.patch:1917: trailing whitespace.
  for (Entry attributeEntry : 
warning: 2 lines add whitespace errors.
{noformat}
# Fix whitespace errors

> Revisit NodeAttribute class structure
> -
>
> Key: YARN-7892
> URL: https://issues.apache.org/jira/browse/YARN-7892
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-7892-YARN-3409.001.patch, 
> YARN-7892-YARN-3409.002.patch, YARN-7892-YARN-3409.003.WIP.patch, 
> YARN-7892-YARN-3409.003.patch, YARN-7892-YARN-3409.004.patch, 
> YARN-7892-YARN-3409.005.patch
>
>
> In the existing structure, we had kept the type and value along with the 
> attribute which would create confusion to the user to understand the APIs as 
> they would not be clear as to what needs to be sent for type and value while 
> fetching the mappings for node(s).
> As well as equals will not make sense when we compare only for prefix and 
> name where as values for them might be different.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7973) Support ContainerRelaunch for Docker containers

2018-05-03 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7973:

Fix Version/s: 3.1.1

Cherry-picked for 3.1.1.

> Support ContainerRelaunch for Docker containers
> ---
>
> Key: YARN-7973
> URL: https://issues.apache.org/jira/browse/YARN-7973
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>  Labels: Docker
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-7973.001.patch, YARN-7973.002.patch, 
> YARN-7973.003.patch, YARN-7973.004.patch
>
>
> Prior to YARN-5366, {{container-executor}} would remove the Docker container 
> when it exited. The removal is now handled by the 
> {{DockerLinuxContainerRuntime}}. {{ContainerRelaunch}} is intended to reuse 
> the workdir from the previous attempt, and does not call {{cleanupContainer}} 
> prior to {{launchContainer}}. The container ID is reused as well. As a 
> result, the previous Docker container still exists, resulting in an error 
> from Docker indicating that a container by that name already exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7933) [atsv2 read acls] Add TimelineWriter#writeDomain

2018-05-03 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462638#comment-16462638
 ] 

Haibo Chen commented on YARN-7933:
--

Thanks [~rohithsharma] for updating the patch!  Looks good to me in general. I 
have a few questions:

1) Does the patch require a domain id for every timelineEntity? Or what happens 
if no domain id set?

2) The /domain endpoint takes appId as a query param. Is it the same as the app 
id in the TimelineCollector context?

3) Does it make sense in this patch to add in memory cache of domain in 
TimelineCollector? Given the patch deals with the write path, probably so.

4) In HBaseTimelineWriterImpl, null-checking is done solely for clusterId. Do 
we need to check for domainId as well? This goes back to 1)

> [atsv2 read acls] Add TimelineWriter#writeDomain 
> -
>
> Key: YARN-7933
> URL: https://issues.apache.org/jira/browse/YARN-7933
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-7933.01.patch, YARN-7933.02.patch, 
> YARN-7933.03.patch, YARN-7933.04.patch
>
>
>  
> Add an API TimelineWriter#writeDomain for writing the domain info 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8201) Skip stacktrace of ApplicationNotFoundException at server side

2018-05-03 Thread Bilwa S T (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-8201:

Attachment: YARN-8201-002.patch

> Skip stacktrace of ApplicationNotFoundException at server side
> --
>
> Key: YARN-8201
> URL: https://issues.apache.org/jira/browse/YARN-8201
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Bibin A Chundatt
>Assignee: Bilwa S T
>Priority: Minor
> Attachments: YARN-8201-001.patch, YARN-8201-002.patch
>
>
> Currently the full stack trace of exceptions like 
> ApplicationNotFoundException, ApplicationAttemptNotFoundException, etc. is 
> logged at the server side. Wrong client operations could increase server logs.
> {{Server.addTerseExceptions}} could be used to reduce server-side logging.
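
As a hedged sketch of the suggested approach (not the attached patch), the RPC server 
serving these calls could register the exception classes as terse so that only the 
message, not the full stack trace, is logged:

{code:java}
// Illustrative only; "server" is assumed to be the org.apache.hadoop.ipc.Server
// instance behind ClientRMService (or a similar protocol handler).
server.addTerseExceptions(ApplicationNotFoundException.class,
    ApplicationAttemptNotFoundException.class,
    ContainerNotFoundException.class);
{code}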



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8191) Fair scheduler: queue deletion without RM restart

2018-05-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462617#comment-16462617
 ] 

genericqa commented on YARN-8191:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
59s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 44m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  2s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 89 unchanged - 0 fixed = 90 total (was 89) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m  3s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}175m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestWorkPreservingRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8191 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12921758/YARN-8191.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8989f3fcac55 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 85381c7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/20581/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 

[jira] [Commented] (YARN-7818) Remove privileged operation warnings during container launch for DefaultLinuxContainerRuntime

2018-05-03 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462588#comment-16462588
 ] 

Billie Rinaldi commented on YARN-7818:
--

Looks like this change should also be made in the docker runtime. This seems 
like a good improvement that will remove a lot of unnecessary and misleading 
logging. Thanks [~shaneku...@gmail.com]!

> Remove privileged operation warnings during container launch for 
> DefaultLinuxContainerRuntime
> -
>
> Key: YARN-7818
> URL: https://issues.apache.org/jira/browse/YARN-7818
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yesha Vora
>Assignee: Shane Kumpf
>Priority: Major
> Attachments: YARN-7818.001.patch
>
>
> steps:
>  1) Run Dshell Application
> {code:java}
> yarn  org.apache.hadoop.yarn.applications.distributedshell.Client -jar 
> /usr/hdp/3.0.0.0-751/hadoop-yarn/hadoop-yarn-applications-distributedshell-*.jar
>  -keep_containers_across_application_attempts -timeout 90 -shell_command 
> "sleep 110" -num_containers 4{code}
> 2) Find out host where AM is running. 
>  3) Find Containers launched by application
>  4) Restart NM where AM is running
>  5) Validate that new attempt is not started and containers launched before 
> restart are in RUNNING state.
> In this test, step#5 fails because containers failed to launch with error 143
> {code:java}
> 2018-01-24 09:48:30,547 INFO  container.ContainerImpl 
> (ContainerImpl.java:handle(2108)) - Container 
> container_e04_1516787230461_0001_01_03 transitioned from RUNNING to 
> KILLING
> 2018-01-24 09:48:30,547 INFO  launcher.ContainerLaunch 
> (ContainerLaunch.java:cleanupContainer(668)) - Cleaning up container 
> container_e04_1516787230461_0001_01_03
> 2018-01-24 09:48:30,552 WARN  privileged.PrivilegedOperationExecutor 
> (PrivilegedOperationExecutor.java:executePrivilegedOperation(174)) - Shell 
> execution returned exit code: 143. Privileged Execution Operation Stderr:
> Stdout: main : command provided 1
> main : run as user is hrt_qa
> main : requested yarn user is hrt_qa
> Getting exit code file...
> Creating script paths...
> Writing pid file...
> Writing to tmp file 
> /grid/0/hadoop/yarn/local/nmPrivate/application_1516787230461_0001/container_e04_1516787230461_0001_01_03/container_e04_1516787230461_0001_01_03.pid.tmp
> Writing to cgroup task files...
> Creating local dirs...
> Launching container...
> Getting exit code file...
> Creating script paths...
> Full command array for failed execution:
> [/usr/hdp/3.0.0.0-751/hadoop-yarn/bin/container-executor, hrt_qa, hrt_qa, 1, 
> application_1516787230461_0001, container_e04_1516787230461_0001_01_03, 
> /grid/0/hadoop/yarn/local/usercache/hrt_qa/appcache/application_1516787230461_0001/container_e04_1516787230461_0001_01_03,
>  
> /grid/0/hadoop/yarn/local/nmPrivate/application_1516787230461_0001/container_e04_1516787230461_0001_01_03/launch_container.sh,
>  
> /grid/0/hadoop/yarn/local/nmPrivate/application_1516787230461_0001/container_e04_1516787230461_0001_01_03/container_e04_1516787230461_0001_01_03.tokens,
>  
> /grid/0/hadoop/yarn/local/nmPrivate/application_1516787230461_0001/container_e04_1516787230461_0001_01_03/container_e04_1516787230461_0001_01_03.pid,
>  /grid/0/hadoop/yarn/local, /grid/0/hadoop/yarn/log, cgroups=none]
> 2018-01-24 09:48:30,553 WARN  runtime.DefaultLinuxContainerRuntime 
> (DefaultLinuxContainerRuntime.java:launchContainer(127)) - Launch container 
> failed. Exception:
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException:
>  ExitCodeException exitCode=143:
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:180)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime.launchContainer(DefaultLinuxContainerRuntime.java:124)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DelegatingLinuxContainerRuntime.launchContainer(DelegatingLinuxContainerRuntime.java:152)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:549)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.launchContainer(ContainerLaunch.java:465)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:285)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:95)
> at 

[jira] [Commented] (YARN-8242) YARN NM: OOM error while reading back the state store on recovery

2018-05-03 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462503#comment-16462503
 ] 

Jason Lowe commented on YARN-8242:
--

Thanks for the report and the patch!

I'm not a fan of exposing leveldb-specifics out of the leveldb NM state store.  
It makes it much harder to replace the state store with something else.  
Essentially all we need here is an iterable abstraction of the recovery state, 
and we don't need to expose a leveldb iterator to do that directly.

The state store has a getIterator method but it's hardcoded to iterate only 
container state.  That's confusing, since the state has a lot more than just 
container state in it.  Rather than simply expose one iterator, and 
specifically a leveldb iterator, I think it would be much cleaner to have 
loadContainerState return an Iterator and callers can 
iterate through the loaded containers.  The state store can have a helper class 
that implements the Iterator interface but hides the leveldb details from the 
caller.  A similar approach can be used for other recovered lists like 
application state, localized resources, etc. if it's worth it for those as well.

The null state store should return a valid iterator that has no elements to 
iterate (e.g.: Collections.emptyIterator) rather than null.  The latter is 
going to lead to a lot of NPEs in unit tests or unnecessary null checks in the 
main code.

A significant amount of changes in the patch are a result of whitespace 
reformatting unrelated to the nature of the patch and should be removed.  In 
addition the wildcard imports should be removed.


> YARN NM: OOM error while reading back the state store on recovery
> -
>
> Key: YARN-8242
> URL: https://issues.apache.org/jira/browse/YARN-8242
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.2.0
>Reporter: Kanwaljeet Sachdev
>Assignee: Kanwaljeet Sachdev
>Priority: Blocker
> Attachments: YARN-8242.001.patch
>
>
> On startup the NM reads its state store and builds a list of applications in 
> the state store to process. If the number of applications in the state store 
> is large and they have a lot of "state" associated with them, the NM can run 
> out of memory (OOM) and never get to the point where it can start processing 
> the recovery.
> Since it never starts the recovery, there is no way for the NM to ever get past 
> this point. It will require a change in heap size to get the NM started.
>  
> Following is the stack trace
> {code:java}
> at java.lang.OutOfMemoryError. (OutOfMemoryError.java:48) at 
> com.google.protobuf.ByteString.copyFrom (ByteString.java:192) at 
> com.google.protobuf.CodedInputStream.readBytes (CodedInputStream.java:324) at 
> org.apache.hadoop.yarn.proto.YarnProtos$StringStringMapProto. 
> (YarnProtos.java:47069) at 
> org.apache.hadoop.yarn.proto.YarnProtos$StringStringMapProto. 
> (YarnProtos.java:47014) at 
> org.apache.hadoop.yarn.proto.YarnProtos$StringStringMapProto$1.parsePartialFrom
>  (YarnProtos.java:47102) at 
> org.apache.hadoop.yarn.proto.YarnProtos$StringStringMapProto$1.parsePartialFrom
>  (YarnProtos.java:47097) at com.google.protobuf.CodedInputStream.readMessage 
> (CodedInputStream.java:309) at 
> org.apache.hadoop.yarn.proto.YarnProtos$ContainerLaunchContextProto. 
> (YarnProtos.java:41016) at 
> org.apache.hadoop.yarn.proto.YarnProtos$ContainerLaunchContextProto. 
> (YarnProtos.java:40942) at 
> org.apache.hadoop.yarn.proto.YarnProtos$ContainerLaunchContextProto$1.parsePartialFrom
>  (YarnProtos.java:41080) at 
> org.apache.hadoop.yarn.proto.YarnProtos$ContainerLaunchContextProto$1.parsePartialFrom
>  (YarnProtos.java:41075) at com.google.protobuf.CodedInputStream.readMessage 
> (CodedInputStream.java:309) at 
> org.apache.hadoop.yarn.proto.YarnServiceProtos$StartContainerRequestProto.
>  (YarnServiceProtos.java:24517) at 
> org.apache.hadoop.yarn.proto.YarnServiceProtos$StartContainerRequestProto.
>  (YarnServiceProtos.java:24464) at 
> org.apache.hadoop.yarn.proto.YarnServiceProtos$StartContainerRequestProto$1.parsePartialFrom
>  (YarnServiceProtos.java:24568) at 
> org.apache.hadoop.yarn.proto.YarnServiceProtos$StartContainerRequestProto$1.parsePartialFrom
>  (YarnServiceProtos.java:24563) at 
> com.google.protobuf.AbstractParser.parsePartialFrom (AbstractParser.java:141) 
> at com.google.protobuf.AbstractParser.parseFrom (AbstractParser.java:176) at 
> com.google.protobuf.AbstractParser.parseFrom (AbstractParser.java:188) at 
> com.google.protobuf.AbstractParser.parseFrom (AbstractParser.java:193) at 
> com.google.protobuf.AbstractParser.parseFrom (AbstractParser.java:49) at 
> org.apache.hadoop.yarn.proto.YarnServiceProtos$StartContainerRequestProto.parseFrom
>  (YarnServiceProtos.java:24739) at 
> 

[jira] [Commented] (YARN-7892) Revisit NodeAttribute class structure

2018-05-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462459#comment-16462459
 ] 

genericqa commented on YARN-7892:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3409 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 6s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
56s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
17s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
43s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m  
0s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
19s{color} | {color:green} YARN-3409 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 28m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 12s{color} | {color:orange} root: The patch generated 9 new + 219 unchanged 
- 1 fixed = 228 total (was 220) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
31s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 56s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
37s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 34s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m  
4s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 44s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 25m 48s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}154m 35s{color} 
| {color:red} hadoop-mapreduce-client-jobclient in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  1m 
34s{color} | {color:red} The patch generated 1 ASF 

[jira] [Comment Edited] (YARN-8194) Exception when reinitializing a container using LinuxContainerExecutor

2018-05-03 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462451#comment-16462451
 ] 

Shane Kumpf edited comment on YARN-8194 at 5/3/18 1:37 PM:
---

{quote}There is no container relaunch commited to branch-3.1.{quote}
It seems there is a bit of confusion here. The Relaunch feature was added in 
Hadoop 2.9 via YARN-3998 and does exist in branch-3.1. As this patch fixes an 
issue that causes the NM to shut down if someone tries an upgrade, I think this 
is needed in branch-3.1 as well, since that upgrade code has been committed 
there. It looks like this patch applies to branch-3.1 without issue.


was (Author: shaneku...@gmail.com):
{quote}There is no container relaunch commited to branch-3.1.\{quote}

It seems there is a bit of confusion here. The Relaunch feature was added in 
Hadoop 2.9 via YARN-3998 and does exist in branch-3.1. As this patch fixes an 
issue that causes the NM to shutdown if someone tries out upgrade, I think this 
is needed in branch-3.1 as well, since that upgrade code has been committed 
there. It looks like this patch applies to branch-3.1 without issue.

> Exception when reinitializing a container using LinuxContainerExecutor
> --
>
> Key: YARN-8194
> URL: https://issues.apache.org/jira/browse/YARN-8194
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Blocker
> Fix For: 3.2.0
>
> Attachments: YARN-8194.001.patch
>
>
> When a component instance is upgraded and the container executor is set to 
> {{org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor}}, then 
> the following exception is seen in the nodemanager:
> {code}
> Writing to cgroup task files...
> Creating local dirs...
> Can't open 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh
>  for output - File exists
> Getting exit code file...
> Creating script paths...
> Full command array for failed execution:
> [/usr/local/hadoop-3.2.0-SNAPSHOT/bin/container-executor, hbase, hbase, 1, 
> application_1524242413029_0001, container_1524242413029_0001_01_02, 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02,
>  
> /tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh,
>  
> /tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524242413029_0001/container_1524242413029_0001_01_02/container_1524242413029_0001_01_02.tokens,
>  
> /tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524242413029_0001/container_1524242413029_0001_01_02/container_1524242413029_0001_01_02.pid,
>  /tmp/hadoop-yarn/nm-local-dir, 
> /usr/local/hadoop-3.2.0-SNAPSHOT/logs/userlogs, cgroups=none]
> 2018-04-20 16:50:16,641 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime:
>  Launch container failed. Exception:
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException:
>  ExitCodeException exitCode=33: Could not create copy file 3 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh
> Could not create local files and directories
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:180)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime.launchContainer(DefaultLinuxContainerRuntime.java:118)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DelegatingLinuxContainerRuntime.launchContainer(DelegatingLinuxContainerRuntime.java:141)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.handleLaunchForLaunchType(LinuxContainerExecutor.java:562)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:477)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.launchContainer(ContainerLaunch.java:492)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:304)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:101)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> 

[jira] [Commented] (YARN-8194) Exception when reinitializing a container using LinuxContainerExecutor

2018-05-03 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462451#comment-16462451
 ] 

Shane Kumpf commented on YARN-8194:
---

{quote}There is no container relaunch commited to branch-3.1.\{quote}

It seems there is a bit of confusion here. The Relaunch feature was added in 
Hadoop 2.9 via YARN-3998 and does exist in branch-3.1. As this patch fixes an 
issue that causes the NM to shutdown if someone tries out upgrade, I think this 
is needed in branch-3.1 as well, since that upgrade code has been committed 
there. It looks like this patch applies to branch-3.1 without issue.

> Exception when reinitializing a container using LinuxContainerExecutor
> --
>
> Key: YARN-8194
> URL: https://issues.apache.org/jira/browse/YARN-8194
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Blocker
> Fix For: 3.2.0
>
> Attachments: YARN-8194.001.patch
>
>
> When a component instance is upgraded and the container executor is set to 
> {{org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor}}, then 
> the following exception is seen in the nodemanager:
> {code}
> Writing to cgroup task files...
> Creating local dirs...
> Can't open 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh
>  for output - File exists
> Getting exit code file...
> Creating script paths...
> Full command array for failed execution:
> [/usr/local/hadoop-3.2.0-SNAPSHOT/bin/container-executor, hbase, hbase, 1, 
> application_1524242413029_0001, container_1524242413029_0001_01_02, 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02,
>  
> /tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh,
>  
> /tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524242413029_0001/container_1524242413029_0001_01_02/container_1524242413029_0001_01_02.tokens,
>  
> /tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524242413029_0001/container_1524242413029_0001_01_02/container_1524242413029_0001_01_02.pid,
>  /tmp/hadoop-yarn/nm-local-dir, 
> /usr/local/hadoop-3.2.0-SNAPSHOT/logs/userlogs, cgroups=none]
> 2018-04-20 16:50:16,641 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime:
>  Launch container failed. Exception:
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException:
>  ExitCodeException exitCode=33: Could not create copy file 3 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh
> Could not create local files and directories
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:180)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime.launchContainer(DefaultLinuxContainerRuntime.java:118)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DelegatingLinuxContainerRuntime.launchContainer(DelegatingLinuxContainerRuntime.java:141)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.handleLaunchForLaunchType(LinuxContainerExecutor.java:562)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:477)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.launchContainer(ContainerLaunch.java:492)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:304)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:101)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: ExitCodeException exitCode=33: Could not create copy file 3 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh
> Could not create local files and directories
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:1009)
> 

[jira] [Commented] (YARN-7973) Support ContainerRelaunch for Docker containers

2018-05-03 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462446#comment-16462446
 ] 

Shane Kumpf commented on YARN-7973:
---

[~eyang] - I realized after YARN-8194 that this isn't in branch-3.1. I think it 
should be, as it fixes an issue introduced by YARN-5366 in the case of a 
RELAUNCH. YARN-5366 was committed in 3.1. Without this, RELAUNCH is broken when 
running Docker-based Native Services. It looks like the patch still applies to 
branch-3.1, but let me know if you'd like me to put up a new patch for branch-3.1.

> Support ContainerRelaunch for Docker containers
> ---
>
> Key: YARN-7973
> URL: https://issues.apache.org/jira/browse/YARN-7973
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>  Labels: Docker
> Fix For: 3.2.0
>
> Attachments: YARN-7973.001.patch, YARN-7973.002.patch, 
> YARN-7973.003.patch, YARN-7973.004.patch
>
>
> Prior to YARN-5366, {{container-executor}} would remove the Docker container 
> when it exited. The removal is now handled by the 
> {{DockerLinuxContainerRuntime}}. {{ContainerRelaunch}} is intended to reuse 
> the workdir from the previous attempt, and does not call {{cleanupContainer}} 
> prior to {{launchContainer}}. The container ID is reused as well. As a 
> result, the previous Docker container still exists, resulting in an error 
> from Docker indicating that a container by that name already exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8217) RmAuthenticationFilterInitializer /TimelineAuthenticationFilterInitializer should use Configuration.getPropsWithPrefix instead of iterator

2018-05-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462414#comment-16462414
 ] 

Hudson commented on YARN-8217:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14118 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14118/])
YARN-8217. RmAuthenticationFilterInitializer and (rohithsharmaks: rev 
ee2ce923a922bfc3e89ad6f0f6a25e776fe91ffb)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/security/http/RMAuthenticationFilterInitializer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/timeline/security/TimelineAuthenticationFilterInitializer.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestRMAuthenticationFilter.java


> RmAuthenticationFilterInitializer /TimelineAuthenticationFilterInitializer 
> should use Configuration.getPropsWithPrefix instead of iterator
> --
>
> Key: YARN-8217
> URL: https://issues.apache.org/jira/browse/YARN-8217
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Fix For: 3.2.0, 3.1.1, 3.0.3
>
> Attachments: YARN-8217.1.patch, YARN-8217.2.patch
>
>
> HADOOP-15411 fixed a similar issue for AuthenticationFilterInitializer. This 
> issue can occur in 
> RmAuthenticationFilterInitializer/TimelineAuthenticationFilterInitializer as 
> well



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8191) Fair scheduler: queue deletion without RM restart

2018-05-03 Thread Szilard Nemeth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462385#comment-16462385
 ] 

Szilard Nemeth commented on YARN-8191:
--

Hi [~grepas]!

Thanks for the latest patch, LGTM.

+1 (non-binding)

> Fair scheduler: queue deletion without RM restart
> -
>
> Key: YARN-8191
> URL: https://issues.apache.org/jira/browse/YARN-8191
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.1
>Reporter: Gergo Repas
>Assignee: Gergo Repas
>Priority: Major
> Attachments: Queue Deletion in Fair Scheduler.pdf, 
> YARN-8191.000.patch, YARN-8191.001.patch, YARN-8191.002.patch, 
> YARN-8191.003.patch, YARN-8191.004.patch, YARN-8191.005.patch
>
>
> The Fair Scheduler never cleans up queues even if they are deleted in the 
> allocation file, or were dynamically created and are never going to be used 
> again. Queues always remain in memory which leads to two following issues.
>  # Steady fairshares aren’t calculated correctly due to remaining queues
>  # WebUI shows deleted queues, which is confusing for users (YARN-4022).
> We want to support proper queue deletion without restarting the Resource 
> Manager:
>  # Static queues without any entries that are removed from fair-scheduler.xml 
> should be deleted from memory.
>  # Dynamic queues without any entries should be deleted.
>  # RM Web UI should only show the queues defined in the scheduler at that 
> point in time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8191) Fair scheduler: queue deletion without RM restart

2018-05-03 Thread Gergo Repas (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462364#comment-16462364
 ] 

Gergo Repas commented on YARN-8191:
---

[~snemeth] Thanks for the review! I've added these comments to the updated 
(v005) patch.

> Fair scheduler: queue deletion without RM restart
> -
>
> Key: YARN-8191
> URL: https://issues.apache.org/jira/browse/YARN-8191
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.1
>Reporter: Gergo Repas
>Assignee: Gergo Repas
>Priority: Major
> Attachments: Queue Deletion in Fair Scheduler.pdf, 
> YARN-8191.000.patch, YARN-8191.001.patch, YARN-8191.002.patch, 
> YARN-8191.003.patch, YARN-8191.004.patch, YARN-8191.005.patch
>
>
> The Fair Scheduler never cleans up queues even if they are deleted in the 
> allocation file, or were dynamically created and are never going to be used 
> again. Queues always remain in memory which leads to two following issues.
>  # Steady fairshares aren’t calculated correctly due to remaining queues
>  # WebUI shows deleted queues, which is confusing for users (YARN-4022).
> We want to support proper queue deletion without restarting the Resource 
> Manager:
>  # Static queues without any entries that are removed from fair-scheduler.xml 
> should be deleted from memory.
>  # Dynamic queues without any entries should be deleted.
>  # RM Web UI should only show the queues defined in the scheduler at that 
> point in time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8191) Fair scheduler: queue deletion without RM restart

2018-05-03 Thread Gergo Repas (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergo Repas updated YARN-8191:
--
Attachment: YARN-8191.005.patch

> Fair scheduler: queue deletion without RM restart
> -
>
> Key: YARN-8191
> URL: https://issues.apache.org/jira/browse/YARN-8191
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.1
>Reporter: Gergo Repas
>Assignee: Gergo Repas
>Priority: Major
> Attachments: Queue Deletion in Fair Scheduler.pdf, 
> YARN-8191.000.patch, YARN-8191.001.patch, YARN-8191.002.patch, 
> YARN-8191.003.patch, YARN-8191.004.patch, YARN-8191.005.patch
>
>
> The Fair Scheduler never cleans up queues even if they are deleted in the 
> allocation file, or were dynamically created and are never going to be used 
> again. Queues always remain in memory which leads to two following issues.
>  # Steady fairshares aren’t calculated correctly due to remaining queues
>  # WebUI shows deleted queues, which is confusing for users (YARN-4022).
> We want to support proper queue deletion without restarting the Resource 
> Manager:
>  # Static queues without any entries that are removed from fair-scheduler.xml 
> should be deleted from memory.
>  # Dynamic queues without any entries should be deleted.
>  # RM Web UI should only show the queues defined in the scheduler at that 
> point in time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7892) Revisit NodeAttribute class structure

2018-05-03 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462334#comment-16462334
 ] 

Naganarasimha G R commented on YARN-7892:
-

Hi [~bibinchundatt],

     Could you please indicate which classes each of your comments refers to? 
Otherwise it is difficult to tell which part of the code you mean.
 # Not sure which class you are referring to; can you provide more information?
 # Nice catch; since the default is set, I think we can remove lines 60 to 62?
 # I think you are referring to NodeAttributesManagerImpl. If so, the current 
code is creating a new collection, so the backend structure does not get 
impacted. Do we really need to make it an unmodifiable collection?
 # Can you be more specific about which classes require javadoc changes? (I can 
see it is required for NodeAttributesManager.getAttributesToNodes.) Anything 
else you have in mind?
 # I hope you are referring to the latest patch, as I tested it on my local 
machine and it seems to be working fine. 

> Revisit NodeAttribute class structure
> -
>
> Key: YARN-7892
> URL: https://issues.apache.org/jira/browse/YARN-7892
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-7892-YARN-3409.001.patch, 
> YARN-7892-YARN-3409.002.patch, YARN-7892-YARN-3409.003.WIP.patch, 
> YARN-7892-YARN-3409.003.patch, YARN-7892-YARN-3409.004.patch, 
> YARN-7892-YARN-3409.005.patch
>
>
> In the existing structure, we had kept the type and value along with the 
> attribute which would create confusion to the user to understand the APIs as 
> they would not be clear as to what needs to be sent for type and value while 
> fetching the mappings for node(s).
> As well as equals will not make sense when we compare only for prefix and 
> name where as values for them might be different.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8191) Fair scheduler: queue deletion without RM restart

2018-05-03 Thread Szilard Nemeth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462289#comment-16462289
 ] 

Szilard Nemeth commented on YARN-8191:
--

Hi [~grepas]!

Thanks for the updated patch.

LGTM, except for one thing we discussed: please add a javadoc comment explaining how 
{code:java}
assignedApps.remove{code}
is used in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue#addApp, 
because it's not immediately clear from the code.

Maybe another comment for the field itself (assignedApps) would be helpful.

> Fair scheduler: queue deletion without RM restart
> -
>
> Key: YARN-8191
> URL: https://issues.apache.org/jira/browse/YARN-8191
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.1
>Reporter: Gergo Repas
>Assignee: Gergo Repas
>Priority: Major
> Attachments: Queue Deletion in Fair Scheduler.pdf, 
> YARN-8191.000.patch, YARN-8191.001.patch, YARN-8191.002.patch, 
> YARN-8191.003.patch, YARN-8191.004.patch
>
>
> The Fair Scheduler never cleans up queues even if they are deleted in the 
> allocation file, or were dynamically created and are never going to be used 
> again. Queues always remain in memory which leads to two following issues.
>  # Steady fairshares aren’t calculated correctly due to remaining queues
>  # WebUI shows deleted queues, which is confusing for users (YARN-4022).
> We want to support proper queue deletion without restarting the Resource 
> Manager:
>  # Static queues without any entries that are removed from fair-scheduler.xml 
> should be deleted from memory.
>  # Dynamic queues without any entries should be deleted.
>  # RM Web UI should only show the queues defined in the scheduler at that 
> point in time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7892) Revisit NodeAttribute class structure

2018-05-03 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462276#comment-16462276
 ] 

Bibin A Chundatt commented on YARN-7892:


[~Naganarasimha]

Please find the comments

# For both getClusterAttributes and getAttributesToNodes we are iterating over 
all the attribute IDs twice. Can we optimize this?
{code}
57    @Override
58    public String getAttributePrefix() {
59      NodeAttributeIDProtoOrBuilder p = viaProto ? proto : builder;
60      if (!p.hasAttributePrefix()) {
61        return null;
62      }
63      return p.getAttributePrefix();
64    }
{code}
# The default value should be either empty or CENTRALIZED, right?
# For getClusterAttributes, getNodesToAttributes and getAttributesToNodes, IIUC 
we should return unmodifiable collections (see the sketch after this list).
# Can you update the javadocs for all APIs since the signatures have changed?
# The test case failures look related; please handle them.
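
For clarity, a minimal, self-contained sketch of the two options being discussed 
(class and field names below are made up for illustration, not taken from the 
patch): a defensive copy gives callers their own snapshot at the cost of copying 
on every call, while an unmodifiable view shares the backing collection but fails 
fast on mutation.

{code:java}
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Hypothetical manager holding an internal set of attribute names.
class AttributeStoreSketch {
  private final Set<String> attributes = new HashSet<>();

  void add(String attribute) {
    attributes.add(attribute);
  }

  // Option 1: defensive copy - callers get their own mutable snapshot, so
  // later changes on either side are not visible to the other.
  Set<String> getAttributesCopy() {
    return new HashSet<>(attributes);
  }

  // Option 2: unmodifiable view - no copying, but any mutation attempt by a
  // caller fails fast with UnsupportedOperationException.
  Set<String> getAttributesView() {
    return Collections.unmodifiableSet(attributes);
  }
}
{code}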



> Revisit NodeAttribute class structure
> -
>
> Key: YARN-7892
> URL: https://issues.apache.org/jira/browse/YARN-7892
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-7892-YARN-3409.001.patch, 
> YARN-7892-YARN-3409.002.patch, YARN-7892-YARN-3409.003.WIP.patch, 
> YARN-7892-YARN-3409.003.patch, YARN-7892-YARN-3409.004.patch, 
> YARN-7892-YARN-3409.005.patch
>
>
> In the existing structure, we had kept the type and value along with the 
> attribute which would create confusion to the user to understand the APIs as 
> they would not be clear as to what needs to be sent for type and value while 
> fetching the mappings for node(s).
> As well as equals will not make sense when we compare only for prefix and 
> name where as values for them might be different.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6630) Container worker dir could not recover when NM restart

2018-05-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462267#comment-16462267
 ] 

genericqa commented on YARN-6630:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 6 new + 186 unchanged - 2 fixed = 192 total (was 188) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
29s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-6630 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12921723/YARN-6630.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f7630068ff25 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 85381c7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/20580/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20580/testReport/ |
| Max. process+thread count | 342 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 

[jira] [Updated] (YARN-6589) Recover all resources when NM restart

2018-05-03 Thread Yang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang updated YARN-6589:

Release Note:   (was: ContainerImpl#getResource() has been changed to get 
from containerTokenIdentifier and containerTokenIdentifier could be recovered 
correctly. Just close this jira as Won't Fix)

> Recover all resources when NM restart
> -
>
> Key: YARN-6589
> URL: https://issues.apache.org/jira/browse/YARN-6589
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Blocker
> Attachments: YARN-6589-YARN-3926.001.patch, YARN-6589.001.patch, 
> YARN-6589.002.patch
>
>
> When NM restart, containers will be recovered. However, only memory and 
> vcores in capability have been recovered. All resources need to be recovered.
> {code:title=ContainerImpl.java}
>   // resource capability had been updated before NM was down
>   this.resource = 
> Resource.newInstance(recoveredCapability.getMemorySize(),
>   recoveredCapability.getVirtualCores());
> {code}
> It should be like this.
> {code:title=ContainerImpl.java}
>   // resource capability had been updated before NM was down
>   // need to recover all resources, not only memory and vcores
>   this.resource = Resources.clone(recoveredCapability);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6589) Recover all resources when NM restart

2018-05-03 Thread Yang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462233#comment-16462233
 ] 

Yang Wang commented on YARN-6589:
-

ContainerImpl#getResource() has been changed to get the resource from the 
containerTokenIdentifier, and the containerTokenIdentifier can be recovered 
correctly. So this JIRA can just be closed as Won't Fix.

> Recover all resources when NM restart
> -
>
> Key: YARN-6589
> URL: https://issues.apache.org/jira/browse/YARN-6589
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Blocker
> Attachments: YARN-6589-YARN-3926.001.patch, YARN-6589.001.patch, 
> YARN-6589.002.patch
>
>
> When NM restart, containers will be recovered. However, only memory and 
> vcores in capability have been recovered. All resources need to be recovered.
> {code:title=ContainerImpl.java}
>   // resource capability had been updated before NM was down
>   this.resource = 
> Resource.newInstance(recoveredCapability.getMemorySize(),
>   recoveredCapability.getVirtualCores());
> {code}
> It should be like this.
> {code:title=ContainerImpl.java}
>   // resource capability had been updated before NM was down
>   // need to recover all resources, not only memory and vcores
>   this.resource = Resources.clone(recoveredCapability);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6630) Container worker dir could not recover when NM restart

2018-05-03 Thread Yang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang updated YARN-6630:

Attachment: YARN-6630.003.patch

> Container worker dir could not recover when NM restart
> --
>
> Key: YARN-6630
> URL: https://issues.apache.org/jira/browse/YARN-6630
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Major
> Attachments: YARN-6630.001.patch, YARN-6630.002.patch, 
> YARN-6630.003.patch
>
>
> When ContainerRetryPolicy is NEVER_RETRY, container worker dir will not be 
> saved in NM state store. 
> {code:title=ContainerLaunch.java}
> ...
>   private void recordContainerWorkDir(ContainerId containerId,
>   String workDir) throws IOException{
> container.setWorkDir(workDir);
> if (container.isRetryContextSet()) {
>   context.getNMStateStore().storeContainerWorkDir(containerId, workDir);
> }
>   }
> {code}
> Then, when the NM restarts, container.workDir cannot be recovered and is null, 
> which may cause exceptions.
> We already have a related problem: after an NM restart, if we send a resource 
> localization request while the container is running (YARN-1503), the NM will 
> fail because of the following exception.
> So container.workDir always needs to be saved in the NM state store.
> {code:title=ContainerImpl.java}
>   static class ResourceLocalizedWhileRunningTransition
>   extends ContainerTransition {
> ...
>   String linkFile = new Path(container.workDir, link).toString();
> ...
> {code}
> {code}
> java.lang.IllegalArgumentException: Can not create a Path from a null string
> at org.apache.hadoop.fs.Path.checkPathArg(Path.java:159)
> at org.apache.hadoop.fs.Path.(Path.java:175)
> at org.apache.hadoop.fs.Path.(Path.java:110)
> ... ...
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7481) Gpu locality support for Better AI scheduling

2018-05-03 Thread Chen Qingcha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Qingcha updated YARN-7481:
---
Attachment: hadoop-2.7.2.gpu-port.patch

> Gpu locality support for Better AI scheduling
> -
>
> Key: YARN-7481
> URL: https://issues.apache.org/jira/browse/YARN-7481
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api, RM, yarn
>Affects Versions: 2.7.2
>Reporter: Chen Qingcha
>Priority: Major
> Fix For: 2.7.2
>
> Attachments: GPU locality support for Job scheduling.pdf, 
> hadoop-2.7.2-gpu.patch, hadoop-2.7.2.gpu-port.patch, 
> hadoop-2.7.2.port-gpu.patch
>
>   Original Estimate: 1,344h
>  Remaining Estimate: 1,344h
>
> We enhance Hadoop with GPU support for better AI job scheduling. 
> Currently, YARN-3926 also supports GPU scheduling, which treats GPUs as a 
> countable resource. 
> However, GPU placement is also very important for deep learning jobs to run 
> efficiently.
>  For example, a 2-GPU job running on GPUs {0,1} could be faster than one 
> running on GPUs {0,7}, if GPUs 0 and 1 are under the same PCI-E switch while 
> 0 and 7 are not.
>  We add support to Hadoop 2.7.2 to enable GPU locality scheduling, which 
> supports fine-grained GPU placement. 
> A 64-bit bitmap is added to the YARN Resource, which indicates both GPU usage 
> and locality information on a node (up to 64 GPUs per node): a '1' in a bit 
> position means the corresponding GPU is available, '0' otherwise.
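
A small, self-contained sketch of how such a 64-bit availability bitmap can be 
manipulated (helper names are hypothetical; the actual API in the attached patch 
may differ):

{code:java}
// Illustration of a 64-bit GPU availability bitmap: bit i set to 1 means
// GPU i on the node is free. Helper names here are hypothetical.
public class GpuBitmapSketch {

  static boolean isAvailable(long bitmap, int gpu) {
    return (bitmap & (1L << gpu)) != 0;
  }

  static long allocate(long bitmap, int gpu) {
    return bitmap & ~(1L << gpu);   // clear the bit: GPU is now in use
  }

  static long release(long bitmap, int gpu) {
    return bitmap | (1L << gpu);    // set the bit: GPU is free again
  }

  public static void main(String[] args) {
    long bitmap = 0b0000_0011L;                 // GPUs 0 and 1 are free
    System.out.println(isAvailable(bitmap, 1)); // true
    bitmap = allocate(bitmap, 1);               // take GPU 1
    System.out.println(isAvailable(bitmap, 1)); // false
    System.out.println(Long.toBinaryString(release(bitmap, 1))); // "11"
  }
}
{code}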



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7481) Gpu locality support for Better AI scheduling

2018-05-03 Thread Chen Qingcha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Qingcha updated YARN-7481:
---
Attachment: (was: hadoop-2.7.2-gpu-port.patch)

> Gpu locality support for Better AI scheduling
> -
>
> Key: YARN-7481
> URL: https://issues.apache.org/jira/browse/YARN-7481
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api, RM, yarn
>Affects Versions: 2.7.2
>Reporter: Chen Qingcha
>Priority: Major
> Fix For: 2.7.2
>
> Attachments: GPU locality support for Job scheduling.pdf, 
> hadoop-2.7.2-gpu.patch, hadoop-2.7.2.port-gpu.patch
>
>   Original Estimate: 1,344h
>  Remaining Estimate: 1,344h
>
> We enhance Hadoop with GPU support for better AI job scheduling. 
> Currently, YARN-3926 also supports GPU scheduling, which treats GPUs as a 
> countable resource. 
> However, GPU placement is also very important for deep learning jobs to run 
> efficiently.
>  For example, a 2-GPU job running on GPUs {0,1} could be faster than one 
> running on GPUs {0,7}, if GPUs 0 and 1 are under the same PCI-E switch while 
> 0 and 7 are not.
>  We add support to Hadoop 2.7.2 to enable GPU locality scheduling, which 
> supports fine-grained GPU placement. 
> A 64-bit bitmap is added to the YARN Resource, which indicates both GPU usage 
> and locality information on a node (up to 64 GPUs per node): a '1' in a bit 
> position means the corresponding GPU is available, '0' otherwise.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8201) Skip stacktrace of ApplicationNotFoundException at server side

2018-05-03 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462099#comment-16462099
 ] 

Bibin A Chundatt commented on YARN-8201:


[~BilwaST]
Could you please handle checkstyle and whitespace errors.

> Skip stacktrace of ApplicationNotFoundException at server side
> --
>
> Key: YARN-8201
> URL: https://issues.apache.org/jira/browse/YARN-8201
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Bibin A Chundatt
>Assignee: Bilwa S T
>Priority: Minor
> Attachments: YARN-8201-001.patch
>
>
> Currently the full stack trace of exceptions like 
> ApplicationNotFoundException, ApplicationAttemptNotFoundException, etc. is 
> logged at the server side. Wrong client operations could thus bloat the 
> server logs.
> {{Server.addTerseExceptions}} could be used to reduce server-side logging.
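
A minimal sketch of this approach. {{Server#addTerseExceptions}} is an 
existing method on {{org.apache.hadoop.ipc.Server}}; the wrapper method and 
the exact set of registered exception classes below are illustrative, not 
the contents of the attached patches, and the wiring assumes access to the 
IPC server instance created by the RM client-facing service.

{code:java}
import org.apache.hadoop.ipc.Server;
import org.apache.hadoop.yarn.exceptions.ApplicationAttemptNotFoundException;
import org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException;
import org.apache.hadoop.yarn.exceptions.ContainerNotFoundException;

public final class TerseExceptionsSketch {

  /**
   * Register "expected" exceptions as terse: the IPC server still returns
   * them to the client, but logs only a single line on the server side
   * instead of the full stack trace.
   */
  static void registerTerseExceptions(Server server) {
    server.addTerseExceptions(
        ApplicationNotFoundException.class,
        ApplicationAttemptNotFoundException.class,
        ContainerNotFoundException.class);
  }
}
{code}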



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7933) [atsv2 read acls] Add TimelineWriter#writeDomain

2018-05-03 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16461968#comment-16461968
 ] 

genericqa commented on YARN-7933:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
5s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 42m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 18s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
1s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-client in 
the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
45s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in 
the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
|