[jira] [Commented] (YARN-6133) [ATSv2 Security] Renew delegation token for app automatically if an app collector is active

2017-08-09 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121104#comment-16121104
 ] 

Rohith Sharma K S commented on YARN-6133:
-

Committed to the YARN-5355 branch. Since this depends on YARN-6130, I have not 
committed to YARN-5355_branch2 yet.

> [ATSv2 Security] Renew delegation token for app automatically if an app 
> collector is active
> ---
>
> Key: YARN-6133
> URL: https://issues.apache.org/jira/browse/YARN-6133
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6133-YARN-5355.01.patch, 
> YARN-6133-YARN-5355.02.patch, YARN-6133-YARN-5355.03.patch, 
> YARN-6133-YARN-5355.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM

2017-08-09 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121103#comment-16121103
 ] 

Rohith Sharma K S commented on YARN-6130:
-

[~varun_saxena] some of the javadocs are failing for branch-2. Would you fix 
those? YARN-6133 needs to be applied on top of it.

> [ATSv2 Security] Generate a delegation token for AM when app collector is 
> created and pass it to AM via NM and RM
> -
>
> Key: YARN-6130
> URL: https://issues.apache.org/jira/browse/YARN-6130
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6130-YARN-5355.01.patch, 
> YARN-6130-YARN-5355.02.patch, YARN-6130-YARN-5355.03.patch, 
> YARN-6130-YARN-5355.04.patch, YARN-6130-YARN-5355.05.patch, 
> YARN-6130-YARN-5355.06.patch, YARN-6130-YARN-5355_branch2.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6323) Rolling upgrade/config change is broken on timeline v2.

2017-08-09 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121098#comment-16121098
 ] 

Rohith Sharma K S commented on YARN-6323:
-

Ahh.. I remember the discussion, Vrushali. Thanks for pointing it out. It's true 
that NM start will fail if the flow context is null. The current patch looks 
reasonable to me. I will take a detailed look at it.
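For context, the kind of defensive default being discussed could look roughly 
like the sketch below (names and fields are hypothetical, not the actual 
NM/ATSv2 classes or the attached patch): when recovery finds no persisted flow 
context for an app, fall back to defaults derived from the application id 
instead of failing NM start.

{code:java}
// Hypothetical sketch of a default flow context on recovery; not actual NM code.
public class FlowContextFallbackSketch {

  static final class FlowContext {
    final String flowName;
    final String flowVersion;
    final long flowRunId;

    FlowContext(String flowName, String flowVersion, long flowRunId) {
      this.flowName = flowName;
      this.flowVersion = flowVersion;
      this.flowRunId = flowRunId;
    }
  }

  // If the recovered state has no flow context (the app predates enabling
  // timeline v2), derive defaults from the app id rather than failing NM start.
  static FlowContext recoverFlowContext(FlowContext persisted, String appId,
      long submitTime) {
    if (persisted != null) {
      return persisted;
    }
    return new FlowContext(appId, "1", submitTime);
  }

  public static void main(String[] args) {
    FlowContext fc =
        recoverFlowContext(null, "application_1500967702061_2512", 1500967702061L);
    System.out.println(fc.flowName + " / " + fc.flowVersion + " / " + fc.flowRunId);
  }
}
{code}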

> Rolling upgrade/config change is broken on timeline v2. 
> 
>
> Key: YARN-6323
> URL: https://issues.apache.org/jira/browse/YARN-6323
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6323.001.patch
>
>
> Found this issue when deploying on real clusters. If there are apps running 
> when we enable timeline v2 (with work preserving restart enabled), node 
> managers will fail to start due to missing app context data. We should 
> probably assign some default names to these "left over" apps. I believe it's 
> suboptimal to let users clean up the whole cluster before enabling timeline 
> v2. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5148) [YARN-3368] Add page to new YARN UI to view server side configurations/logs/JVM-metrics

2017-08-09 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121097#comment-16121097
 ] 

Sunil G commented on YARN-5148:
---

Thanks [~lewuathe], sorry for the delay here. I'll help review this later today.

> [YARN-3368] Add page to new YARN UI to view server side 
> configurations/logs/JVM-metrics
> ---
>
> Key: YARN-5148
> URL: https://issues.apache.org/jira/browse/YARN-5148
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp, yarn-ui-v2
>Reporter: Wangda Tan
>Assignee: Kai Sasaki
>  Labels: oct16-medium
> Attachments: pretty-json-metrics.png, Screen Shot 2016-09-11 at 
> 23.28.31.png, Screen Shot 2016-09-13 at 22.27.00.png, 
> UsingStringifyPrint.png, YARN-5148.07.patch, YARN-5148.08.patch, 
> YARN-5148.09.patch, YARN-5148.10.patch, YARN-5148.11.patch, 
> YARN-5148.12.patch, YARN-5148.13.patch, YARN-5148-YARN-3368.01.patch, 
> YARN-5148-YARN-3368.02.patch, YARN-5148-YARN-3368.03.patch, 
> YARN-5148-YARN-3368.04.patch, YARN-5148-YARN-3368.05.patch, 
> YARN-5148-YARN-3368.06.patch, yarn-conf.png, yarn-tools.png
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5146) [YARN-3368] Supports Fair Scheduler in new YARN UI

2017-08-09 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121096#comment-16121096
 ] 

Sunil G commented on YARN-5146:
---

Sorry for the last-minute hiccup. I earlier tested this patch with a simple 
queue hierarchy and it was fine. Today I tested with a 5-level queue hierarchy 
and the *Queue page is NOT loading*. Attaching the error here. Without the 
patch (existing trunk), the same page loads fine. This was tested with the 
capacity scheduler. [~ayousufi], please help check this issue.

{noformat}
TypeError: Cannot read property 'split' of undefined
at Class.normalizeSingleResponse (capacity-queue.js:55)
at Class.superWrapper [as normalizeSingleResponse] (ember.debug.js:22066)
at Class.handleQueue (capacity-queue.js:73)
at Class.handleQueue (capacity-queue.js:82)
at Class.normalizeArrayResponse (capacity-queue.js:97)
at Class.normalizeQueryResponse (json-serializer.js:313)
at Class.normalizeResponse (json-serializer.js:215)
at ember$data$lib$system$store$serializer$response$$normalizeResponseHelper 
(serializer-response.js:82)
at finders.js:155
at Backburner.run (ember.debug.js:681)
ember.debug.js:30877 TypeError: Cannot read property '0' of undefined
at Class.error (application.js:13)
at Router.triggerEvent (ember.debug.js:27476)
at Object.trigger (ember.debug.js:51925)
at Transition.trigger (ember.debug.js:51739)
at ember.debug.js:51559
at tryCatch (ember.debug.js:52258)
at invokeCallback (ember.debug.js:52273)
at publish (ember.debug.js:52241)
at publishRejection (ember.debug.js:52176)
at ember.debug.js:30835
onerrorDefault @ ember.debug.js:30877
{noformat}

> [YARN-3368] Supports Fair Scheduler in new YARN UI
> --
>
> Key: YARN-5146
> URL: https://issues.apache.org/jira/browse/YARN-5146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Abdullah Yousufi
> Attachments: YARN-5146.001.patch, YARN-5146.002.patch, 
> YARN-5146.003.patch, YARN-5146.004.patch
>
>
> The current implementation in branch YARN-3368 only supports the capacity 
> scheduler; we want to make it support the fair scheduler.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-09 Thread Yuqi Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121095#comment-16121095
 ] 

Yuqi Wang commented on YARN-6959:
-

I meant the heartbeat from Step0 is blocked between MARK1 and MARK3 (i.e. 
blocked until Step3, when RM switches to the new attempt). So it may be blocked 
at MARK2, or at some other place between MARK1 and MARK3.

Also, the RPC time before MARK1 cannot be ignored, and it can run in parallel 
with the process (AM container completes -> NM reports to RM -> RM processes a 
series of events).

I have not yet figured out which part accounts for the largest share of time. 
In any case, there is a race condition.
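To make the window concrete, here is a small self-contained sketch of the 
check-then-act race (class and method names are illustrative only, not the 
actual RM/scheduler code): the attempt id passes the precondition check, the RM 
switches attempts while the RPC is still in flight, and the stale ask is then 
recorded against whichever attempt is current.

{code:java}
// Illustrative sketch only; names do not correspond to the real YARN classes.
import java.util.ArrayList;
import java.util.List;

public class AttemptRaceSketch {
  static volatile int currentAttemptId = 1;
  static final List<String> currentAttemptAsks = new ArrayList<>();

  // Models the non-atomic check-then-update in allocate(): the precondition
  // check (MARK1) and the request update can be separated by an attempt switch.
  static void allocate(int callerAttemptId, String ask) throws InterruptedException {
    if (callerAttemptId != currentAttemptId) {
      return;                          // precondition check (MARK1)
    }
    Thread.sleep(50);                  // RPC still being processed (MARK2)
    // By now RM may already have switched attempts (MARK3), yet the stale ask
    // is recorded against whatever attempt is "current".
    synchronized (currentAttemptAsks) {
      currentAttemptAsks.add("attempt-" + currentAttemptId + ": " + ask);
    }
  }

  public static void main(String[] args) throws Exception {
    Thread oldAmHeartbeat = new Thread(() -> {
      try {
        allocate(1, "ask-from-old-AM");
      } catch (InterruptedException ignored) {
      }
    });
    oldAmHeartbeat.start();
    Thread.sleep(10);
    currentAttemptId = 2;              // RM switches to the new attempt (Step3)
    oldAmHeartbeat.join();
    // Prints an ask from attempt 1 recorded under attempt 2.
    System.out.println(currentAttemptAsks);
  }
}
{code}

This also mirrors why per-attempt request objects close the hole: a stale ask 
would then land in the old attempt's bookkeeping instead of the new one's.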

> RM may allocate wrong AM Container for new attempt
> --
>
> Key: YARN-6959
> URL: https://issues.apache.org/jira/browse/YARN-6959
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, fairscheduler, scheduler
>Affects Versions: 2.7.1
>Reporter: Yuqi Wang
>Assignee: Yuqi Wang
>  Labels: patch
> Fix For: 2.7.1, 3.0.0-alpha4
>
> Attachments: YARN-6959.001.patch, YARN-6959.002.patch, 
> YARN-6959.003.patch, YARN-6959.004.patch, YARN-6959.005.patch, 
> YARN-6959-branch-2.7.001.patch, YARN-6959.yarn_nm.log.zip, 
> YARN-6959.yarn_rm.log.zip
>
>
> *Issue Summary:*
> Previous attempt ResourceRequest may be recorded into current attempt 
> ResourceRequests. These mis-recorded ResourceRequests may confuse AM 
> Container Request and Allocation for current attempt.
> *Issue Pipeline:*
> {code:java}
> // Executing precondition check for the incoming attempt id.
> ApplicationMasterService.allocate() ->
> scheduler.allocate(attemptId, ask, ...) ->
> // Previous precondition check for the attempt id may be outdated here, 
> // i.e. the currentAttempt may not be the corresponding attempt of the 
> attemptId.
> // E.g. the attempt id may correspond to the previous attempt.
> currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
> // Previous attempt ResourceRequests may be recorded into current attempt 
> ResourceRequests
> currentAttempt.updateResourceRequests(ask) ->
> // RM may allocate the wrong AM Container for the current attempt, because its 
> ResourceRequests
> // may come from the previous attempt and can be any ResourceRequests the 
> previous AM asked for,
> // and there is no matching logic between the original AM Container 
> ResourceRequest and 
> // the returned amContainerAllocation below.
> AMContainerAllocatedTransition.transition(...) ->
> amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
> {code}
> *Patch Correctness:*
> After this patch, RM will always record ResourceRequests from different 
> attempts into different SchedulerApplicationAttempt.AppSchedulingInfo objects.
> So, even if RM still records ResourceRequests from the old attempt at any 
> time, those ResourceRequests will be recorded in the old AppSchedulingInfo 
> object and will not impact the current attempt's resource requests and 
> allocation.
> *Concerns:*
> The getApplicationAttempt function in AbstractYarnScheduler is confusing; we 
> had better rename it to getCurrentApplicationAttempt, and reconsider whether 
> there are any other bugs related to getApplicationAttempt.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6133) [ATSv2 Security] Renew delegation token for app automatically if an app collector is active

2017-08-09 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121091#comment-16121091
 ] 

Rohith Sharma K S commented on YARN-6133:
-

committing shortly

> [ATSv2 Security] Renew delegation token for app automatically if an app 
> collector is active
> ---
>
> Key: YARN-6133
> URL: https://issues.apache.org/jira/browse/YARN-6133
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6133-YARN-5355.01.patch, 
> YARN-6133-YARN-5355.02.patch, YARN-6133-YARN-5355.03.patch, 
> YARN-6133-YARN-5355.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5978) ContainerScheduler and Container state machine changes to support ExecType update

2017-08-09 Thread kartheek muthyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kartheek muthyala updated YARN-5978:

Attachment: YARN-5978.001.patch

Submitting the initial version of the patch to figure out javac, findbugs, 
Javadoc and other issues

> ContainerScheduler and Container state machine changes to support ExecType 
> update
> -
>
> Key: YARN-5978
> URL: https://issues.apache.org/jira/browse/YARN-5978
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: kartheek muthyala
> Attachments: YARN-5978.001.patch
>
>
> ContainerScheduler should support updateContainer API for
> - Container Resource update
> - ExecType update that can change an opportunistic to guaranteed and 
> vice-versa
> Adding a new ContainerState event, UpdateContainerStateEvent to support 
> UPDATE_CONTAINER call from RM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6133) [ATSv2 Security] Renew delegation token for app automatically if an app collector is active

2017-08-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121084#comment-16121084
 ] 

Jian He commented on YARN-6133:
---

patch lgtm

> [ATSv2 Security] Renew delegation token for app automatically if an app 
> collector is active
> ---
>
> Key: YARN-6133
> URL: https://issues.apache.org/jira/browse/YARN-6133
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6133-YARN-5355.01.patch, 
> YARN-6133-YARN-5355.02.patch, YARN-6133-YARN-5355.03.patch, 
> YARN-6133-YARN-5355.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121079#comment-16121079
 ] 

Jian He commented on YARN-6959:
---

Do you mean step0 is blocked at MARK2 until this entire process (AM container 
completes -> NM reports to RM -> RM processes a series of events -> and finally 
a new attempt gets added in the scheduler) is completed?
The question is why step0 would be blocked for so long. There's no contention 
to grab the lock, if I understand correctly.

> RM may allocate wrong AM Container for new attempt
> --
>
> Key: YARN-6959
> URL: https://issues.apache.org/jira/browse/YARN-6959
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, fairscheduler, scheduler
>Affects Versions: 2.7.1
>Reporter: Yuqi Wang
>Assignee: Yuqi Wang
>  Labels: patch
> Fix For: 2.7.1, 3.0.0-alpha4
>
> Attachments: YARN-6959.001.patch, YARN-6959.002.patch, 
> YARN-6959.003.patch, YARN-6959.004.patch, YARN-6959.005.patch, 
> YARN-6959-branch-2.7.001.patch, YARN-6959.yarn_nm.log.zip, 
> YARN-6959.yarn_rm.log.zip
>
>
> *Issue Summary:*
> Previous attempt ResourceRequest may be recorded into current attempt 
> ResourceRequests. These mis-recorded ResourceRequests may confuse AM 
> Container Request and Allocation for current attempt.
> *Issue Pipeline:*
> {code:java}
> // Executing precondition check for the incoming attempt id.
> ApplicationMasterService.allocate() ->
> scheduler.allocate(attemptId, ask, ...) ->
> // Previous precondition check for the attempt id may be outdated here, 
> // i.e. the currentAttempt may not be the corresponding attempt of the 
> attemptId.
> // E.g. the attempt id may correspond to the previous attempt.
> currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
> // Previous attempt ResourceRequests may be recorded into current attempt 
> ResourceRequests
> currentAttempt.updateResourceRequests(ask) ->
> // RM may allocate the wrong AM Container for the current attempt, because its 
> ResourceRequests
> // may come from the previous attempt and can be any ResourceRequests the 
> previous AM asked for,
> // and there is no matching logic between the original AM Container 
> ResourceRequest and 
> // the returned amContainerAllocation below.
> AMContainerAllocatedTransition.transition(...) ->
> amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
> {code}
> *Patch Correctness:*
> After this patch, RM will always record ResourceRequests from different 
> attempts into different SchedulerApplicationAttempt.AppSchedulingInfo objects.
> So, even if RM still records ResourceRequests from the old attempt at any 
> time, those ResourceRequests will be recorded in the old AppSchedulingInfo 
> object and will not impact the current attempt's resource requests and 
> allocation.
> *Concerns:*
> The getApplicationAttempt function in AbstractYarnScheduler is confusing; we 
> had better rename it to getCurrentApplicationAttempt, and reconsider whether 
> there are any other bugs related to getApplicationAttempt.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6874) Supplement timestamp for min start/max end time columns in flow run table to avoid overwrite

2017-08-09 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-6874:
---
Summary: Supplement timestamp for min start/max end time columns in flow 
run table to avoid overwrite  (was: 
TestHBaseStorageFlowRun.testWriteFlowRunMinMax fails intermittently)

> Supplement timestamp for min start/max end time columns in flow run table to 
> avoid overwrite
> 
>
> Key: YARN-6874
> URL: https://issues.apache.org/jira/browse/YARN-6874
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Vrushali C
> Attachments: YARN-6874-YARN-5355.0001.patch
>
>
> {noformat}
> testWriteFlowRunMinMax(org.apache.hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun)
>   Time elapsed: 0.088 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<142502690> but was:<1425026901000>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun.testWriteFlowRunMinMax(TestHBaseStorageFlowRun.java:237)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6874) Supplement timestamp for min start/max end time columns in flow run table to avoid overwrite

2017-08-09 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121077#comment-16121077
 ] 

Varun Saxena commented on YARN-6874:


+1.
Will commit it shortly.
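For readers following along, the supplemented-timestamp idea in the new summary 
can be sketched as follows (constants and names here are assumptions, not the 
committed patch): widen the millisecond cell timestamp with a small per-writer 
offset so that two writes in the same millisecond get distinct cell versions 
instead of overwriting each other, while the original millisecond stays 
recoverable on read.

{code:java}
// Sketch of the "supplemented timestamp" idea; constants and names are assumed.
public class SupplementedTimestampSketch {
  // Leave room for a small per-writer offset inside one millisecond.
  static final long TS_MULTIPLIER = 1000L;

  static long supplement(long actualTsMillis, long writerOffset) {
    return actualTsMillis * TS_MULTIPLIER + (writerOffset % TS_MULTIPLIER);
  }

  static long original(long supplementedTs) {
    return supplementedTs / TS_MULTIPLIER;
  }

  public static void main(String[] args) {
    long ts = 1425026901000L;        // some millisecond timestamp
    long a = supplement(ts, 7);      // writer A
    long b = supplement(ts, 13);     // writer B, same millisecond
    // Distinct cell timestamps, so neither write overwrites the other,
    // and the original millisecond is still recoverable for reads.
    System.out.println(a != b);
    System.out.println(original(a) == ts);
  }
}
{code}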

> Supplement timestamp for min start/max end time columns in flow run table to 
> avoid overwrite
> 
>
> Key: YARN-6874
> URL: https://issues.apache.org/jira/browse/YARN-6874
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Vrushali C
> Attachments: YARN-6874-YARN-5355.0001.patch
>
>
> {noformat}
> testWriteFlowRunMinMax(org.apache.hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun)
>   Time elapsed: 0.088 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<142502690> but was:<1425026901000>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun.testWriteFlowRunMinMax(TestHBaseStorageFlowRun.java:237)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-09 Thread Yuqi Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121065#comment-16121065
 ] 

Yuqi Wang commented on YARN-6959:
-

Basically, I meant that the allocate RPC call sent before the AM process exited 
caused this issue.
[~jianhe], could you please reconsider it?

> RM may allocate wrong AM Container for new attempt
> --
>
> Key: YARN-6959
> URL: https://issues.apache.org/jira/browse/YARN-6959
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, fairscheduler, scheduler
>Affects Versions: 2.7.1
>Reporter: Yuqi Wang
>Assignee: Yuqi Wang
>  Labels: patch
> Fix For: 2.7.1, 3.0.0-alpha4
>
> Attachments: YARN-6959.001.patch, YARN-6959.002.patch, 
> YARN-6959.003.patch, YARN-6959.004.patch, YARN-6959.005.patch, 
> YARN-6959-branch-2.7.001.patch, YARN-6959.yarn_nm.log.zip, 
> YARN-6959.yarn_rm.log.zip
>
>
> *Issue Summary:*
> Previous attempt ResourceRequest may be recorded into current attempt 
> ResourceRequests. These mis-recorded ResourceRequests may confuse AM 
> Container Request and Allocation for current attempt.
> *Issue Pipeline:*
> {code:java}
> // Executing precondition check for the incoming attempt id.
> ApplicationMasterService.allocate() ->
> scheduler.allocate(attemptId, ask, ...) ->
> // Previous precondition check for the attempt id may be outdated here, 
> // i.e. the currentAttempt may not be the corresponding attempt of the 
> attemptId.
> // E.g. the attempt id may correspond to the previous attempt.
> currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
> // Previous attempt ResourceRequests may be recorded into current attempt 
> ResourceRequests
> currentAttempt.updateResourceRequests(ask) ->
> // RM may allocate the wrong AM Container for the current attempt, because its 
> ResourceRequests
> // may come from the previous attempt and can be any ResourceRequests the 
> previous AM asked for,
> // and there is no matching logic between the original AM Container 
> ResourceRequest and 
> // the returned amContainerAllocation below.
> AMContainerAllocatedTransition.transition(...) ->
> amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
> {code}
> *Patch Correctness:*
> After this patch, RM will always record ResourceRequests from different 
> attempts into different SchedulerApplicationAttempt.AppSchedulingInfo objects.
> So, even if RM still records ResourceRequests from the old attempt at any 
> time, those ResourceRequests will be recorded in the old AppSchedulingInfo 
> object and will not impact the current attempt's resource requests and 
> allocation.
> *Concerns:*
> The getApplicationAttempt function in AbstractYarnScheduler is confusing; we 
> had better rename it to getCurrentApplicationAttempt, and reconsider whether 
> there are any other bugs related to getApplicationAttempt.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5978) ContainerScheduler and Container state machine changes to support ExecType update

2017-08-09 Thread kartheek muthyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kartheek muthyala updated YARN-5978:

Description: 
ContainerScheduler should support updateContainer API for
- Container Resource update
- ExecType update that can change an opportunistic to guaranteed and vice-versa


Adding a new ContainerState event, UpdateContainerStateEvent to support 
UPDATE_CONTAINER call from RM.
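As a rough illustration of the shape of this change (hypothetical names and 
fields; not the classes in the attached patch), an update event carried through 
the container state machine might look like:

{code:java}
// Hypothetical sketch; not the actual ContainerScheduler/Container classes.
public class ExecTypeUpdateSketch {

  enum ExecutionType { GUARANTEED, OPPORTUNISTIC }

  // An event the container state machine could handle for UPDATE_CONTAINER,
  // covering a resource update, an ExecType promotion/demotion, or both.
  static final class UpdateContainerStateEvent {
    final String containerId;
    final Integer newMemoryMb;        // null if resources are unchanged
    final ExecutionType newExecType;  // null if the ExecType is unchanged

    UpdateContainerStateEvent(String containerId, Integer newMemoryMb,
        ExecutionType newExecType) {
      this.containerId = containerId;
      this.newMemoryMb = newMemoryMb;
      this.newExecType = newExecType;
    }
  }

  static void handle(UpdateContainerStateEvent event) {
    if (event.newExecType == ExecutionType.GUARANTEED) {
      System.out.println(event.containerId + ": promote opportunistic -> guaranteed");
    } else if (event.newExecType == ExecutionType.OPPORTUNISTIC) {
      System.out.println(event.containerId + ": demote guaranteed -> opportunistic");
    }
    if (event.newMemoryMb != null) {
      System.out.println(event.containerId + ": resize to " + event.newMemoryMb + " MB");
    }
  }

  public static void main(String[] args) {
    handle(new UpdateContainerStateEvent("container_1_0001_01_000002", 4096,
        ExecutionType.GUARANTEED));
  }
}
{code}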


> ContainerScheduler and Container state machine changes to support ExecType 
> update
> -
>
> Key: YARN-5978
> URL: https://issues.apache.org/jira/browse/YARN-5978
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: kartheek muthyala
>
> ContainerScheduler should support updateContainer API for
> - Container Resource update
> - ExecType update that can change an opportunistic to guaranteed and 
> vice-versa
> Adding a new ContainerState event, UpdateContainerStateEvent to support 
> UPDATE_CONTAINER call from RM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-09 Thread Yuqi Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121060#comment-16121060
 ] 

Yuqi Wang edited comment on YARN-6959 at 8/10/17 5:10 AM:
--

[~jianhe]
The whole pipeline was:
Step0. AM sent heartbeats to RM.
Step1. The AM process crashed with exit code 15 without unregistering with RM.
Step2-a. The heartbeats sent in Step0 were being processed by RM between MARK1 
and MARK3.
Step2-b. NM told RM the AM container had completed.
Step3. RM switched to the new attempt.
Step4. RM recorded the requests in the heartbeats from the previous AM into the 
current attempt.

So, it is possible.



was (Author: yqwang):
[~jianhe]
The whole pipeline was:
Step0. AM sent heartbeats to RM.
Step1. The AM process crashed with exit code 15 without unregistering with RM.
Step2-a. NM told RM the AM container had completed.
Step2-b. The heartbeats sent in Step0 were being processed by RM between MARK1 
and MARK3.
Step3. RM switched to the new attempt.
Step4. RM recorded the requests in the heartbeats from the previous AM into the 
current attempt.

So, it is possible.


> RM may allocate wrong AM Container for new attempt
> --
>
> Key: YARN-6959
> URL: https://issues.apache.org/jira/browse/YARN-6959
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, fairscheduler, scheduler
>Affects Versions: 2.7.1
>Reporter: Yuqi Wang
>Assignee: Yuqi Wang
>  Labels: patch
> Fix For: 2.7.1, 3.0.0-alpha4
>
> Attachments: YARN-6959.001.patch, YARN-6959.002.patch, 
> YARN-6959.003.patch, YARN-6959.004.patch, YARN-6959.005.patch, 
> YARN-6959-branch-2.7.001.patch, YARN-6959.yarn_nm.log.zip, 
> YARN-6959.yarn_rm.log.zip
>
>
> *Issue Summary:*
> Previous attempt ResourceRequest may be recorded into current attempt 
> ResourceRequests. These mis-recorded ResourceRequests may confuse AM 
> Container Request and Allocation for current attempt.
> *Issue Pipeline:*
> {code:java}
> // Executing precondition check for the incoming attempt id.
> ApplicationMasterService.allocate() ->
> scheduler.allocate(attemptId, ask, ...) ->
> // Previous precondition check for the attempt id may be outdated here, 
> // i.e. the currentAttempt may not be the corresponding attempt of the 
> attemptId.
> // E.g. the attempt id may correspond to the previous attempt.
> currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
> // Previous attempt ResourceRequests may be recorded into current attempt 
> ResourceRequests
> currentAttempt.updateResourceRequests(ask) ->
> // RM may allocate the wrong AM Container for the current attempt, because its 
> ResourceRequests
> // may come from the previous attempt and can be any ResourceRequests the 
> previous AM asked for,
> // and there is no matching logic between the original AM Container 
> ResourceRequest and 
> // the returned amContainerAllocation below.
> AMContainerAllocatedTransition.transition(...) ->
> amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
> {code}
> *Patch Correctness:*
> After this patch, RM will always record ResourceRequests from different 
> attempts into different SchedulerApplicationAttempt.AppSchedulingInfo objects.
> So, even if RM still records ResourceRequests from the old attempt at any 
> time, those ResourceRequests will be recorded in the old AppSchedulingInfo 
> object and will not impact the current attempt's resource requests and 
> allocation.
> *Concerns:*
> The getApplicationAttempt function in AbstractYarnScheduler is confusing; we 
> had better rename it to getCurrentApplicationAttempt, and reconsider whether 
> there are any other bugs related to getApplicationAttempt.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-09 Thread Yuqi Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121060#comment-16121060
 ] 

Yuqi Wang edited comment on YARN-6959 at 8/10/17 5:08 AM:
--

[~jianhe]
The whole pipeline was:
Step0. AM sent heartbeats to RM.
Step1. The AM process crashed with exit code 15 without unregistering with RM.
Step2-a. NM told RM the AM container had completed.
Step2-b. The heartbeats sent in Step0 were being processed by RM between MARK1 
and MARK3.
Step3. RM switched to the new attempt.
Step4. RM recorded the requests in the heartbeats from the previous AM into the 
current attempt.

So, it is possible.



was (Author: yqwang):
[~jianhe]
The whole pipeline was:
Step0. AM sent heartbeats to RM.
Step1. The AM process crashed with exit code 15 without unregistering with RM.
Step2-a. NM told RM the AM container had completed.
Step2-b. The heartbeats sent in Step0 were being processed by RM between MARK1 
and MARK3.
Step3. RM switched to the new attempt.
Step4. The heartbeats recorded requests from the previous AM into the current 
attempt.

So, it is possible.


> RM may allocate wrong AM Container for new attempt
> --
>
> Key: YARN-6959
> URL: https://issues.apache.org/jira/browse/YARN-6959
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, fairscheduler, scheduler
>Affects Versions: 2.7.1
>Reporter: Yuqi Wang
>Assignee: Yuqi Wang
>  Labels: patch
> Fix For: 2.7.1, 3.0.0-alpha4
>
> Attachments: YARN-6959.001.patch, YARN-6959.002.patch, 
> YARN-6959.003.patch, YARN-6959.004.patch, YARN-6959.005.patch, 
> YARN-6959-branch-2.7.001.patch, YARN-6959.yarn_nm.log.zip, 
> YARN-6959.yarn_rm.log.zip
>
>
> *Issue Summary:*
> Previous attempt ResourceRequest may be recorded into current attempt 
> ResourceRequests. These mis-recorded ResourceRequests may confuse AM 
> Container Request and Allocation for current attempt.
> *Issue Pipeline:*
> {code:java}
> // Executing precondition check for the incoming attempt id.
> ApplicationMasterService.allocate() ->
> scheduler.allocate(attemptId, ask, ...) ->
> // Previous precondition check for the attempt id may be outdated here, 
> // i.e. the currentAttempt may not be the corresponding attempt of the 
> attemptId.
> // E.g. the attempt id may correspond to the previous attempt.
> currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
> // Previous attempt ResourceRequests may be recorded into current attempt 
> ResourceRequests
> currentAttempt.updateResourceRequests(ask) ->
> // RM may allocate the wrong AM Container for the current attempt, because its 
> ResourceRequests
> // may come from the previous attempt and can be any ResourceRequests the 
> previous AM asked for,
> // and there is no matching logic between the original AM Container 
> ResourceRequest and 
> // the returned amContainerAllocation below.
> AMContainerAllocatedTransition.transition(...) ->
> amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
> {code}
> *Patch Correctness:*
> After this patch, RM will always record ResourceRequests from different 
> attempts into different SchedulerApplicationAttempt.AppSchedulingInfo objects.
> So, even if RM still records ResourceRequests from the old attempt at any 
> time, those ResourceRequests will be recorded in the old AppSchedulingInfo 
> object and will not impact the current attempt's resource requests and 
> allocation.
> *Concerns:*
> The getApplicationAttempt function in AbstractYarnScheduler is confusing; we 
> had better rename it to getCurrentApplicationAttempt, and reconsider whether 
> there are any other bugs related to getApplicationAttempt.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-09 Thread Yuqi Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121060#comment-16121060
 ] 

Yuqi Wang commented on YARN-6959:
-

[~jianhe]
The whole pipeline was:
Step0. AM sent heartbeats to RM.
Step1. The AM process crashed with exit code 15 without unregistering with RM.
Step2-a. NM told RM the AM container had completed.
Step2-b. The heartbeats sent in Step0 were being processed by RM between MARK1 
and MARK3.
Step3. RM switched to the new attempt.
Step4. The heartbeats recorded requests from the previous AM into the current 
attempt.

So, it is possible.


> RM may allocate wrong AM Container for new attempt
> --
>
> Key: YARN-6959
> URL: https://issues.apache.org/jira/browse/YARN-6959
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, fairscheduler, scheduler
>Affects Versions: 2.7.1
>Reporter: Yuqi Wang
>Assignee: Yuqi Wang
>  Labels: patch
> Fix For: 2.7.1, 3.0.0-alpha4
>
> Attachments: YARN-6959.001.patch, YARN-6959.002.patch, 
> YARN-6959.003.patch, YARN-6959.004.patch, YARN-6959.005.patch, 
> YARN-6959-branch-2.7.001.patch, YARN-6959.yarn_nm.log.zip, 
> YARN-6959.yarn_rm.log.zip
>
>
> *Issue Summary:*
> Previous attempt ResourceRequest may be recorded into current attempt 
> ResourceRequests. These mis-recorded ResourceRequests may confuse AM 
> Container Request and Allocation for current attempt.
> *Issue Pipeline:*
> {code:java}
> // Executing precondition check for the incoming attempt id.
> ApplicationMasterService.allocate() ->
> scheduler.allocate(attemptId, ask, ...) ->
> // Previous precondition check for the attempt id may be outdated here, 
> // i.e. the currentAttempt may not be the corresponding attempt of the 
> attemptId.
> // E.g. the attempt id may correspond to the previous attempt.
> currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
> // Previous attempt ResourceRequests may be recorded into current attempt 
> ResourceRequests
> currentAttempt.updateResourceRequests(ask) ->
> // RM may allocate the wrong AM Container for the current attempt, because its 
> ResourceRequests
> // may come from the previous attempt and can be any ResourceRequests the 
> previous AM asked for,
> // and there is no matching logic between the original AM Container 
> ResourceRequest and 
> // the returned amContainerAllocation below.
> AMContainerAllocatedTransition.transition(...) ->
> amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
> {code}
> *Patch Correctness:*
> After this patch, RM will always record ResourceRequests from different 
> attempts into different SchedulerApplicationAttempt.AppSchedulingInfo objects.
> So, even if RM still records ResourceRequests from the old attempt at any 
> time, those ResourceRequests will be recorded in the old AppSchedulingInfo 
> object and will not impact the current attempt's resource requests and 
> allocation.
> *Concerns:*
> The getApplicationAttempt function in AbstractYarnScheduler is confusing; we 
> had better rename it to getCurrentApplicationAttempt, and reconsider whether 
> there are any other bugs related to getApplicationAttempt.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6969) Remove method getMinShareMemoryFraction and getPendingContainers in class FairSchedulerQueueInfo

2017-08-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121054#comment-16121054
 ] 

ASF GitHub Bot commented on YARN-6969:
--

GitHub user LarryLo opened a pull request:

https://github.com/apache/hadoop/pull/260

YARN-6969. Remove method getMinShareMemoryFraction and getPendingCont…

…ainers in class FairSchedulerQueueInfo

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/LarryLo/hadoop trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/260.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #260


commit c01804c096265d2f9922ec3a159a08718c91214e
Author: Larry 
Date:   2017-08-10T04:53:28Z

YARN-6969. Remove method getMinShareMemoryFraction and getPendingContainers 
in class FairSchedulerQueueInfo




> Remove method getMinShareMemoryFraction and getPendingContainers in class 
> FairSchedulerQueueInfo
> 
>
> Key: YARN-6969
> URL: https://issues.apache.org/jira/browse/YARN-6969
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: fairscheduler
>Reporter: Yufei Gu
>Priority: Trivial
>  Labels: newbie++
>
> They are not used anymore.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121050#comment-16121050
 ] 

Jian He commented on YARN-6959:
---

OK, the first AM container process had exited; then it's impossible for it to 
call allocate again. I guess the root cause is different.

> RM may allocate wrong AM Container for new attempt
> --
>
> Key: YARN-6959
> URL: https://issues.apache.org/jira/browse/YARN-6959
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, fairscheduler, scheduler
>Affects Versions: 2.7.1
>Reporter: Yuqi Wang
>Assignee: Yuqi Wang
>  Labels: patch
> Fix For: 2.7.1, 3.0.0-alpha4
>
> Attachments: YARN-6959.001.patch, YARN-6959.002.patch, 
> YARN-6959.003.patch, YARN-6959.004.patch, YARN-6959.005.patch, 
> YARN-6959-branch-2.7.001.patch, YARN-6959.yarn_nm.log.zip, 
> YARN-6959.yarn_rm.log.zip
>
>
> *Issue Summary:*
> Previous attempt ResourceRequest may be recorded into current attempt 
> ResourceRequests. These mis-recorded ResourceRequests may confuse AM 
> Container Request and Allocation for current attempt.
> *Issue Pipeline:*
> {code:java}
> // Executing precondition check for the incoming attempt id.
> ApplicationMasterService.allocate() ->
> scheduler.allocate(attemptId, ask, ...) ->
> // Previous precondition check for the attempt id may be outdated here, 
> // i.e. the currentAttempt may not be the corresponding attempt of the 
> attemptId.
> // E.g. the attempt id may correspond to the previous attempt.
> currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
> // Previous attempt ResourceRequests may be recorded into current attempt 
> ResourceRequests
> currentAttempt.updateResourceRequests(ask) ->
> // RM may allocate the wrong AM Container for the current attempt, because its 
> ResourceRequests
> // may come from the previous attempt and can be any ResourceRequests the 
> previous AM asked for,
> // and there is no matching logic between the original AM Container 
> ResourceRequest and 
> // the returned amContainerAllocation below.
> AMContainerAllocatedTransition.transition(...) ->
> amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
> {code}
> *Patch Correctness:*
> After this patch, RM will always record ResourceRequests from different 
> attempts into different SchedulerApplicationAttempt.AppSchedulingInfo objects.
> So, even if RM still records ResourceRequests from the old attempt at any 
> time, those ResourceRequests will be recorded in the old AppSchedulingInfo 
> object and will not impact the current attempt's resource requests and 
> allocation.
> *Concerns:*
> The getApplicationAttempt function in AbstractYarnScheduler is confusing; we 
> had better rename it to getCurrentApplicationAttempt, and reconsider whether 
> there are any other bugs related to getApplicationAttempt.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6515) Fix warnings from Spotbugs in hadoop-yarn-server-nodemanager

2017-08-09 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121046#comment-16121046
 ] 

Naganarasimha G R commented on YARN-6515:
-

Thanks for the review and commit, [~ajisakaa], and for the reviews from 
[~cheersyang], [~miklos.szeg...@cloudera.com], [~shaneku...@gmail.com].

> Fix warnings from Spotbugs in hadoop-yarn-server-nodemanager
> 
>
> Key: YARN-6515
> URL: https://issues.apache.org/jira/browse/YARN-6515
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6515.001.patch, YARN-6515.002.patch
>
>
> 5 findbugs issues were reported in the NM project as part of the YARN-4166 [build| 
> https://builds.apache.org/job/PreCommit-YARN-Build/15694/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html]
> Issue 1: 
>   
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
>  is a mutable collection which should be package protected
> Bug type MS_MUTABLE_COLLECTION_PKGPROTECT (click for details) 
> In class 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics
> Field 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
> At ContainerMetrics.java:\[line 134\]
> Issue 2:
>   
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
>  makes inefficient use of keySet iterator instead of entrySet iterator
> Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
> In class 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer
> In method 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
> Field 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.pendingResources
> At ContainerLocalizer.java:\[line 334\]
> Issue 3: 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
>  makes inefficient use of keySet iterator instead of entrySet iterator
> Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
> In class org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl
> In method 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
> Field 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.recentlyStoppedContainers
> At NodeStatusUpdaterImpl.java:\[line 721\]
> Issue 4: 
> Hard coded reference to an absolute pathname in 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
> Bug type DMI_HARDCODED_ABSOLUTE_FILENAME (click for details) 
> In class 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime
> In method 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
> File name /sys/fs/cgroup
> At DockerLinuxContainerRuntime.java:\[line 455\]
> Useless object stored in variable removedNullContainers of method 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
> Bug type UC_USELESS_OBJECT (click for details) 
> In class org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl
> In method 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
> Value removedNullContainers
> Type java.util.HashSet
> At NodeStatusUpdaterImpl.java:\[line 644\]
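For anyone unfamiliar with the WMI_WRONG_MAP_ITERATOR warnings above, the 
inefficiency and its fix look roughly like this (a generic sketch, not the 
NodeManager code):

{code:java}
// Generic sketch of the WMI_WRONG_MAP_ITERATOR fix; not the NodeManager code.
import java.util.HashMap;
import java.util.Map;

public class MapIterationSketch {
  public static void main(String[] args) {
    Map<String, Long> pending = new HashMap<>();
    pending.put("resource-a", 1L);
    pending.put("resource-b", 2L);

    // Flagged pattern: every get() re-hashes the key to look up the value.
    for (String key : pending.keySet()) {
      System.out.println(key + " -> " + pending.get(key));
    }

    // Preferred pattern: entrySet() hands back key and value together.
    for (Map.Entry<String, Long> entry : pending.entrySet()) {
      System.out.println(entry.getKey() + " -> " + entry.getValue());
    }
  }
}
{code}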



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-09 Thread Yuqi Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121030#comment-16121030
 ] 

Yuqi Wang commented on YARN-6959:
-


{code:java}
2017-07-31 21:29:34,047 INFO [Container Monitor] 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
 Memory usage of ProcessTree container_e71_1500967702061_2512_01_01 for 
container-id container_e71_1500967702061_2512_01_01: 7.1 GB of 20 GB 
physical memory used; 8.5 GB of 30 GB virtual memory used
2017-07-31 21:29:37,423 INFO [Container Monitor] 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
 Memory usage of ProcessTree container_e71_1500967702061_2512_01_01 for 
container-id container_e71_1500967702061_2512_01_01: 7.1 GB of 20 GB 
physical memory used; 8.5 GB of 30 GB virtual memory used
2017-07-31 21:29:38,239 WARN [ContainersLauncher #60] 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code 
from container container_e71_1500967702061_2512_01_01 is : 15
2017-07-31 21:29:38,239 WARN [ContainersLauncher #60] 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exception 
from container-launch with container ID: 
container_e71_1500967702061_2512_01_01 and exit code: 15
ExitCodeException exitCode=15: 
at org.apache.hadoop.util.Shell.runCommand(Shell.java:579)
at org.apache.hadoop.util.Shell.run(Shell.java:490)
at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:756)
at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:329)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:86)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2017-07-31 21:29:38,239 INFO [ContainersLauncher #60] 
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: Exception from 
container-launch.
Container id: container_e71_1500967702061_2512_01_01
Exit code: 15
Stack trace: ExitCodeException exitCode=15: 
2017-07-31 21:29:38,240 INFO [ContainersLauncher #60] 
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:  at 
org.apache.hadoop.util.Shell.runCommand(Shell.java:579)
2017-07-31 21:29:38,240 INFO [ContainersLauncher #60] 
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:  at 
org.apache.hadoop.util.Shell.run(Shell.java:490)
2017-07-31 21:29:38,240 INFO [ContainersLauncher #60] 
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:  at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:756)
2017-07-31 21:29:38,240 INFO [ContainersLauncher #60] 
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:  at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
2017-07-31 21:29:38,240 INFO [ContainersLauncher #60] 
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:  at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:329)
2017-07-31 21:29:38,240 INFO [ContainersLauncher #60] 
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:  at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:86)
2017-07-31 21:29:38,240 INFO [ContainersLauncher #60] 
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:  at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)
2017-07-31 21:29:38,240 INFO [ContainersLauncher #60] 
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2017-07-31 21:29:38,240 INFO [ContainersLauncher #60] 
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:  at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2017-07-31 21:29:38,240 INFO [ContainersLauncher #60] 
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:  at 
java.lang.Thread.run(Thread.java:745)
2017-07-31 21:29:38,240 INFO [ContainersLauncher #60] 
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor: 

2017-07-31 21:29:38,241 WARN [ContainersLauncher #60] 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch:
 Container exited with a non-zero exit code 15
2017-07-31 21:29:38,241 INFO [AsyncDispatcher event handler] 

[jira] [Comment Edited] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-09 Thread Yuqi Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121029#comment-16121029
 ] 

Yuqi Wang edited comment on YARN-6959 at 8/10/17 4:12 AM:
--

Attach NM log for this bug.

{code:java}
YARN-6959.yarn_nm.log.zip
{code}



was (Author: yqwang):
Add NM log for this issue.

> RM may allocate wrong AM Container for new attempt
> --
>
> Key: YARN-6959
> URL: https://issues.apache.org/jira/browse/YARN-6959
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, fairscheduler, scheduler
>Affects Versions: 2.7.1
>Reporter: Yuqi Wang
>Assignee: Yuqi Wang
>  Labels: patch
> Fix For: 2.7.1, 3.0.0-alpha4
>
> Attachments: YARN-6959.001.patch, YARN-6959.002.patch, 
> YARN-6959.003.patch, YARN-6959.004.patch, YARN-6959.005.patch, 
> YARN-6959-branch-2.7.001.patch, YARN-6959.yarn_nm.log.zip, 
> YARN-6959.yarn_rm.log.zip
>
>
> *Issue Summary:*
> Previous attempt ResourceRequest may be recorded into current attempt 
> ResourceRequests. These mis-recorded ResourceRequests may confuse AM 
> Container Request and Allocation for current attempt.
> *Issue Pipeline:*
> {code:java}
> // Executing precondition check for the incoming attempt id.
> ApplicationMasterService.allocate() ->
> scheduler.allocate(attemptId, ask, ...) ->
> // Previous precondition check for the attempt id may be outdated here, 
> // i.e. the currentAttempt may not be the corresponding attempt of the 
> attemptId.
> // E.g. the attempt id may correspond to the previous attempt.
> currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
> // Previous attempt ResourceRequests may be recorded into current attempt 
> ResourceRequests
> currentAttempt.updateResourceRequests(ask) ->
> // RM may allocate the wrong AM Container for the current attempt, because its 
> ResourceRequests
> // may come from the previous attempt and can be any ResourceRequests the 
> previous AM asked for,
> // and there is no matching logic between the original AM Container 
> ResourceRequest and 
> // the returned amContainerAllocation below.
> AMContainerAllocatedTransition.transition(...) ->
> amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
> {code}
> *Patch Correctness:*
> After this patch, RM will always record ResourceRequests from different 
> attempts into different SchedulerApplicationAttempt.AppSchedulingInfo objects.
> So, even if RM still records ResourceRequests from the old attempt at any 
> time, those ResourceRequests will be recorded in the old AppSchedulingInfo 
> object and will not impact the current attempt's resource requests and 
> allocation.
> *Concerns:*
> The getApplicationAttempt function in AbstractYarnScheduler is confusing; we 
> had better rename it to getCurrentApplicationAttempt, and reconsider whether 
> there are any other bugs related to getApplicationAttempt.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-09 Thread Yuqi Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuqi Wang updated YARN-6959:

Attachment: YARN-6959.yarn_nm.log.zip

Add NM log for this issue.

> RM may allocate wrong AM Container for new attempt
> --
>
> Key: YARN-6959
> URL: https://issues.apache.org/jira/browse/YARN-6959
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, fairscheduler, scheduler
>Affects Versions: 2.7.1
>Reporter: Yuqi Wang
>Assignee: Yuqi Wang
>  Labels: patch
> Fix For: 2.7.1, 3.0.0-alpha4
>
> Attachments: YARN-6959.001.patch, YARN-6959.002.patch, 
> YARN-6959.003.patch, YARN-6959.004.patch, YARN-6959.005.patch, 
> YARN-6959-branch-2.7.001.patch, YARN-6959.yarn_nm.log.zip, 
> YARN-6959.yarn_rm.log.zip
>
>
> *Issue Summary:*
> Previous attempt ResourceRequest may be recorded into current attempt 
> ResourceRequests. These mis-recorded ResourceRequests may confuse AM 
> Container Request and Allocation for current attempt.
> *Issue Pipeline:*
> {code:java}
> // Executing precondition check for the incoming attempt id.
> ApplicationMasterService.allocate() ->
> scheduler.allocate(attemptId, ask, ...) ->
> // Previous precondition check for the attempt id may be outdated here, 
> // i.e. the currentAttempt may not be the corresponding attempt of the 
> attemptId.
> // Such as the attempt id is corresponding to the previous attempt.
> currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
> // Previous attempt ResourceRequest may be recorded into current attempt 
> ResourceRequests
> currentAttempt.updateResourceRequests(ask) ->
> // RM may allocate wrong AM Container for the current attempt, because its 
> ResourceRequests
> // may come from previous attempt which can be any ResourceRequests previous 
> AM asked
> // and there is not matching logic for the original AM Container 
> ResourceRequest and 
> // the returned amContainerAllocation below.
> AMContainerAllocatedTransition.transition(...) ->
> amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
> {code}
> *Patch Correctness:*
> After this patch, RM will record ResourceRequests from different attempts into 
> different SchedulerApplicationAttempt.AppSchedulingInfo objects.
> So even if RM still records ResourceRequests from an old attempt at any time, 
> they will land in the old attempt's AppSchedulingInfo object and will not 
> impact the current attempt's resource requests and allocation.
> *Concerns:*
> The getApplicationAttempt function in AbstractYarnScheduler is confusing; we 
> should rename it to getCurrentApplicationAttempt and reconsider whether there 
> are any other bugs related to getApplicationAttempt.
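
For illustration, here is a minimal, self-contained sketch of the kind of guard 
the concern above points at: checking the caller's attempt id against the 
application's current attempt before recording its ask. All types below are 
stand-ins, not the actual YARN scheduler classes, and this is not the approach 
taken by the attached patches (which keep per-attempt AppSchedulingInfo objects 
instead).

{code:java}
// Illustrative sketch only (stand-in types): drop an allocate() ask whose
// attempt id no longer matches the application's current attempt.
import java.util.Arrays;
import java.util.List;

public class AllocateGuardSketch {

  /** Stand-in for ApplicationAttemptId: application id plus attempt number. */
  static final class AttemptId {
    final String appId;
    final int attemptNo;
    AttemptId(String appId, int attemptNo) {
      this.appId = appId;
      this.attemptNo = attemptNo;
    }
    @Override public boolean equals(Object o) {
      if (!(o instanceof AttemptId)) {
        return false;
      }
      AttemptId other = (AttemptId) o;
      return appId.equals(other.appId) && attemptNo == other.attemptNo;
    }
    @Override public int hashCode() {
      return 31 * appId.hashCode() + attemptNo;
    }
    @Override public String toString() {
      return appId + "_attempt_" + attemptNo;
    }
  }

  /** Stand-in for per-application scheduler state; it only tracks the current attempt. */
  static final class AppState {
    volatile AttemptId currentAttempt;
    AppState(AttemptId current) {
      this.currentAttempt = current;
    }
  }

  /** Records the ask only if the caller is still the current attempt; stale callers are ignored. */
  static boolean recordAskIfCurrent(AppState app, AttemptId caller, List<String> ask) {
    if (!app.currentAttempt.equals(caller)) {
      System.out.println("Ignoring ask from stale attempt " + caller);
      return false;
    }
    System.out.println("Recording " + ask.size() + " request(s) for " + caller);
    return true;
  }

  public static void main(String[] args) {
    AppState app = new AppState(new AttemptId("application_1", 2)); // attempt 2 is current
    recordAskIfCurrent(app, new AttemptId("application_1", 1), Arrays.asList("1 x <4GB, 1 vcore>"));
    recordAskIfCurrent(app, new AttemptId("application_1", 2), Arrays.asList("1 x <4GB, 1 vcore>"));
  }
}
{code}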



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6872) Ensure apps could run given NodeLabels are disabled post RM switchover/restart

2017-08-09 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121027#comment-16121027
 ] 

Junping Du commented on YARN-6872:
--

I have backported the commit to branch-2.8.2.

> Ensure apps could run given NodeLabels are disabled post RM switchover/restart
> --
>
> Key: YARN-6872
> URL: https://issues.apache.org/jira/browse/YARN-6872
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.2
>
> Attachments: YARN-6872.001.patch, YARN-6872.002.patch, 
> YARN-6872.003.patch, YARN-6872-addendum.001.patch
>
>
> Post YARN-6031, a few apps could fail during recovery if they had 
> some label requirements for the AM and labels were disabled post RM 
> restart/switchover. As discussed in YARN-6031, it is better to keep running such 
> apps, as they may be long-running apps as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6259) Support pagination and optimize data transfer with zero-copy approach for containerlogs REST API in NMWebServices

2017-08-09 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121019#comment-16121019
 ] 

Junping Du commented on YARN-6259:
--

Thanks for the patch, [~Tao Yang]! It looks like we have a performance 
improvement here mixed with a new pagination requirement. Can we split the 
patch into two different parts? I believe there is no argument on the performance 
gains, and we can have a separate discussion on the pagination requirement. Make 
sense?

> Support pagination and optimize data transfer with zero-copy approach for 
> containerlogs REST API in NMWebServices
> -
>
> Key: YARN-6259
> URL: https://issues.apache.org/jira/browse/YARN-6259
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.8.1
>Reporter: Tao Yang
>Assignee: Tao Yang
> Attachments: YARN-6259.001.patch
>
>
> Currently the containerlogs REST API in NMWebServices reads and sends the 
> entire content of container logs. Most container logs are large, so it is 
> useful to support pagination.
> * Add pagesize and pageindex parameters for the containerlogs REST API
> {code}
> URL: http:///ws/v1/node/containerlogs//
> QueryParams:
>   pagesize - max bytes of one page, default 1MB
>   pageindex - index of the required page, default 0, can be negative (set -1 to 
> get the last page's content)
> {code}
> * Add a containerlogs-info REST API since sometimes we need to know the 
> totalSize/pageSize/pageCount info of a log 
> {code}
> URL: 
> http:///ws/v1/node/containerlogs-info//
> QueryParams:
>   pagesize - max bytes of one page, default 1MB
> Response example:
>   {"logInfo":{"totalSize":2497280,"pageSize":1048576,"pageCount":3}}
> {code}
> Moreover, the data transfer pipeline (disk --> read buffer --> NM buffer --> 
> socket buffer) can be optimized to a shorter pipeline (disk --> read buffer --> 
> socket buffer) with a zero-copy approach.
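
For readers unfamiliar with the zero-copy idea mentioned above, here is a 
minimal sketch using plain java.nio (FileChannel#transferTo). It assumes the log 
is a local file streamed to a servlet OutputStream; it is not the code in the 
attached patch.

{code:java}
// Minimal zero-copy sketch (not the patch): stream one "page" of a log file
// straight from the file channel to the response stream, avoiding an extra
// user-space copy into an intermediate NM buffer.
import java.io.IOException;
import java.io.OutputStream;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class LogPageStreamer {

  /**
   * Copies at most pageSize bytes of the file, starting at pageIndex * pageSize,
   * into the given output stream using FileChannel#transferTo.
   */
  public static long streamPage(Path logFile, int pageIndex, long pageSize,
      OutputStream out) throws IOException {
    try (FileChannel in = FileChannel.open(logFile, StandardOpenOption.READ)) {
      long position = pageIndex * pageSize;
      long remaining = Math.max(0, Math.min(pageSize, in.size() - position));
      WritableByteChannel target = Channels.newChannel(out);
      long written = 0;
      while (written < remaining) {
        long n = in.transferTo(position + written, remaining - written, target);
        if (n <= 0) {
          break; // nothing more could be transferred
        }
        written += n;
      }
      return written;
    }
  }

  public static void main(String[] args) throws IOException {
    // Example: dump the first 1 MB page of a hypothetical local log file to stdout.
    streamPage(Paths.get("/tmp/container.log"), 0, 1024 * 1024, System.out);
  }
}
{code}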



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5536) Multiple format support (JSON, etc.) for exclude node file in NM graceful decommission with timeout

2017-08-09 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated YARN-5536:
--
Priority: Critical  (was: Major)

Moved the priority to Critical based on the following discussion with [~djp]. 
YARN-4676 added the timeout config support using the existing host file format. 
This isn't desirable given the existing format isn't suitable for arbitrary 
properties. So before we release 2.9, let's remove timeout config support from 
the existing format; there won't be any backward compatibility issue given 
YARN-4676 hasn't been released yet. That means if people want to use graceful 
decommission with timeout, they have to use the new JSON format, which is an 
acceptable requirement.


> Multiple format support (JSON, etc.) for exclude node file in NM graceful 
> decommission with timeout
> ---
>
> Key: YARN-5536
> URL: https://issues.apache.org/jira/browse/YARN-5536
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful
>Reporter: Junping Du
>Priority: Critical
>
> Per discussion in YARN-4676, we agree that multiple formats (other than XML) 
> should be supported to decommission nodes with timeout values.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6471) Support to add min/max resource configuration for a queue

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120962#comment-16120962
 ] 

Hadoop QA commented on YARN-6471:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 15 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 56s{color} | {color:orange} root: The patch generated 64 new + 1980 
unchanged - 34 fixed = 2044 total (was 2014) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 14s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
34s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 47m 52s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}180m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.sftp.TestSFTPFileSystem |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
|   | org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6471 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881105/YARN-6471.008.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0042fef150c6 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ac7d060 |
| Default Java | 1.8.0_131 |
| findbugs | 

[jira] [Comment Edited] (YARN-6881) LOG is unused in AllocationConfiguration

2017-08-09 Thread weiyuan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120929#comment-16120929
 ] 

weiyuan edited comment on YARN-6881 at 8/10/17 1:50 AM:


[~templedf] The patch is attached and submitted. Please review it. I think the 
unit test failures are unrelated to this change. Thank you!


was (Author: v123582):
[~templedf] The patch is attached and submitted. Please review it. I think the 
unit test failures are unrelated to this change.
Thank you!

> LOG is unused in AllocationConfiguration
> 
>
> Key: YARN-6881
> URL: https://issues.apache.org/jira/browse/YARN-6881
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: weiyuan
>  Labels: newbie
> Attachments: YARN-6881.001.patch
>
>
> The variable can be removed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6881) LOG is unused in AllocationConfiguration

2017-08-09 Thread weiyuan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120929#comment-16120929
 ] 

weiyuan edited comment on YARN-6881 at 8/10/17 1:49 AM:


[~templedf] The patch is attached and submitted. Please review it. I think the 
unit test failures are unrelated to this change.
Thank you!


was (Author: v123582):
[~templedf] The patch is attached and submitted. Please review it. I think the 
unit test failures are unrelated to this change.


> LOG is unused in AllocationConfiguration
> 
>
> Key: YARN-6881
> URL: https://issues.apache.org/jira/browse/YARN-6881
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: weiyuan
>  Labels: newbie
> Attachments: YARN-6881.001.patch
>
>
> The variable can be removed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6881) LOG is unused in AllocationConfiguration

2017-08-09 Thread weiyuan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120929#comment-16120929
 ] 

weiyuan commented on YARN-6881:
---

[~templedf] The patch is attached and submitted. Please review it. I think the 
unit test failures are unrelated to this change.


> LOG is unused in AllocationConfiguration
> 
>
> Key: YARN-6881
> URL: https://issues.apache.org/jira/browse/YARN-6881
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: weiyuan
>  Labels: newbie
> Attachments: YARN-6881.001.patch
>
>
> The variable can be removed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6903) Yarn-native-service framework core rewrite

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120879#comment-16120879
 ] 

Hadoop QA commented on YARN-6903:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 61 new or modified test 
files. {color} |
|| || || || {color:brown} yarn-native-services Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
12s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
23s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
35s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
57s{color} | {color:green} yarn-native-services passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
42s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
11s{color} | {color:green} yarn-native-services passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  3m  
6s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-yarn-slider in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-yarn-slider-core in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-yarn-services-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  3m 
10s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  3m 10s{color} | 
{color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  3m 10s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 23s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 518 new + 1523 unchanged - 403 fixed = 2041 total (was 1926) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  3m 
12s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
37s{color} | {color:red} hadoop-yarn-slider in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-yarn-slider-core in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-yarn-services-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
24s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
11s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 26 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
2s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  

[jira] [Updated] (YARN-6413) Decouple Yarn Registry API from ZK

2017-08-09 Thread Ellen Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ellen Hui updated YARN-6413:

Attachment: 0004-Registry-API-api-stubbed.patch

Hi [~jianhe], I've added some stubs, and a ServiceRecordKey implementation that 
just wraps the ZK path for the ZK implementation to use. The client can use it 
however they want, though I think it should be standardized one way or another.

My thought is this: the ZK implementation will implement RegistryOperations, 
RegistryStoreProtocol, and RegistryListenerProtocol. The RegistryOperations 
methods will be stubbed out, so anything that relies on them will still 
compile, and yarn-native-services will compile. After yarn-native-services 
moves over to actually use the new APIs, RegistryOperations can then be 
fully deprecated.

I want to confirm with you that the methods in RegistryStoreProtocol and 
RegistryListenerProtocol, if I implement them and remove RegistryOperations 
entirely, will be ok for yarn-native-services. 
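
To make the shape of the proposal above easier to follow, here is a rough, 
self-contained sketch. Except for the RegistryOperations name, all interfaces 
and method signatures below are illustrative assumptions based on this 
discussion, not the attached patch.

{code:java}
// Illustrative sketch only: a ZK-backed registry that implements both the
// legacy, path-oriented operations (stubbed) and a new store-oriented protocol,
// so existing callers keep compiling while new callers move to register/resolve.
import java.io.IOException;

interface ServiceRecordKey {}          // stand-in; the patch wraps the ZK path
class ServiceRecord {}                 // stand-in for the existing record type

interface RegistryStoreProtocol {      // assumed new API (register/delete/resolve)
  void register(ServiceRecordKey key, ServiceRecord record) throws IOException;
  ServiceRecord resolve(ServiceRecordKey key) throws IOException;
  void delete(ServiceRecordKey key) throws IOException;
}

interface RegistryOperations {         // legacy ZK-path-oriented API (simplified stand-in)
  void mknode(String path, boolean createParents) throws IOException;
}

class ZkServiceRecordKey implements ServiceRecordKey {
  final String zkPath;
  ZkServiceRecordKey(String zkPath) { this.zkPath = zkPath; }
}

class ZkRegistry implements RegistryStoreProtocol, RegistryOperations {
  @Override
  public void register(ServiceRecordKey key, ServiceRecord record) {
    // would write the record under ((ZkServiceRecordKey) key).zkPath
  }
  @Override
  public ServiceRecord resolve(ServiceRecordKey key) {
    return new ServiceRecord(); // would read the record back from ZK
  }
  @Override
  public void delete(ServiceRecordKey key) {
    // would delete the znode
  }
  @Override
  public void mknode(String path, boolean createParents) {
    // stubbed: kept only so existing RegistryOperations callers still compile
    throw new UnsupportedOperationException("use RegistryStoreProtocol instead");
  }
}
{code}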

> Decouple Yarn Registry API from ZK
> --
>
> Key: YARN-6413
> URL: https://issues.apache.org/jira/browse/YARN-6413
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: amrmproxy, api, resourcemanager
>Reporter: Ellen Hui
>Assignee: Ellen Hui
> Attachments: 0001-Registry-API-v2.patch, 0002-Registry-API-v2.patch, 
> 0003-Registry-API-api-only.patch, 0004-Registry-API-api-stubbed.patch
>
>
> Right now the Yarn Registry API (defined in the RegistryOperations interface) 
> is a very thin layer over Zookeeper. This jira proposes changing the 
> interface to abstract away the implementation details so that we can write a 
> FS-based implementation of the registry service, which will be used to 
> support AMRMProxy HA.
> The new interface will use register/delete/resolve APIs instead of 
> Zookeeper-specific operations like mknode. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6896) Federation: routing REST invocations transparently to multiple RMs (part 1 - basic execution)

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120850#comment-16120850
 ] 

Hadoop QA commented on YARN-6896:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
4s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6896 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881099/YARN-6896.v2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8a343509e6bf 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ac7d060 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16813/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16813/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Federation: routing REST invocations transparently to multiple RMs (part 1 - 
> basic execution)
> 

[jira] [Commented] (YARN-6852) [YARN-6223] Native code changes to support isolate GPU devices by using CGroups

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120841#comment-16120841
 ] 

Hadoop QA commented on YARN-6852:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m 30s{color} | 
{color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 30s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 29s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6852 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881065/YARN-6852.005.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 67fe8215c245 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ac7d060 |
| Default Java | 1.8.0_131 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/16814/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| cc | 
https://builds.apache.org/job/PreCommit-YARN-Build/16814/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/16814/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16814/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16814/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16814/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [YARN-6223] Native code changes to support isolate GPU devices by using 
> CGroups
> ---
>
> Key: YARN-6852
> URL: https://issues.apache.org/jira/browse/YARN-6852
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6852.001.patch, 

[jira] [Updated] (YARN-6471) Support to add min/max resource configuration for a queue

2017-08-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6471:
-
Attachment: YARN-6471.008.patch

Thanks [~sunil.gov...@gmail.com], the latest patch LGTM. I will commit to the 
YARN-5881 branch if nobody objects. 

Attached ver.008 patch, rebased to the latest trunk. And Sunil, could you provide 
sample configs so people can try it if they are interested?

> Support to add min/max resource configuration for a queue
> -
>
> Key: YARN-6471
> URL: https://issues.apache.org/jira/browse/YARN-6471
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-6471.001.patch, YARN-6471.002.patch, 
> YARN-6471.003.patch, YARN-6471.004.patch, YARN-6471.005.patch, 
> YARN-6471.006.patch, YARN-6471.007.patch, YARN-6471.008.patch, 
> YARN-6471-YARN-5881.001.patch, YARN-6471-YARN-5881.002.patch, 
> YARN-6471-YARN-5881.003.patch
>
>
> This jira will track the new configurations which are needed to configure min 
> resource and max resource of various resource types in a queue.
> For eg: 
> {noformat}
> yarn.scheduler.capacity.root.default.memory.min-resource
> yarn.scheduler.capacity.root.default.memory.max-resource
> yarn.scheduler.capacity.root.default.vcores.min-resource
> yarn.scheduler.capacity.root.default.vcores.max-resource
> {noformat}
> Uploading a patch soon



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6896) Federation: routing REST invocations transparently to multiple RMs (part 1 - basic execution)

2017-08-09 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120828#comment-16120828
 ] 

Giovanni Matteo Fumarola commented on YARN-6896:


Thanks [~curino] for the feedback. I fixed the Yetus warnings.
1. I reused the same configuration to get the same behavior regardless of whether 
the client is using RPC or REST. I can rename it or create a new one.
2. I updated that code. I created 2 new methods, create and get, and updated 
the current one to getOrCreate.
3. Updated by using the same blacklisting style as submitApp.
4. I kept the description as a single javadoc block to avoid being schematic.
5. Good point. If the application failed to be submitted after n retries, the 
tuple remains in the FederationStateStore. If the Client wants to resubmit the 
same application as part of its own retry logic, the Router will start from the 
currently saved SubCluster in the StateStore. It could happen that the last 
SubCluster was down during the submission, due to failover.
However, that tuple in the StateStore is harmless. In the future, the application 
cleaner service we are implementing as part of YARN-6648 will clean up these 
tuples.
6. Good catch. Also added unit tests for that.
7. I implemented only the methods for the basic execution of an application - 
CreateNewApplication, SubmitApplication, GetAppReport, KillApplication. I 
updated the title to avoid confusion. We will add the other methods as part of 
YARN-6740.
8. I checked the code and we can save the part where we create the mock 
Federation subcluster in 2 test classes - a few lines. However, moving that part 
will not reduce the code size since we would be creating a new Util for these few 
lines. If the number of test classes increases, we will move that piece of 
code and reuse it.
9. Removed one. I kept one to show that FederationInterceptorREST is 
independent of any "middle" interceptor.
10. Good point. I made the {{DefaultRequestInterceptorREST}} inside 
{{FederationInterceptorREST}} configurable. In this way, devs can add their own 
interceptors in the future (see the sketch below).
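
For point 10, the usual Hadoop pattern for a pluggable class looks roughly like 
the sketch below. The Configuration#getClass and ReflectionUtils#newInstance 
calls are real Hadoop APIs; the config key and the interceptor interface are 
illustrative assumptions, not necessarily what the patch uses.

{code:java}
// Illustrative sketch of the reflection-based "pluggable class" pattern used
// throughout Hadoop, applied to the router's inner REST interceptor.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ReflectionUtils;

public class InterceptorLoaderSketch {

  /** Stand-in for the REST interceptor contract in the router. */
  public interface RestInterceptor {
    void init(Configuration conf);
  }

  /** Stand-in default implementation that forwards to a single RM. */
  public static class DefaultRestInterceptor implements RestInterceptor {
    @Override
    public void init(Configuration conf) { /* set up HTTP client, etc. */ }
  }

  // Hypothetical config key; not an actual yarn-site.xml property name.
  public static final String INTERCEPTOR_CLASS_KEY =
      "yarn.router.webapp.default-interceptor-class";

  /** Instantiates whatever interceptor class the configuration names. */
  public static RestInterceptor createInterceptor(Configuration conf) {
    Class<? extends RestInterceptor> clazz = conf.getClass(
        INTERCEPTOR_CLASS_KEY, DefaultRestInterceptor.class, RestInterceptor.class);
    RestInterceptor interceptor = ReflectionUtils.newInstance(clazz, conf);
    interceptor.init(conf);
    return interceptor;
  }
}
{code}

The same conf-driven pattern would also let tests drop in a mock interceptor 
without touching the router code itself.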


> Federation: routing REST invocations transparently to multiple RMs (part 1 - 
> basic execution)
> -
>
> Key: YARN-6896
> URL: https://issues.apache.org/jira/browse/YARN-6896
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-6896.proto.patch, YARN-6896.v1.patch, 
> YARN-6896.v2.patch
>
>
> This JIRA tracks the design/implementation of the layer for routing 
> RMWebServicesProtocol requests to the appropriate RM(s) in a federated YARN 
> cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6896) Federation: routing REST invocations transparently to multiple RMs (part 1 - basic execution)

2017-08-09 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-6896:
---
Summary: Federation: routing REST invocations transparently to multiple RMs 
(part 1 - basic execution)  (was: Federation: routing REST invocations 
transparently to multiple RMs)

> Federation: routing REST invocations transparently to multiple RMs (part 1 - 
> basic execution)
> -
>
> Key: YARN-6896
> URL: https://issues.apache.org/jira/browse/YARN-6896
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-6896.proto.patch, YARN-6896.v1.patch, 
> YARN-6896.v2.patch
>
>
> This JIRA tracks the design/implementation of the layer for routing 
> RMWebServicesProtocol requests to the appropriate RM(s) in a federated YARN 
> cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120793#comment-16120793
 ] 

Jian He commented on YARN-6959:
---

It's still unclear to me. For MARK2, once the lock is released, it can just 
proceed. 
{code}
  synchronized (lock) { // MARK2: The RPC call may be blocked here for a long 
time
...
// MARK3: During MARK1 and here, RM may switch to the new attempt. So, 
previous 
// attempt ResourceRequest may be recorded into current attempt 
ResourceRequests 
scheduler.allocate(attemptId, ask, ...) -> 
scheduler.getApplicationAttempt(attemptId)
...
  }
{code}
From the log, I do see that the AM container size changed. Also, I see that 
the first AM container completed at 
{code}
2017-07-31 21:29:38,338 INFO [ResourceManager Event Processor] 
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
container_e71_1500967702061_2512_01_01 Container Transitioned from RUNNING 
to COMPLETED
{code}
If the AM container process had already exited, how is it possible to call 
allocate again? 
Can you check in the NodeManager log whether the first AM container indeed 
completed? 
Are you able to enable debug-level logging and reproduce this issue, or reproduce 
the issue with a UT?

> RM may allocate wrong AM Container for new attempt
> --
>
> Key: YARN-6959
> URL: https://issues.apache.org/jira/browse/YARN-6959
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, fairscheduler, scheduler
>Affects Versions: 2.7.1
>Reporter: Yuqi Wang
>Assignee: Yuqi Wang
>  Labels: patch
> Fix For: 2.7.1, 3.0.0-alpha4
>
> Attachments: YARN-6959.001.patch, YARN-6959.002.patch, 
> YARN-6959.003.patch, YARN-6959.004.patch, YARN-6959.005.patch, 
> YARN-6959-branch-2.7.001.patch, YARN-6959.yarn_rm.log.zip
>
>
> *Issue Summary:*
> A previous attempt's ResourceRequests may be recorded into the current 
> attempt's ResourceRequests. These mis-recorded ResourceRequests may confuse AM 
> Container request and allocation for the current attempt.
> *Issue Pipeline:*
> {code:java}
> // Executing precondition check for the incoming attempt id.
> ApplicationMasterService.allocate() ->
> scheduler.allocate(attemptId, ask, ...) ->
> // Previous precondition check for the attempt id may be outdated here, 
> // i.e. the currentAttempt may not be the corresponding attempt of the 
> attemptId.
> // Such as the attempt id is corresponding to the previous attempt.
> currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
> // Previous attempt ResourceRequest may be recorded into current attempt 
> ResourceRequests
> currentAttempt.updateResourceRequests(ask) ->
> // RM may allocate wrong AM Container for the current attempt, because its 
> ResourceRequests
> // may come from previous attempt which can be any ResourceRequests previous 
> AM asked
> // and there is no matching logic between the original AM Container 
> ResourceRequest and 
> // the returned amContainerAllocation below.
> AMContainerAllocatedTransition.transition(...) ->
> amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
> {code}
> *Patch Correctness:*
> After this patch, RM will record ResourceRequests from different attempts into 
> different SchedulerApplicationAttempt.AppSchedulingInfo objects.
> So even if RM still records ResourceRequests from an old attempt at any time, 
> they will land in the old attempt's AppSchedulingInfo object and will not 
> impact the current attempt's resource requests and allocation.
> *Concerns:*
> The getApplicationAttempt function in AbstractYarnScheduler is confusing; we 
> should rename it to getCurrentApplicationAttempt and reconsider whether there 
> are any other bugs related to getApplicationAttempt.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6896) Federation: routing REST invocations transparently to multiple RMs

2017-08-09 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-6896:
---
Attachment: YARN-6896.v2.patch

> Federation: routing REST invocations transparently to multiple RMs
> --
>
> Key: YARN-6896
> URL: https://issues.apache.org/jira/browse/YARN-6896
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-6896.proto.patch, YARN-6896.v1.patch, 
> YARN-6896.v2.patch
>
>
> This JIRA tracks the design/implementation of the layer for routing 
> RMWebServicesProtocol requests to the appropriate RM(s) in a federated YARN 
> cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120784#comment-16120784
 ] 

Hadoop QA commented on YARN-6820:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-5355 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
18s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
32s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} YARN-5355 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6820 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881092/YARN-6820-YARN-5355.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9ef63e0e3e2b 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5355 / 3088cfc |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16812/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 U: hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16812/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Restrict read access to timelineservice v2 data 
> 

[jira] [Created] (YARN-6979) Add flag to allow all container updates to be initiated via NodeHeartbeatResponse

2017-08-09 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-6979:
-

 Summary: Add flag to allow all container updates to be initiated 
via NodeHeartbeatResponse
 Key: YARN-6979
 URL: https://issues.apache.org/jira/browse/YARN-6979
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun Suresh
Assignee: kartheek muthyala


Currently, only the Container Resource increase command is sent to the NM via 
the NodeHeartbeat response. This JIRA proposes to add a flag in the RM to allow ALL 
container updates (increase, decrease, promote and demote) to be initiated via 
node HB.

The AM is still free to use the ContainerManagementProtocol's 
{{updateContainer}} API in cases where, for instance, the Node HB frequency 
is very low and the AM needs to update the container as soon as possible. In 
these situations, if the Node HB arrives before the updateContainer API call, 
the call would error out due to a version mismatch, and the AM is required to 
handle it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6861) Reader API for sub application entities

2017-08-09 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120781#comment-16120781
 ] 

Vrushali C commented on YARN-6861:
--

Thanks for the patch [~rohithsharma]. 

Overall the patch looks good. 

I had some thoughts for discussion. 
- I am wondering if the REST API name should also indicate that this is not a 
regular entity but a sub app entity. For example, can we rename the API 
"/users/{userid}/entities/{entitytype}/{entityid}" to something like 
"/subappusers/{userid}/subappentities/{entitytype}/{entityid}"?
- Similarly for the other APIs.
- Instead of doAsUser, I am wondering if we should name it subAppUser? 


> Reader API for sub application entities
> ---
>
> Key: YARN-6861
> URL: https://issues.apache.org/jira/browse/YARN-6861
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-6861-YARN-5355.001.patch, 
> YARN-6861-YARN-5355.002.patch
>
>
> YARN-6733 and YARN-6734 write data into the sub application table. There should 
> be a way to read those entities.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6978) Add updateContainer API to NMClient.

2017-08-09 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-6978:
-

 Summary: Add updateContainer API to NMClient.
 Key: YARN-6978
 URL: https://issues.apache.org/jira/browse/YARN-6978
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun Suresh
Assignee: kartheek muthyala


This is to track the addition of the updateContainer API to the {{NMClient}} and 
{{NMClientAsync}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6974) Make CuratorBasedElectorService the default

2017-08-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120759#comment-16120759
 ] 

Jian He commented on YARN-6974:
---

[~rkanter], I tried it, but I haven't tested it at scale.  

> Make CuratorBasedElectorService the default
> ---
>
> Key: YARN-6974
> URL: https://issues.apache.org/jira/browse/YARN-6974
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.0.0-beta1
>Reporter: Robert Kanter
>Priority: Critical
>
> YARN-4438 (and cleanup in YARN-5709) added the 
> {{CuratorBasedElectorService}}, which does leader election via Curator.  The 
> intention was to leave it off by default to allow time for it to bake, and 
> eventually make it the default and remove the 
> {{ActiveStandbyElectorBasedElectorService}}.  
> We should do that.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6323) Rolling upgrade/config change is broken on timeline v2.

2017-08-09 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120730#comment-16120730
 ] 

Vrushali C commented on YARN-6323:
--

So this jira, YARN-6323, is not about data inconsistencies. It is about dealing with 
NM startup failure. If you bring up an NM with atsv2 enabled on a node which 
has an app that has been running from before atsv2 was turned on, then the NM will 
not be able to recover the flow context for this app, since the flow context 
never existed before. 

The related jira is YARN-6555, in which [~rohithsharma] added the work-preserving 
flow context storage and retrieval on the NM. 

To explain this jira a bit more:
In the patch on YARN-6555 
https://issues.apache.org/jira/secure/attachment/12869901/YARN-6555.003.patch

at line 386 in ContainerManagerImpl, if p.getFlowContext() != null, then we 
create the FlowContext correctly and pass it in as an argument to 
ApplicationImpl on line 393. But if it is null (when it does not exist), then a 
null FlowContext will be passed to ApplicationImpl, and the ApplicationImpl 
constructor will throw new IllegalArgumentException("flow context cannot be 
null");



> Rolling upgrade/config change is broken on timeline v2. 
> 
>
> Key: YARN-6323
> URL: https://issues.apache.org/jira/browse/YARN-6323
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6323.001.patch
>
>
> Found this issue when deploying on real clusters. If there are apps running 
> when we enable timeline v2 (with work preserving restart enabled), node 
> managers will fail to start due to missing app context data. We should 
> probably assign some default names to these "left over" apps. I believe it's 
> suboptimal to let users clean up the whole cluster before enabling timeline 
> v2. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6550) Capture launch_container.sh logs

2017-08-09 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120728#comment-16120728
 ] 

Allen Wittenauer commented on YARN-6550:


BTW, be aware that in sh, {} opens a brace group, which means it's going to get 
interpreted first.

> Capture launch_container.sh logs
> 
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6550.002.patch, YARN-6550.003.patch, YARN-6550.patch
>
>
> launch_container.sh, which is generated by the NM, will do a bunch of things (like 
> creating links, etc.) while launching a process. No logs are captured until {{exec}} is 
> called. We need to capture all failures of launch_container.sh for easier 
> troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6550) Capture launch_container.sh logs

2017-08-09 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120727#comment-16120727
 ] 

Allen Wittenauer commented on YARN-6550:


bq.  Also was executing this with "bash" explicitly, so the #!/bin/bash wouldn't 
have affected it.

I'm aware.  I'm just pointing out that it's Yet Another Bug in the 
nodemanager's code.

Also, use your toolset:

{code}
$ shellcheck /tmp/container_launch.sh

In /tmp/container_launch.sh line 6:
{
^-- SC1009: The mentioned parser error was in this brace group.


In /tmp/container_launch.sh line 11:
partition (cd_education_status)
^-- SC1073: Couldn't parse this function.
   ^-- SC1065: Trying to declare parameters? Don't. Use () and refer to 
params as $1, $2..


In /tmp/container_launch.sh line 12:
select cd_demo_sk, cd_gender, "
^-- SC1064: Expected a { to open the function definition.
^-- SC1072:  Fix any mentioned problems and try again.

{code}

> Capture launch_container.sh logs
> 
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6550.002.patch, YARN-6550.003.patch, YARN-6550.patch
>
>
> launch_container.sh which generated by NM will do a bunch of things (like 
> create link, etc.) while launch a process. No logs captured until {{exec}} is 
> called. We need capture all failures of launch_container.sh for easier 
> troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6736) Consider writing to both ats v1 & v2 from RM for smoother upgrades

2017-08-09 Thread Aaron Gresch (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120696#comment-16120696
 ] 

Aaron Gresch commented on YARN-6736:


Is this a dupe of YARN-4368?

We would like to run both services in parallel for a time.  Rather than having 
an "upgrade" mode, I think it would be cleaner to specify the versions in a 
list, as mentioned.  I was working on a similar solution: a publisher class 
that takes a collection of timeline services to publish to, based on the 
versions specified (a rough sketch of that idea is below).

When I made a similar change locally, an issue I had was getting my single-node 
setup running with both services: ATS v1 and v2 wanted to use the same port.  
I ended up creating a new conf port setting for v2 that falls back to the v1 
port if not set.
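
Something along these lines (all names below are hypothetical, not from any 
attached patch):

{code}
// Rough sketch of the composite-publisher idea; all names are hypothetical.
import java.util.ArrayList;
import java.util.List;

interface TimelineEventPublisher {
  void publishEvent(String appId, String event);
}

final class CompositeTimelinePublisher implements TimelineEventPublisher {
  private final List<TimelineEventPublisher> delegates = new ArrayList<>();

  // The versions would come from configuration, e.g. a list like "1.5,2.0".
  CompositeTimelinePublisher(List<Float> configuredVersions,
      TimelineEventPublisher v1Publisher, TimelineEventPublisher v2Publisher) {
    for (float version : configuredVersions) {
      delegates.add(version >= 2.0f ? v2Publisher : v1Publisher);
    }
  }

  @Override
  public void publishEvent(String appId, String event) {
    // Fan the same event out to every configured timeline service.
    for (TimelineEventPublisher publisher : delegates) {
      publisher.publishEvent(appId, event);
    }
  }
}
{code}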



> Consider writing to both ats v1 & v2 from RM for smoother upgrades
> --
>
> Key: YARN-6736
> URL: https://issues.apache.org/jira/browse/YARN-6736
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Rohith Sharma K S
> Attachments: YARN-6736-YARN-5355.001.patch
>
>
> When the cluster is being upgraded from atsv1 to v2, it may be good to have a 
> brief time period during which RM writes to both atsv1 and v2. This will help 
> frameworks like Tez migrate more smoothly. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-09 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6820:
-
Attachment: YARN-6820-YARN-5355.005.patch

Uploading v005, which has the following changes as per the review:

- Using the empty string "" to initialize the admin ACL list if YARN_ADMIN_ACL 
is not set
- Using the Principal in the HttpServletRequest to create the UGI, instead of 
the remote user in the HttpServletRequest (a minimal sketch of this is below)
- Updated the unit tests to conform to the above changes
- Fixed the whitespace and javadoc warnings from the last Jenkins report
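
A minimal sketch of the Principal-based UGI change (the surrounding reader 
filter code is assumed; only UserGroupInformation.createRemoteUser and the 
servlet calls are existing APIs):

{code}
// Minimal sketch; the filter class itself is hypothetical.
import java.security.Principal;
import javax.servlet.http.HttpServletRequest;
import org.apache.hadoop.security.UserGroupInformation;

final class TimelineReaderAuthSketch {

  static UserGroupInformation callerUgi(HttpServletRequest request) {
    Principal principal = request.getUserPrincipal();
    if (principal == null) {
      return null; // anonymous caller; the filter decides how to handle it
    }
    // Build the UGI from the authenticated principal rather than
    // request.getRemoteUser().
    return UserGroupInformation.createRemoteUser(principal.getName());
  }
}
{code}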


> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6820-YARN-5355.0001.patch, 
> YARN-6820-YARN-5355.002.patch, YARN-6820-YARN-5355.003.patch, 
> YARN-6820-YARN-5355.004.patch, YARN-6820-YARN-5355.005.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSV1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-09 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120682#comment-16120682
 ] 

Vrushali C commented on YARN-6820:
--

Fixing the single whitespace warning and the incorrect param name javadoc issue. 

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6820-YARN-5355.0001.patch, 
> YARN-6820-YARN-5355.002.patch, YARN-6820-YARN-5355.003.patch, 
> YARN-6820-YARN-5355.004.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSV1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6905) Multiple test failures due to FastNumberFormat

2017-08-09 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120680#comment-16120680
 ] 

Vrushali C commented on YARN-6905:
--

So ApplicationId.toString is being invoked in the AppIdKeyConverter#decode 
function, which is in the hadoop-yarn-server-timelineservice-hbase module. This 
module depends on hadoop-yarn-api as well as hadoop-common, so I think moving 
FastNumberFormat from hadoop-common to hadoop-yarn-api may not help.

I think the timeline service would need to override the ApplicationId.toString 
behaviour internally. Or, although I think this won't be very popular, 
hadoop-yarn-api could provide an ApplicationId#toStringSlowImpl method (or some 
similarly named method) in ApplicationId itself which keeps the old code, 
instead of the changes to ApplicationId#toString() in YARN-6768.

Unfortunately, we could see more classpath conflicts as trunk keeps evolving, 
until the timeline service on trunk can be based on an HBase version that is 
itself built against Hadoop trunk. 
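
For illustration, roughly what the "old code" string building looks like 
without FastNumberFormat (a sketch, not the actual ApplicationId source):

{code}
// Sketch of an ApplicationId string built without FastNumberFormat, roughly
// what the pre-YARN-6768 code produced. Not the actual ApplicationId source.
final class AppIdFormatSketch {

  static String toAppIdString(long clusterTimestamp, int id) {
    // application_<clusterTimestamp>_<zero-padded 4-digit sequence number>
    return String.format("application_%d_%04d", clusterTimestamp, id);
  }

  public static void main(String[] args) {
    System.out.println(toAppIdString(1425026901000L, 1));
    // prints: application_1425026901000_0001
  }
}
{code}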

> Multiple test failures due to FastNumberFormat
> --
>
> Key: YARN-6905
> URL: https://issues.apache.org/jira/browse/YARN-6905
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 3.0.0-beta1
> Environment: Ubuntu 14.04 
> x86, ppc64le
> $ java -version
> openjdk version "1.8.0_111"
> OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14)
> OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode)
>Reporter: Sonia Garudi
>Assignee: Haibo Chen
>
> There are multiple test failing in Hadoop YARN Timeline Service HBase tests 
> project with the following error :
> {code}
> java.lang.NoClassDefFoundError: org/apache/hadoop/util/FastNumberFormat
> at 
> org.apache.hadoop.yarn.api.records.ApplicationId.toString(ApplicationId.java:104)
> {code}
> Below are the failing tests :
> {code}
>   TestHBaseTimelineStorageApps.testWriteApplicationToHBase
>   TestHBaseTimelineStorageApps.testEvents
>   TestHBaseTimelineStorageEntities.testEventsEscapeTs
>   TestHBaseTimelineStorageEntities.testWriteEntityToHBase
>   TestHBaseTimelineStorageEntities.testEventsWithEmptyInfo
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6874) TestHBaseStorageFlowRun.testWriteFlowRunMinMax fails intermittently

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120673#comment-16120673
 ] 

Hadoop QA commented on YARN-6874:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-5355 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
15s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} YARN-5355 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6874 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881085/YARN-6874-YARN-5355.0001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 93b87cf6e718 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5355 / 3088cfc |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16811/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16811/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestHBaseStorageFlowRun.testWriteFlowRunMinMax fails intermittently
> ---
>
> Key: YARN-6874
> URL: https://issues.apache.org/jira/browse/YARN-6874
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun 

[jira] [Commented] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120671#comment-16120671
 ] 

Hadoop QA commented on YARN-6820:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-5355 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 2s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
14s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} YARN-5355 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-server-timelineservice in the patch 
failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6820 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881068/YARN-6820-YARN-5355.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 49ea4f1c0c93 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5355 / 3088cfc |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/16809/artifact/patchprocess/whitespace-eol.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/16809/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16809/testReport/ |
| modules | C: 

[jira] [Updated] (YARN-6977) Node information is not provided for non am containers in RM logs

2017-08-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6977:
-
Labels: newbie  (was: )

> Node information is not provided for non am containers in RM logs
> -
>
> Key: YARN-6977
> URL: https://issues.apache.org/jira/browse/YARN-6977
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Sumana Sathish
>  Labels: newbie
>
> There is no information on which node non am container is being assigned in 
> the trunk for hadoop 3.0
> Earlier we used to have logs for non am container in the similar way
> {code}
> Assigned container container_ of capacity  on host 
> , which has 1 containers,  used and 
>  available after allocation
> {code}
> 3.0 has information for am container alone in the following way
> {code}
> Done launching container Container: [ContainerId: container_, 
> AllocationRequestId: 0, Version: 0, NodeId:nodeID, NodeHttpAddress: 
> nodeAddress, Resource: , Priority: 0, Token: Token { 
> kind: ContainerToken, service: service}, ExecutionType: GUARANTEED, ] for AM 
> appattempt_
> {code}
> Can we please have similar message for Non am container too ??



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6977) Node information is not provided for non am containers in RM logs

2017-08-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6977:
-
Component/s: capacity scheduler

> Node information is not provided for non am containers in RM logs
> -
>
> Key: YARN-6977
> URL: https://issues.apache.org/jira/browse/YARN-6977
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Sumana Sathish
>  Labels: newbie
>
> There is no information on which node non am container is being assigned in 
> the trunk for hadoop 3.0
> Earlier we used to have logs for non am container in the similar way
> {code}
> Assigned container container_ of capacity  on host 
> , which has 1 containers,  used and 
>  available after allocation
> {code}
> 3.0 has information for am container alone in the following way
> {code}
> Done launching container Container: [ContainerId: container_, 
> AllocationRequestId: 0, Version: 0, NodeId:nodeID, NodeHttpAddress: 
> nodeAddress, Resource: , Priority: 0, Token: Token { 
> kind: ContainerToken, service: service}, ExecutionType: GUARANTEED, ] for AM 
> appattempt_
> {code}
> Can we please have similar message for Non am container too ??



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6977) Node information is not provided for non am containers in RM logs

2017-08-09 Thread Sumana Sathish (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sumana Sathish updated YARN-6977:
-
Description: 
There is no information on which node non am container is being assigned in the 
trunk for hadoop 3.0
Earlier we used to have logs for non am container in the similar way
{code}
Assigned container container_ of capacity  on host 
, which has 1 containers,  used and  available after allocation
{code}

3.0 has information for am container alone in the following way
{code}
Done launching container Container: [ContainerId: container_, 
AllocationRequestId: 0, Version: 0, NodeId:nodeID, NodeHttpAddress: 
nodeAddress, Resource: , Priority: 0, Token: Token { 
kind: ContainerToken, service: service}, ExecutionType: GUARANTEED, ] for AM 
appattempt_
{code}

Can we please have similar message for Non am container too ??

  was:
There is no information on which node non am container is being assigned in the 
trunk for 3.0
Earlier we used to have logs for non am container in the similar way
{code}
Assigned container container_ of capacity  on host 
, which has 1 containers,  used and  available after allocation
{code}

3.0 has information for am container alone in the following way
{code}
Done launching container Container: [ContainerId: container_, 
AllocationRequestId: 0, Version: 0, NodeId:nodeID, NodeHttpAddress: 
nodeAddress, Resource: , Priority: 0, Token: Token { 
kind: ContainerToken, service: service}, ExecutionType: GUARANTEED, ] for AM 
appattempt_
{code}

Can we please have similar message for Non am container too ??


> Node information is not provided for non am containers in RM logs
> -
>
> Key: YARN-6977
> URL: https://issues.apache.org/jira/browse/YARN-6977
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Sumana Sathish
>  Labels: newbie
>
> There is no information on which node non am container is being assigned in 
> the trunk for hadoop 3.0
> Earlier we used to have logs for non am container in the similar way
> {code}
> Assigned container container_ of capacity  on host 
> , which has 1 containers,  used and 
>  available after allocation
> {code}
> 3.0 has information for am container alone in the following way
> {code}
> Done launching container Container: [ContainerId: container_, 
> AllocationRequestId: 0, Version: 0, NodeId:nodeID, NodeHttpAddress: 
> nodeAddress, Resource: , Priority: 0, Token: Token { 
> kind: ContainerToken, service: service}, ExecutionType: GUARANTEED, ] for AM 
> appattempt_
> {code}
> Can we please have similar message for Non am container too ??



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6977) Node information is not provided for non am containers in RM logs

2017-08-09 Thread Sumana Sathish (JIRA)
Sumana Sathish created YARN-6977:


 Summary: Node information is not provided for non am containers in 
RM logs
 Key: YARN-6977
 URL: https://issues.apache.org/jira/browse/YARN-6977
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Sumana Sathish


There is no information on which node non am container is being assigned in the 
trunk for 3.0
Earlier we used to have logs for non am container in the similar way
{code}
Assigned container container_ of capacity  on host 
, which has 1 containers,  used and  available after allocation
{code}

3.0 has information for am container alone in the following way
{code}
Done launching container Container: [ContainerId: container_, 
AllocationRequestId: 0, Version: 0, NodeId:nodeID, NodeHttpAddress: 
nodeAddress, Resource: , Priority: 0, Token: Token { 
kind: ContainerToken, service: service}, ExecutionType: GUARANTEED, ] for AM 
appattempt_
{code}

Can we please have similar message for Non am container too ??



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6413) Decouple Yarn Registry API from ZK

2017-08-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120653#comment-16120653
 ] 

Jian He edited comment on YARN-6413 at 8/9/17 9:06 PM:
---

Looks like the ApplicationServiceRecordKey etc. is still using the appId, and 
ContainerServiceRecordKey is using ContainerId,  as said before, that will be 
in conflict with what exists today ? 
It's not clear to me how the service record interface will be used by the 
current code, will it be the same as previous patch ? 


was (Author: jianhe):
Looks like the ApplicationServiceRecordKey etc. is still using the appId,  as 
said before, that will be in conflict with what exists today ? 

> Decouple Yarn Registry API from ZK
> --
>
> Key: YARN-6413
> URL: https://issues.apache.org/jira/browse/YARN-6413
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: amrmproxy, api, resourcemanager
>Reporter: Ellen Hui
>Assignee: Ellen Hui
> Attachments: 0001-Registry-API-v2.patch, 0002-Registry-API-v2.patch, 
> 0003-Registry-API-api-only.patch
>
>
> Right now the Yarn Registry API (defined in the RegistryOperations interface) 
> is a very thin layer over Zookeeper. This jira proposes changing the 
> interface to abstract away the implementation details so that we can write a 
> FS-based implementation of the registry service, which will be used to 
> support AMRMProxy HA.
> The new interface will use register/delete/resolve APIs instead of 
> Zookeeper-specific operations like mknode. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6413) Decouple Yarn Registry API from ZK

2017-08-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120653#comment-16120653
 ] 

Jian He commented on YARN-6413:
---

Looks like the ApplicationServiceRecordKey etc. is still using the appId,  as 
said before, that will be in conflict with what exists today ? 

> Decouple Yarn Registry API from ZK
> --
>
> Key: YARN-6413
> URL: https://issues.apache.org/jira/browse/YARN-6413
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: amrmproxy, api, resourcemanager
>Reporter: Ellen Hui
>Assignee: Ellen Hui
> Attachments: 0001-Registry-API-v2.patch, 0002-Registry-API-v2.patch, 
> 0003-Registry-API-api-only.patch
>
>
> Right now the Yarn Registry API (defined in the RegistryOperations interface) 
> is a very thin layer over Zookeeper. This jira proposes changing the 
> interface to abstract away the implementation details so that we can write a 
> FS-based implementation of the registry service, which will be used to 
> support AMRMProxy HA.
> The new interface will use register/delete/resolve APIs instead of 
> Zookeeper-specific operations like mknode. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6874) TestHBaseStorageFlowRun.testWriteFlowRunMinMax fails intermittently

2017-08-09 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6874:
-
Attachment: YARN-6874-YARN-5355.0001.patch

Thanks [~varun_saxena]. Yes, I think if two writes happen within the same 
millisecond for the min start time, the second one will overwrite the other, 
which is exactly why we are supplementing the timestamp for metric writes in 
the flow run table.

I am attaching a very simple patch that modifies the ColumnHelper constructor 
call to pass "true" for the flag that indicates the use of a supplemented 
timestamp while storing.

The effect of this is that for the min start time and max end time columns of 
the flow, the supplemented timestamp will be used correctly. It will also be 
applied to the flow version column store, so the largest timestamp value will 
be fetched when we query for the flow version; the effect is the same as 
different apps writing the flow version.

Uploading v001.
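
To illustrate why a supplemented timestamp avoids the same-millisecond 
overwrite (this is an illustrative stand-in, not the module's actual 
TimestampGenerator):

{code}
// Illustrative stand-in for the supplemented-timestamp idea; this is not the
// module's actual TimestampGenerator.
final class SupplementedTimestampSketch {

  // Leave room inside the cell timestamp for a per-app suffix, so two writes
  // in the same millisecond (from different apps) land in distinct versions.
  static final long MULTIPLIER = 1_000_000L;

  static long supplement(long timestampMillis, String appId) {
    long suffix = Math.floorMod((long) appId.hashCode(), MULTIPLIER);
    return timestampMillis * MULTIPLIER + suffix;
  }

  public static void main(String[] args) {
    long ts = 1425026901000L;
    // Same millisecond, different apps: different cell timestamps, so neither
    // write silently overwrites the other.
    System.out.println(supplement(ts, "application_1425026901000_0001"));
    System.out.println(supplement(ts, "application_1425026901000_0002"));
  }
}
{code}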


> TestHBaseStorageFlowRun.testWriteFlowRunMinMax fails intermittently
> ---
>
> Key: YARN-6874
> URL: https://issues.apache.org/jira/browse/YARN-6874
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Vrushali C
> Attachments: YARN-6874-YARN-5355.0001.patch
>
>
> {noformat}
> testWriteFlowRunMinMax(org.apache.hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun)
>   Time elapsed: 0.088 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<142502690> but was:<1425026901000>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun.testWriteFlowRunMinMax(TestHBaseStorageFlowRun.java:237)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6550) Capture launch_container.sh logs

2017-08-09 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120645#comment-16120645
 ] 

Suma Shivaprasad commented on YARN-6550:


Yes, the output is correct. You mean the curly braces highlighted in bold 
below? That didn't matter; I was trying out different things and would have 
missed it. Also, I was executing this with "bash" explicitly, so the 
#!/bin/bash wouldn't have affected it.

bash /tmp/unit_test_fail.sh
/tmp/unit_test_fail.sh: line 11: syntax error near unexpected token 
`cd_education_status'
/tmp/unit_test_fail.sh: line 11: `partition (cd_education_status)'

{noformat}
#!/bin/bash

export STDOUT="/tmp/1.out"
export STDERR="/tmp/1.err"

{
echo "Setting up env variables"
export 
APPLICATION_WORKFLOW_CONTEXT="*{*"workflowId":"609f91c5cd83","workflowName":"

insert table
partition (cd_education_status)
select cd_demo_sk, cd_gender, "
echo "Setting up job resources"
echo "Launching container"
} 1> >(tee -a "${STDOUT}" >&1) 2> >(tee -a "${STDERR}" >&2)
exec /bin/bash -c ""
hadoop_shell_errorcode=$?
if [[ "$hadoop_shell_errorcode" -ne 0 ]]
then
  exit $hadoop_shell_errorcode
fi
{noformat}

> Capture launch_container.sh logs
> 
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6550.002.patch, YARN-6550.003.patch, YARN-6550.patch
>
>
> launch_container.sh which generated by NM will do a bunch of things (like 
> create link, etc.) while launch a process. No logs captured until {{exec}} is 
> called. We need capture all failures of launch_container.sh for easier 
> troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6789) new api to get all supported resources from RM

2017-08-09 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120636#comment-16120636
 ] 

Wangda Tan commented on YARN-6789:
--

After an offline discussion with [~sunilg], I think we discovered more issues 
with the unit field in the Resource object.

The "unit" creates several issues:
- The behavior of the branch is: if the unit of a given resource information is 
not set, the default unit configured in resource-type.cfg will be used to 
initialize containers ({{Resource.newInstance}}); the unit is left untouched 
during PB record initialization ({{ResourcePBImpl(ResourceProto proto)}}).
- However, if an AM runs with old code (which doesn't have the YARN-3926 
logic), it will send the resource PB record to the RM without a unit on the 
wire. So the RM treats the incoming memory value as having an empty unit (which 
means bytes). This is incompatible behavior.
- Secondly, as I commented above, "unit" inside ResourceTypeInfo is very 
confusing: a. it is not the minimum unit; b. it is not the default unit, since 
it won't affect the "default unit" inside the AM, it is just the default unit 
inside the RM, which the AM should not care about; c. it is not a 
"suggested/preferred unit" either, because that doesn't make sense.
- In addition, it creates a performance issue as well, since all Resource 
operations need to convert to the same unit.

My personal preference is to completely remove unit from ResourceInformation, 
and let the unit of a ResourceType mean the unit of the given resource type; 
for example, resource.types.memory.unit = MB. It will mainly be used for UI 
display. The units of known resource types, including vcores/memory, will be 
hard-coded and cannot be changed via the configuration file; this is mainly for 
backward compatibility. We can provide a unit converter as a client library for 
the AM/client to use (a sketch of the idea is below); Resource-related classes 
should not use it directly.

Thoughts?
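
As an illustration of the unit-converter idea mentioned above (a sketch only, 
not Hadoop's actual units utility):

{code}
// Sketch of a client-side unit converter of the kind suggested above; not
// Hadoop's actual units utility, just an illustration.
import java.util.HashMap;
import java.util.Map;

final class ResourceUnitConverterSketch {

  // Powers of 1024 relative to bytes for the units used in the discussion.
  private static final Map<String, Integer> POWERS = new HashMap<>();
  static {
    POWERS.put("", 0);   // an empty unit is treated as bytes, as noted above
    POWERS.put("B", 0);
    POWERS.put("KB", 1);
    POWERS.put("MB", 2);
    POWERS.put("GB", 3);
  }

  static long convert(long value, String fromUnit, String toUnit) {
    int shift = (POWERS.get(fromUnit) - POWERS.get(toUnit)) * 10;
    return shift >= 0 ? value << shift : value >> -shift;
  }

  public static void main(String[] args) {
    // An old AM sending 2048 with an empty unit is read as 2048 bytes, not
    // 2048 MB: the incompatibility described above.
    System.out.println(convert(2048, "MB", "GB")); // 2
    System.out.println(convert(2048, "", "MB"));   // 0
  }
}
{code}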

> new api to get all supported resources from RM
> --
>
> Key: YARN-6789
> URL: https://issues.apache.org/jira/browse/YARN-6789
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-6789-YARN-3926.001.patch
>
>
> It will be better to provide an api to get all supported resource types from 
> RM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6903) Yarn-native-service framework core rewrite

2017-08-09 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6903:
--
Attachment: YARN-6903.yarn-native-services.05.patch

> Yarn-native-service framework core rewrite
> --
>
> Key: YARN-6903
> URL: https://issues.apache.org/jira/browse/YARN-6903
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6903.yarn-native-services.01.patch, 
> YARN-6903.yarn-native-services.02.patch, 
> YARN-6903.yarn-native-services.03.patch, 
> YARN-6903.yarn-native-services.04.patch, 
> YARN-6903.yarn-native-services.05.patch
>
>
> There are some new features like rich placement scheduling, container auto 
> restart, container upgrade in YARN core that can be taken advantage by the 
> native-service framework. Besides, there are quite a lot legacy code which 
> are no longer required. 
> So we decide to rewrite the core part to have a leaner codebase and make use 
> of various advanced features in YARN. 
> And the new code design will be in align with what we have designed for the 
> service API YARN-4793



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-09 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6820:
-
Attachment: YARN-6820-YARN-5355.004.patch

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6820-YARN-5355.0001.patch, 
> YARN-6820-YARN-5355.002.patch, YARN-6820-YARN-5355.003.patch, 
> YARN-6820-YARN-5355.004.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSV1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-09 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6820:
-
Attachment: (was: YARN-6820-YARN-5355.004.patch)

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6820-YARN-5355.0001.patch, 
> YARN-6820-YARN-5355.002.patch, YARN-6820-YARN-5355.003.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSV1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-09 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6820:
-
Attachment: YARN-6820-YARN-5355.004.patch

Uploading v004. 

Updates are:
- Using empty string "" for initializing Admin ACL list if YARN_ADMIN_ACL is 
not set
- Using the Principal in HttpServletRequest to create the UGI instead of the 
remote user in the HttpServletRequest
- updated unit tests to conform to the above changes

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6820-YARN-5355.0001.patch, 
> YARN-6820-YARN-5355.002.patch, YARN-6820-YARN-5355.003.patch, 
> YARN-6820-YARN-5355.004.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSV1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5736) YARN container executor config does not handle white space

2017-08-09 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120578#comment-16120578
 ] 

Wangda Tan commented on YARN-5736:
--

[~dan...@cloudera.com], [~miklos.szeg...@cloudera.com], 
[~shaneku...@gmail.com], 

I think this patch should be backported to branch-2 as well; is there any 
concern with doing this?

> YARN container executor config does not handle white space
> --
>
> Key: YARN-5736
> URL: https://issues.apache.org/jira/browse/YARN-5736
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Trivial
>  Labels: oct16-medium
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN_5736.000.patch, YARN-5736.001.patch, 
> YARN-5736.002.patch, YARN-5736.addendum.000.patch
>
>
> The container executor configuration reader does not handle white spaces or 
> malformed key value pairs in the config file correctly or gracefully
> as an example the following key value line which is part of the configuration 
> (note the << is used as a marker to show the extra trailing space):
> yarn.nodemanager.linux-container-executor.group=yarn <<
> is a valid line but when you run the check over the file:
> [root@test]#./container-executor --checksetup
> Can't get group information for yarn - Success.
> [root@test]#
> It fails to find the yarn group but it really tries to find the "yarn " group 
> which fails. There is no trimming anywhere while processing the lines. If a 
> space would be added in before or after the = sign a failure would also occur.
> Minor nit is the fact that a failure still is logged as a Success



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6550) Capture launch_container.sh logs

2017-08-09 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120571#comment-16120571
 ] 

Allen Wittenauer commented on YARN-6550:


are you sure your output is correct?  I'm seeing missing curly braces between 
the two.  That error would indicate that other quotes (or other bits) are 
missing too.

Also:

{code}
#!/bin/bash
{code}

Not portable.

{code}
if [ $hadoop_shell_errorcode -ne 0 ]
{code}

use [[ and quote the variable.


> Capture launch_container.sh logs
> 
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6550.002.patch, YARN-6550.003.patch, YARN-6550.patch
>
>
> launch_container.sh which generated by NM will do a bunch of things (like 
> create link, etc.) while launch a process. No logs captured until {{exec}} is 
> called. We need capture all failures of launch_container.sh for easier 
> troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6852) [YARN-6223] Native code changes to support isolate GPU devices by using CGroups

2017-08-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6852:
-
Attachment: YARN-6852.005.patch

Thanks for the comments from [~sunil.gov...@gmail.com]. Attached the 005 patch, 
which addresses all comments except #4, since that one is a little bit out of 
scope. I prefer to do it once we have more requirements for cgroups.

> [YARN-6223] Native code changes to support isolate GPU devices by using 
> CGroups
> ---
>
> Key: YARN-6852
> URL: https://issues.apache.org/jira/browse/YARN-6852
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6852.001.patch, YARN-6852.002.patch, 
> YARN-6852.003.patch, YARN-6852.004.patch, YARN-6852.005.patch
>
>
> This JIRA plan to add support of:
> 1) Isolation in CGroups. (native side).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6550) Capture launch_container.sh logs

2017-08-09 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120554#comment-16120554
 ] 

Suma Shivaprasad commented on YARN-6550:


[~aw] Thanks for checking on the bash version portability.  With the updated 
patch that uses command groups, there is a UT failure that exposed some 
unexpected behavior.

The UT - TestContainerLaunch.testInvalidSyntax checks for failures being 
propagated from a bunch of invalid commands through ShellCommandExecutor. 

Behaviour without the patch

Below is the script that the UT executes 

{noformat}
#!/bin/bash

export 
APPLICATION_WORKFLOW_CONTEXT="{"workflowId":"609f91c5cd83","workflowName":"

insert table
partition (cd_education_status)
select cd_demo_sk, cd_gender, "
exec /bin/bash -c ""
hadoop_shell_errorcode=$?
if [ $hadoop_shell_errorcode -ne 0 ]
then
  exit $hadoop_shell_errorcode
fi
{noformat}

Expected error

*/tmp/unit_test_pass.sh: line 5: insert: command not found*
/tmp/unit_test_pass.sh: line 6: syntax error near unexpected token 
`cd_education_status'
/tmp/unit_test_pass.sh: line 6: `partition (cd_education_status)'


Behaviour with the patch

Script that the UT executes with the patch (command groups)
--
{noformat}
{
echo "Setting up env variables"
export 
APPLICATION_WORKFLOW_CONTEXT=""workflowId":"609f91c5cd83","workflowName":"

insert table 
partition (cd_education_status)
select cd_demo_sk, cd_gender, "
echo "Setting up job resources"
echo "Launching container"
} 1> >(tee -a "${STDOUT}" >&1) 2> >(tee -a "${STDERR}" >&2)   # Note that 
redirection doesnt matter. Having a command group causes it.
exec /bin/bash -c ""
hadoop_shell_errorcode=$?
if [ $hadoop_shell_errorcode -ne 0 ]
then
  exit $hadoop_shell_errorcode
fi
{noformat}

Error

/tmp/unit_test_fail.sh: line 6: syntax error near unexpected token 
`cd_education_status'
/tmp/unit_test_fail.sh: line 6: `partition (cd_education_status)'

Please note that the error for "insert table" being an invalid command is not 
even reported.

Given the above issues, I was exploring other ways of achieving redirection 
without doing it per line. Using exec with redirection seems like a more 
concise way to achieve this - http://tldp.org/LDP/abs/html/x17974.html
I also tested the above UT script with exec and it works fine. If you don't 
have any objections, I will update the patch to use exec with redirection 
instead.

{noformat}
export STDOUT="/tmp/1.out"
export STDERR="/tmp/1.err"

exec 1> >(tee -a "${STDOUT}" >&1) 2> >(tee -a "${STDERR}" >&2)
export 
APPLICATION_WORKFLOW_CONTEXT="{"workflowId":"609f91c5cd83","workflowName":"

insert table
partition (cd_education_status)
select cd_demo_sk, cd_gender, "
exec /bin/bash -c ""
hadoop_shell_errorcode=$?
if [ $hadoop_shell_errorcode -ne 0 ]
then
  exit $hadoop_shell_errorcode
fi
{noformat}

Error
==
*/tmp/unit_test_exec.sh: line 9: insert: command not found*
/tmp/unit_test_exec.sh: line 10: syntax error near unexpected token 
`cd_education_status'
/tmp/unit_test_exec.sh: line 10: `partition (cd_education_status)'

> Capture launch_container.sh logs
> 
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6550.002.patch, YARN-6550.003.patch, YARN-6550.patch
>
>
> launch_container.sh which generated by NM will do a bunch of things (like 
> create link, etc.) while launch a process. No logs captured until {{exec}} is 
> called. We need capture all failures of launch_container.sh for easier 
> troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6917) Queue path is recomputed from scratch on every allocation

2017-08-09 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120417#comment-16120417
 ] 

Jason Lowe commented on YARN-6917:
--

Thanks for the patch, Eric!  I agree with the checkstyle nit: the new queuePath 
field should be private.  Other than that I think it looks good.  Agree that 
this is an optimization and the testing should be covered by existing 
reconfiguration tests.
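
For illustration, a sketch of computing the path once at (re)initialization and 
only returning the cached value on the allocation path (names are illustrative, 
not the actual CapacityScheduler queue classes):

{code}
// Illustrative sketch of caching the queue path at (re)initialization time
// instead of rebuilding it on every allocation; not the actual
// CapacityScheduler queue classes.
class QueueSketch {
  private QueueSketch parent;
  private String name;
  private String queuePath; // cached; recomputed only on (re)initialization

  // Parents are reinitialized before their children, so the parent's cached
  // path is already up to date here.
  void reinitialize(QueueSketch newParent, String newName) {
    this.parent = newParent;
    this.name = newName;
    this.queuePath =
        (parent == null) ? name : parent.getQueuePath() + "." + name;
  }

  String getQueuePath() {
    return queuePath; // no string building on the allocation hot path
  }
}
{code}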

> Queue path is recomputed from scratch on every allocation
> -
>
> Key: YARN-6917
> URL: https://issues.apache.org/jira/browse/YARN-6917
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Affects Versions: 2.8.1
>Reporter: Jason Lowe
>Assignee: Eric Payne
>Priority: Minor
> Attachments: YARN-6917.001.patch
>
>
> As part of the discussion in YARN-6901 I noticed that we are recomputing a 
> queue's path for every allocation.  Currently getting the queue's path 
> involves calling getQueuePath on the parent then building onto that string 
> with the basename of the queue.  In turn the parent's getQueuePath method 
> does the same, so we end up spending time recomputing a string that will 
> never change until a reconfiguration.
> Ideally the queue path should be computed once during queue initialization 
> rather than on-demand.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6033) Add support for sections in container-executor configuration file

2017-08-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120407#comment-16120407
 ] 

Hudson commented on YARN-6033:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12155 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12155/])
YARN-6033. Add support for sections in container-executor configuration 
(wangda: rev ec694145cf9c0ade7606813871ca2a4a371def8e)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/resources/test/test-configurations/old-config.cfg
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_main.cc
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.h
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/resources/test/test-configurations/configuration-1.cfg
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.h
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_util.cc
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.c
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.h
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_configuration.cc
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/resources/test/test-configurations/configuration-2.cfg


> Add support for sections in container-executor configuration file
> -
>
> Key: YARN-6033
> URL: https://issues.apache.org/jira/browse/YARN-6033
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6033.003.patch, YARN-6033.004.patch, 
> YARN-6033.005.patch, YARN-6033.006.patch, YARN-6033.007.patch, 
> YARN-6033.008.patch, YARN-6033.009.patch, YARN-6033.010.patch, 
> YARN-6033.011.patch, YARN-6033.012.patch, YARN-6033.013.patch, 
> YARN-6033.014.patch, YARN-6033-YARN-5673.001.patch, 
> YARN-6033-YARN-5673.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-09 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120399#comment-16120399
 ] 

Vrushali C commented on YARN-6820:
--

Thanks [~jlowe] and [~rohithsharma] for the reviews. Will upload an updated 
patch today.

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6820-YARN-5355.0001.patch, 
> YARN-6820-YARN-5355.002.patch, YARN-6820-YARN-5355.003.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSv1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6033) Add support for sections in container-executor configuration file

2017-08-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6033:
-
Fix Version/s: 3.0.0-beta1

> Add support for sections in container-executor configuration file
> -
>
> Key: YARN-6033
> URL: https://issues.apache.org/jira/browse/YARN-6033
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6033.003.patch, YARN-6033.004.patch, 
> YARN-6033.005.patch, YARN-6033.006.patch, YARN-6033.007.patch, 
> YARN-6033.008.patch, YARN-6033.009.patch, YARN-6033.010.patch, 
> YARN-6033.011.patch, YARN-6033.012.patch, YARN-6033.013.patch, 
> YARN-6033.014.patch, YARN-6033-YARN-5673.001.patch, 
> YARN-6033-YARN-5673.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6033) Add support for sections in container-executor configuration file

2017-08-09 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120363#comment-16120363
 ] 

Wangda Tan commented on YARN-6033:
--

Committed to trunk. Thanks [~vvasudev], and thanks for the reviews, 
[~miklos.szeg...@cloudera.com]/[~sunilg]! 

Backport to branch-2 blocked by YARN-6726.

> Add support for sections in container-executor configuration file
> -
>
> Key: YARN-6033
> URL: https://issues.apache.org/jira/browse/YARN-6033
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6033.003.patch, YARN-6033.004.patch, 
> YARN-6033.005.patch, YARN-6033.006.patch, YARN-6033.007.patch, 
> YARN-6033.008.patch, YARN-6033.009.patch, YARN-6033.010.patch, 
> YARN-6033.011.patch, YARN-6033.012.patch, YARN-6033.013.patch, 
> YARN-6033.014.patch, YARN-6033-YARN-5673.001.patch, 
> YARN-6033-YARN-5673.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6935) ResourceProfilesManagerImpl.parseResource() has no need of the key parameter

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120347#comment-16120347
 ] 

Hadoop QA commented on YARN-6935:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-3926 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 1s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} YARN-3926 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 3 unchanged - 0 fixed = 5 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 47s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6935 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881044/YARN-6935-YARN-3926.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3503048237d6 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3926 / 1b586d7 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16805/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16805/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16805/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Commented] (YARN-6323) Rolling upgrade/config change is broken on timeline v2.

2017-08-09 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120346#comment-16120346
 ] 

Varun Saxena commented on YARN-6323:


bq. This is very hard to enforce from the RM. The RM can't differentiate between 
recovered apps and newly submitted apps. 
Yeah, we would have to write code to ensure this happens, i.e. store a flag in the 
state store (the absence of which indicates data is being written to v1). I just 
wanted to point out another possibility in case we wanted to ensure incomplete app 
data does not exist. 
However, as I said, this approach has the drawback that we may lose data from v1 
if the user decides not to take up v2, and it is an unlikely user scenario anyway, 
so I do not suggest following this approach.

> Rolling upgrade/config change is broken on timeline v2. 
> 
>
> Key: YARN-6323
> URL: https://issues.apache.org/jira/browse/YARN-6323
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6323.001.patch
>
>
> Found this issue when deploying on real clusters. If there are apps running 
> when we enable timeline v2 (with work preserving restart enabled), node 
> managers will fail to start due to missing app context data. We should 
> probably assign some default names to these "left over" apps. I believe it's 
> suboptimal to let users clean up the whole cluster before enabling timeline 
> v2. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6610) DominantResourceCalculator.getResourceAsValue() dominant param is no longer appropriate

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120342#comment-16120342
 ] 

Hadoop QA commented on YARN-6610:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-3926 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
48s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} YARN-3926 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common 
generated 0 new + 4569 unchanged - 5 fixed = 4569 total (was 4574) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
21s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6610 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881048/YARN-6610.YARN-3926.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e4258d200d34 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3926 / 1b586d7 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16807/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16807/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DominantResourceCalculator.getResourceAsValue() dominant param is no longer 
> appropriate
> ---
>
> Key: YARN-6610
> URL: https://issues.apache.org/jira/browse/YARN-6610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: 

[jira] [Commented] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-09 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120278#comment-16120278
 ] 

Jason Lowe commented on YARN-6820:
--

bq. DEFAULT_TIMELINE_SERVICE_READ_ALLOWED_USERS should be star *. It is empty 
now.

I do not agree.  The whole point of this JIRA is to block all users from seeing 
the data in the ATS.  The feature already has a master enable that defaults to 
off, so by default all users can read the data.  If a user bothers to flip the 
master enable to on, it should not have zero effect by default.  IMHO once the 
master enable is turned on, it should only allow the configured YARN admins to 
read the data by default, and the config needs to be explicitly updated to 
allow any other users to read.  Therefore I believe an empty value for this 
default is correct.

Speaking of which, the following code can NPE:
{code}
  String adminAclListStr =
  conf.getInitParameter(YarnConfiguration.YARN_ADMIN_ACL);
  if (StringUtils.isEmpty(adminAclListStr)) {
adminAclList = new AccessControlList(
YarnConfiguration.DEFAULT_TIMELINE_SERVICE_READ_ALLOWED_USERS);
LOG.info("adminAclList not set, hence setting it to "
+ " YarnConfiguration.DEFAULT_TIMELINE_SERVICE_READ_ALLOWED_USERS");
  }
  adminAclList = new AccessControlList(adminAclListStr);
{code}
because adminAclListStr is always passed to AccessControlList and could be 
null.  It also doesn't make sense to log a message that references code symbols 
for property values since users won't be familiar with those.  We also 
shouldn't assume that the whitelist reader default makes a good admin default.  
Even if it wasn't empty, we shouldn't assume the default reader list should be 
a default admin list.  Therefore I think it should be simplified to something 
like:
{code}
  String adminAclListStr =
  conf.getInitParameter(YarnConfiguration.YARN_ADMIN_ACL);
  if (StringUtils.isEmpty(adminAclListStr)) {
adminAclListStr = "";
  }
  adminAclList = new AccessControlList(adminAclListStr);
{code}

Same comment applies to the code where we initialize the filter config.  We 
should explicitly set it to "" (or a static final String property specific to 
this filter that has that value) rather than assume default read allowed makes 
a good default admin value.

bq. HttpServletRequest#getRemoteUser() will always be null when we access from 
browsers. I doubt that in a normal browser we always get AuthorizationException. 
However, it is expected if the user is not authenticated. But my doubt is: should 
we get the user from the principal name?

RMWebServices gets the user name from the principal, and I think we would need 
to do the same here.
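
A minimal sketch of that approach, assuming a standard servlet request (only the 
Servlet API calls are real; the method name is illustrative, not the actual patch):

{code}
// Minimal sketch, not the actual patch: derive the caller from the
// authenticated principal, falling back to getRemoteUser().
static String getCallerUserName(javax.servlet.http.HttpServletRequest request) {
  java.security.Principal principal = request.getUserPrincipal();
  if (principal != null) {
    return principal.getName();
  }
  return request.getRemoteUser();   // may still be null if unauthenticated
}
{code}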

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6820-YARN-5355.0001.patch, 
> YARN-6820-YARN-5355.002.patch, YARN-6820-YARN-5355.003.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSv1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6610) DominantResourceCalculator.getResourceAsValue() dominant param is no longer appropriate

2017-08-09 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-6610:
---
Attachment: YARN-6610.YARN-3926.003.patch

Realized that I had the sorts backwards.  New patch attached.

> DominantResourceCalculator.getResourceAsValue() dominant param is no longer 
> appropriate
> ---
>
> Key: YARN-6610
> URL: https://issues.apache.org/jira/browse/YARN-6610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-6610.001.patch, YARN-6610.YARN-3926.002.patch, 
> YARN-6610.YARN-3926.003.patch
>
>
> The {{dominant}} param assumes there are only two resources, i.e. true means 
> to compare the dominant, and false means to compare the subordinate.  Now 
> that there are _n_ resources, this parameter no longer makes sense.
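
One way to picture the n-resource comparison is the conceptual sketch below. It is 
not the attached patch; the caller is assumed to have already computed each 
resource's share of the cluster for both sides.

{code}
// Conceptual sketch only: with n resources, compare the sorted per-resource
// shares instead of using a dominant/subordinate boolean.
static int compareByShares(double[] lhsShares, double[] rhsShares) {
  java.util.Arrays.sort(lhsShares);
  java.util.Arrays.sort(rhsShares);
  // walk from the most dominant share down to the least dominant one
  for (int i = lhsShares.length - 1; i >= 0; i--) {
    int diff = Double.compare(lhsShares[i], rhsShares[i]);
    if (diff != 0) {
      return diff;   // the first differing share decides the ordering
    }
  }
  return 0;
}
{code}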



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6971) Clean up different ways to create resources

2017-08-09 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu reassigned YARN-6971:
--

Assignee: Yufei Gu

> Clean up different ways to create resources
> ---
>
> Key: YARN-6971
> URL: https://issues.apache.org/jira/browse/YARN-6971
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Minor
>  Labels: newbie
>
> There are several ways to create a {{resource}} object, e.g., 
> BuilderUtils.newResource() and Resources.createResource(). These methods not 
> only cause confusion but also performance issues; for example, 
> BuilderUtils.newResource() is significantly slower than 
> Resources.createResource(). 
> We could merge them somehow, and replace most BuilderUtils.newResource() calls 
> with Resources.createResource().
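
For illustration, the two creation paths mentioned above look roughly like this 
(module locations and exact signatures are from memory and should be treated as 
assumptions):

{code}
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.server.utils.BuilderUtils;
import org.apache.hadoop.yarn.util.resource.Resources;

public class ResourceCreationExample {
  public static void main(String[] args) {
    Resource a = Resources.createResource(1024, 1);  // plain static helper
    Resource b = BuilderUtils.newResource(1024, 1);  // goes through the record factory
    System.out.println(a + " " + b);
  }
}
{code}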



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6969) Remove method getMinShareMemoryFraction and getPendingContainers in class FairSchedulerQueueInfo

2017-08-09 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120266#comment-16120266
 ] 

Yufei Gu commented on YARN-6969:


Sure. Feel free to take it. Seems like you aren't a contributor yet. [~rkanter], 
can you add [~LarryLo] as a contributor? Thanks. 
[~LarryLo], once [~rkanter] has added you as a contributor, you can assign it to 
yourself.

> Remove method getMinShareMemoryFraction and getPendingContainers in class 
> FairSchedulerQueueInfo
> 
>
> Key: YARN-6969
> URL: https://issues.apache.org/jira/browse/YARN-6969
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: fairscheduler
>Reporter: Yufei Gu
>Priority: Trivial
>  Labels: newbie++
>
> They are not used anymore.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5146) [YARN-3368] Supports Fair Scheduler in new YARN UI

2017-08-09 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120265#comment-16120265
 ] 

Sunil G commented on YARN-5146:
---

Looks fine. I will commit later today if there are no objections.

> [YARN-3368] Supports Fair Scheduler in new YARN UI
> --
>
> Key: YARN-5146
> URL: https://issues.apache.org/jira/browse/YARN-5146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Abdullah Yousufi
> Attachments: YARN-5146.001.patch, YARN-5146.002.patch, 
> YARN-5146.003.patch, YARN-5146.004.patch
>
>
> The current implementation in branch YARN-3368 only supports the capacity 
> scheduler; we want to make it support the fair scheduler. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6323) Rolling upgrade/config change is broken on timeline v2.

2017-08-09 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120223#comment-16120223
 ] 

Rohith Sharma K S commented on YARN-6323:
-

bq. we can possibly write entities for running apps only to v1 and from new 
apps to v2 so we do not get incomplete app data for some apps from both v1 and 
v2.
This is very hard to enforce from the RM. The RM can't differentiate between 
recovered apps and newly submitted apps. The RM can write to the timeline server 
in non-exclusive mode for some time period. 

> Rolling upgrade/config change is broken on timeline v2. 
> 
>
> Key: YARN-6323
> URL: https://issues.apache.org/jira/browse/YARN-6323
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6323.001.patch
>
>
> Found this issue when deploying on real clusters. If there are apps running 
> when we enable timeline v2 (with work preserving restart enabled), node 
> managers will fail to start due to missing app context data. We should 
> probably assign some default names to these "left over" apps. I believe it's 
> suboptimal to let users clean up the whole cluster before enabling timeline 
> v2. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6903) Yarn-native-service framework core rewrite

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120221#comment-16120221
 ] 

Hadoop QA commented on YARN-6903:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 60 new or modified test 
files. {color} |
|| || || || {color:brown} yarn-native-services Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m  
3s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
32s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
19s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
43s{color} | {color:green} yarn-native-services passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
12s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
31s{color} | {color:green} yarn-native-services passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  6m 50s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn generated 3 new + 133 unchanged - 
5 fixed = 136 total (was 138) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 36s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 517 new + 1521 unchanged - 403 fixed = 2038 total (was 1924) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
25s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
12s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 26 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
17s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 generated 5 new + 0 unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m 
44s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-slider in the patch failed. 

[jira] [Commented] (YARN-6935) ResourceProfilesManagerImpl.parseResource() has no need of the key parameter

2017-08-09 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120209#comment-16120209
 ] 

Manikandan R commented on YARN-6935:


Attached patch for review.

> ResourceProfilesManagerImpl.parseResource() has no need of the key parameter
> 
>
> Key: YARN-6935
> URL: https://issues.apache.org/jira/browse/YARN-6935
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Manikandan R
>  Labels: newbie
> Attachments: YARN-6935-YARN-3926.001.patch
>
>
> The {{key}} parameter is the name of the resource profile being parsed, which 
> is irrelevant to parsing the {{value}} as a {{Resource}} and hence is unused. 
>  It should be removed, and {{value}} should be renamed to something more 
> descriptive.
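
Roughly what the cleaned-up method could look like (a sketch under assumed JSON 
keys, parameter types, and body; not the actual patch):

{code}
// Illustrative only: the profile name adds nothing to parsing the value, so it
// is dropped and the remaining parameter gets a descriptive name. The
// "memory-mb"/"vcores" keys are assumptions about the profiles file format.
private Resource parseResource(Map<String, Object> profileInfo) {
  long memory = ((Number) profileInfo.getOrDefault("memory-mb", 0L)).longValue();
  int vcores = ((Number) profileInfo.getOrDefault("vcores", 0)).intValue();
  return Resource.newInstance(memory, vcores);
}
{code}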



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6935) ResourceProfilesManagerImpl.parseResource() has no need of the key parameter

2017-08-09 Thread Manikandan R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R updated YARN-6935:
---
Attachment: YARN-6935-YARN-3926.001.patch

> ResourceProfilesManagerImpl.parseResource() has no need of the key parameter
> 
>
> Key: YARN-6935
> URL: https://issues.apache.org/jira/browse/YARN-6935
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Manikandan R
>  Labels: newbie
> Attachments: YARN-6935-YARN-3926.001.patch
>
>
> The {{key}} parameter is the name of the resource profile being parsed, which 
> is irrelevant to parsing the {{value}} as a {{Resource}} and hence is unused. 
>  It should be removed, and {{value}} should be renamed to something more 
> descriptive.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6935) ResourceProfilesManagerImpl.parseResource() has no need of the key parameter

2017-08-09 Thread Manikandan R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R reassigned YARN-6935:
--

Assignee: Manikandan R

> ResourceProfilesManagerImpl.parseResource() has no need of the key parameter
> 
>
> Key: YARN-6935
> URL: https://issues.apache.org/jira/browse/YARN-6935
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Manikandan R
>  Labels: newbie
> Attachments: YARN-6935-YARN-3926.001.patch
>
>
> The {{key}} parameter is the name of the resource profile being parsed, which 
> is irrelevant to parsing the {{value}} as a {{Resource}} and hence is unused. 
>  It should be removed, and {{value}} should be renamed to something more 
> descriptive.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6953) Clean up ResourceUtils.setMinimumAllocationForMandatoryResources() and setMaximumAllocationForMandatoryResources()

2017-08-09 Thread Manikandan R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R updated YARN-6953:
---
Attachment: YARN-6953-YARN-3926-WIP.patch

> Clean up ResourceUtils.setMinimumAllocationForMandatoryResources() and 
> setMaximumAllocationForMandatoryResources()
> --
>
> Key: YARN-6953
> URL: https://issues.apache.org/jira/browse/YARN-6953
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Manikandan R
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6953-YARN-3926.001.patch, 
> YARN-6953-YARN-3926.002.patch, YARN-6953-YARN-3926.003.patch, 
> YARN-6953-YARN-3926.004.patch, YARN-6953-YARN-3926-WIP.patch
>
>
> The {{setMinimumAllocationForMandatoryResources()}} and 
> {{setMaximumAllocationForMandatoryResources()}} methods are quite convoluted. 
>  They'd be much simpler if they just handled CPU and memory manually instead 
> of trying to be clever about doing it in a loop.  There are also issues, such 
> as the log warning always talking about memory or the last element of the 
> inner array being a copy of the first element.
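
A rough sketch of the "handle memory and CPU explicitly" shape suggested above is 
below. The YarnConfiguration keys are real constants, but the setMinimumAllocation() 
setter on ResourceInformation is an assumption, and this is not the actual patch.

{code}
// Rough sketch, not the actual patch: set the mandatory minimums explicitly
// instead of looping over an array of resource names.
private static void setMinimumAllocationForMandatoryResources(
    Map<String, ResourceInformation> resources, Configuration conf) {
  long minMem = conf.getLong(
      YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_MB,
      YarnConfiguration.DEFAULT_RM_SCHEDULER_MINIMUM_ALLOCATION_MB);
  long minVcores = conf.getLong(
      YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_VCORES,
      YarnConfiguration.DEFAULT_RM_SCHEDULER_MINIMUM_ALLOCATION_VCORES);
  // setMinimumAllocation() is assumed here for illustration
  resources.get(ResourceInformation.MEMORY_MB.getName())
      .setMinimumAllocation(minMem);
  resources.get(ResourceInformation.VCORES.getName())
      .setMinimumAllocation(minVcores);
}
{code}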



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6953) Clean up ResourceUtils.setMinimumAllocationForMandatoryResources() and setMaximumAllocationForMandatoryResources()

2017-08-09 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120200#comment-16120200
 ] 

Manikandan R commented on YARN-6953:


[~sunilg] Attached WIP patch to make sure changes are in line with our 
discussion. Please review.

> Clean up ResourceUtils.setMinimumAllocationForMandatoryResources() and 
> setMaximumAllocationForMandatoryResources()
> --
>
> Key: YARN-6953
> URL: https://issues.apache.org/jira/browse/YARN-6953
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Manikandan R
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6953-YARN-3926.001.patch, 
> YARN-6953-YARN-3926.002.patch, YARN-6953-YARN-3926.003.patch, 
> YARN-6953-YARN-3926.004.patch
>
>
> The {{setMinimumAllocationForMandatoryResources()}} and 
> {{setMaximumAllocationForMandatoryResources()}} methods are quite convoluted. 
>  They'd be much simpler if they just handled CPU and memory manually instead 
> of trying to be clever about doing it in a loop.  There are also issues, such 
> as the log warning always talking about memory or the last element of the 
> inner array being a copy of the first element.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6892) Improve API implementation in Resources and DominantResourceCalculator in align to ResourceInformation

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120173#comment-16120173
 ] 

Hadoop QA commented on YARN-6892:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-3926 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
38s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
0s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} YARN-3926 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 15 unchanged - 1 fixed = 16 total (was 16) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
28s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6892 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881034/YARN-6892-YARN-3926.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f9abe88ea8c4 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3926 / 1b586d7 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16804/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/16804/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
 |
|  Test Results | 

[jira] [Updated] (YARN-6610) DominantResourceCalculator.getResourceAsValue() dominant param is no longer appropriate

2017-08-09 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-6610:
---
Attachment: YARN-6610.YARN-3926.002.patch

Now that YARN-6788 is in, here's a fresh patch that is significantly better 
optimized.

> DominantResourceCalculator.getResourceAsValue() dominant param is no longer 
> appropriate
> ---
>
> Key: YARN-6610
> URL: https://issues.apache.org/jira/browse/YARN-6610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-6610.001.patch, YARN-6610.YARN-3926.002.patch
>
>
> The {{dominant}} param assumes there are only two resources, i.e. true means 
> to compare the dominant, and false means to compare the subordinate.  Now 
> that there are _n_ resources, this parameter no longer makes sense.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6976) Some containers take a long time in KILLING state after the application is finished.

2017-08-09 Thread Aidi Pi (JIRA)
Aidi Pi created YARN-6976:
-

 Summary: Some containers take a long time in KILLING state after 
the application is finished.
 Key: YARN-6976
 URL: https://issues.apache.org/jira/browse/YARN-6976
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager, resourcemanager
Affects Versions: 2.7.3
 Environment: OS: Ubuntu 16.04, Java: JDK1.8, Docker: 
seqenceid/hadoop-2.4.0
Reporter: Aidi Pi


I use Docker as the container runtime for YARN and ran Spark applications. In 
some runs, the resource manager log indicates that the application is done. 
However, some nodemanager logs indicate that the containers on those nodes are 
still in RUNNING state and then enter KILLING state. They spend a long time 
(about 20s) in KILLING state before being terminated.

In this case, 3 containers were still running after the app entered FINISHED 
state.
Below is the tail of RM and NM logs:

{panel:title=RM log}
2017-08-08 15:11:34,009 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: 
application_1502226348464_0002 State change from FINISHING to FINISHED
{panel}


{panel:title=NM log}
2017-08-08 15:11:51,277 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
 Container container_1502226348464_0002_01_03 transitioned from KILLING to 
EXITED_WITH_SUCCESS
2017-08-08 15:11:51,277 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
 Container container_1502226348464_0002_01_10 transitioned from KILLING to 
EXITED_WITH_SUCCESS
2017-08-08 15:11:51,277 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
 Container container_1502226348464_0002_01_16 transitioned from KILLING to 
EXITED_WITH_FAILURE
2017-08-08 15:11:51,309 INFO 
org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=eddie
OPERATION=Container Finished - SucceededTARGET=ContainerImpl
RESULT=SUCCESS  APPID=application_1502226348464_0002
CONTAINERID=container_1502226348464_0002_01_03
2017-08-08 15:11:51,351 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
 Container container_1502226348464_0002_01_03 transitioned from 
EXITED_WITH_SUCCESS to DONE
2017-08-08 15:11:51,351 INFO 
org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=eddie
OPERATION=Container Finished - SucceededTARGET=ContainerImpl
RESULT=SUCCESS  APPID=application_1502226348464_0002
CONTAINERID=container_1502226348464_0002_01_10
2017-08-08 15:11:51,351 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
 Container container_1502226348464_0002_01_10 transitioned from 
EXITED_WITH_SUCCESS to DONE
2017-08-08 15:11:51,357 WARN 
org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=eddie
OPERATION=Container Finished - Failed   TARGET=ContainerImplRESULT=FAILURE  
DESCRIPTION=Container failed with state: EXITED_WITH_FAILURE
APPID=application_1502226348464_0002
CONTAINERID=container_1502226348464_0002_01_16
2017-08-08 15:11:51,357 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl:
 Container container_1502226348464_0002_01_16 transitioned from 
EXITED_WITH_FAILURE to DONE
{panel}








--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6885) AllocationFileLoaderService.loadQueue() should use a switch statement in the main tag parsing loop instead of the if/else-if/...

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120150#comment-16120150
 ] 

Hadoop QA commented on YARN-6885:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 186 new + 16 unchanged - 2 fixed = 202 total (was 18) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
6s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 35s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Impossible cast from Double to Float in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.loadQueue(String,
 Element, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, 
Set, Set)  At AllocationFileLoaderService.java:Float in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.loadQueue(String,
 Element, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, 
Set, Set)  At AllocationFileLoaderService.java:[line 544] |
|  |  Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.loadQueue(String,
 Element, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, 
Set, Set)  At AllocationFileLoaderService.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.loadQueue(String,
 Element, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, Map, 
Set, Set)  At AllocationFileLoaderService.java:[line 560] |
| Failed junit tests | 

[jira] [Commented] (YARN-6736) Consider writing to both ats v1 & v2 from RM for smoother upgrades

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120088#comment-16120088
 ] 

Hadoop QA commented on YARN-6736:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
49s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-5355 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
53s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
47s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} YARN-5355 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
5s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in YARN-5355 has 8 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} YARN-5355 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 55s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 4 new + 249 unchanged - 0 fixed = 253 total (was 249) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 19s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestResourceTrackerService |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.TestRMHATimelineCollectors |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6736 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881018/YARN-6736-YARN-5355.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ec55c66db527 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5355 / 3088cfc |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 

[jira] [Commented] (YARN-6958) Moving logging APIs over to slf4j in hadoop-yarn-server-timelineservice

2017-08-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120069#comment-16120069
 ] 

Hudson commented on YARN-6958:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12154 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12154/])
YARN-6958. Moving logging APIs over to slf4j in (aajisaka: rev 
63cfcb90ac6fbb79ba9ed6b3044cd999fc74e58c)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollector.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/TimelineReaderServer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/FileSystemTimelineReaderImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/PerNodeTimelineCollectorsAuxService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/TimelineReaderWebServices.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/application/ApplicationTable.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/apptoflow/AppToFlowTable.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowRunCoprocessor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowScanner.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/TimelineSchemaCreator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowRunTable.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowActivityTable.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/entity/EntityTable.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollectorManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollectorWebService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/main/java/org/apache/hadoop/yarn/server/timeline/LevelDBCacheTimelineStore.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/HBaseTimelineReaderImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/filter/TimelineFilterUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/HBaseTimelineStorageUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/TimelineStorageUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/NodeTimelineCollectorManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/common/ColumnHelper.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/AppLevelTimelineCollector.java
* (edit) 

[jira] [Updated] (YARN-6892) Improve API implementation in Resources and DominantResourceCalculator in align to ResourceInformation

2017-08-09 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6892:
--
Attachment: YARN-6892-YARN-3926.002.patch

Updating patch after addressing the comments. Thanks [~leftnoteasy]

> Improve API implementation in Resources and DominantResourceCalculator in 
> align to ResourceInformation
> --
>
> Key: YARN-6892
> URL: https://issues.apache.org/jira/browse/YARN-6892
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-6892-YARN-3926.001.patch, 
> YARN-6892-YARN-3926.002.patch
>
>
> In YARN-3926, the APIs in Resources and DRC spend significant CPU cycles in 
> most of their methods. For better performance, it is better to improve these 
> APIs now that the resource types order is defined at the system level (the 
> ResourceUtils class ensures this post YARN-6788).
> This work is preceding to YARN-6788
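
The performance idea here, roughly: once the set and order of resource types is fixed cluster-wide, per-resource values can sit in an array indexed by that order, so the hot comparison methods iterate by index instead of looking values up by resource name. A hand-wavy sketch with hypothetical types (not the actual Resources/DominantResourceCalculator code):

{code}
import java.util.HashMap;
import java.util.Map;

public class ResourceOrderSketch {
  // values[i] holds the amount of the i-th resource type in the system-wide
  // order (e.g. 0 = memory, 1 = vcores, then any additional types).
  static boolean fitsIn(long[] smaller, long[] bigger) {
    for (int i = 0; i < smaller.length; i++) {
      if (smaller[i] > bigger[i]) {
        return false;
      }
    }
    return true;
  }

  // The pattern being replaced: a name-keyed lookup for every resource checked.
  static boolean fitsIn(Map<String, Long> smaller, Map<String, Long> bigger) {
    for (Map.Entry<String, Long> e : smaller.entrySet()) {
      if (e.getValue() > bigger.getOrDefault(e.getKey(), 0L)) {
        return false;
      }
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println(fitsIn(new long[]{1024, 1}, new long[]{4096, 4}));

    Map<String, Long> ask = new HashMap<>();
    ask.put("memory-mb", 1024L);
    ask.put("vcores", 1L);
    Map<String, Long> avail = new HashMap<>();
    avail.put("memory-mb", 4096L);
    avail.put("vcores", 4L);
    System.out.println(fitsIn(ask, avail));
  }
}
{code}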



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6958) Moving logging APIs over to slf4j in hadoop-yarn-server-timelineservice

2017-08-09 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120046#comment-16120046
 ] 

Akira Ajisaka commented on YARN-6958:
-

Committed this to trunk. Hi [~Cyl], would you provide a patch for branch-2?

> Moving logging APIs over to slf4j in hadoop-yarn-server-timelineservice
> ---
>
> Key: YARN-6958
> URL: https://issues.apache.org/jira/browse/YARN-6958
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6958.001.patch, YARN-6958.002.patch
>
>
> This JIRA moves logging APIs over to slf4j in the following modules:
> {code}
>  hadoop-yarn-server-timeline-pluginstorage 
> hadoop-yarn-server-timelineservice
> hadoop-yarn-server-timelineservice-hbase
> hadoop-yarn-server-timelineservice-hbase-tests 
> {code}
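
For context, the change this JIRA makes is the standard commons-logging to slf4j swap; a minimal before/after sketch (illustrative class name, not copied from the actual patch):

{code}
// Before (commons-logging):
// import org.apache.commons.logging.Log;
// import org.apache.commons.logging.LogFactory;
// private static final Log LOG = LogFactory.getLog(TimelineCollector.class);
// LOG.info("Starting collector for " + appId);

// After (slf4j):
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CollectorLoggingExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(CollectorLoggingExample.class);

  public void start(String appId) {
    // Parameterized messages skip string concatenation when the level is off.
    LOG.info("Starting collector for {}", appId);
  }
}
{code}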



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6958) Moving logging APIs over to slf4j in hadoop-yarn-server-timelineservice

2017-08-09 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-6958:

Target Version/s: 2.9.0, 3.0.0-beta1
   Fix Version/s: 3.0.0-beta1

> Moving logging APIs over to slf4j in hadoop-yarn-server-timelineservice
> ---
>
> Key: YARN-6958
> URL: https://issues.apache.org/jira/browse/YARN-6958
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6958.001.patch, YARN-6958.002.patch
>
>
> This JIRA moves logging APIs over to slf4j in the following modules:
> {code}
>  hadoop-yarn-server-timeline-pluginstorage 
> hadoop-yarn-server-timelineservice
> hadoop-yarn-server-timelineservice-hbase
> hadoop-yarn-server-timelineservice-hbase-tests 
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6958) Moving logging APIs over to slf4j in hadoop-yarn-server-timelineservice

2017-08-09 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120042#comment-16120042
 ] 

Akira Ajisaka commented on YARN-6958:
-

LGTM, +1

> Moving logging APIs over to slf4j in hadoop-yarn-server-timelineservice
> ---
>
> Key: YARN-6958
> URL: https://issues.apache.org/jira/browse/YARN-6958
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
> Attachments: YARN-6958.001.patch, YARN-6958.002.patch
>
>
> This JIRA moves logging APIs over to slf4j in the following modules:
> {code}
>  hadoop-yarn-server-timeline-pluginstorage 
> hadoop-yarn-server-timelineservice
> hadoop-yarn-server-timelineservice-hbase
> hadoop-yarn-server-timelineservice-hbase-tests 
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


