[jira] [Commented] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-10 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121737#comment-16121737
 ] 

Jason Lowe commented on YARN-6820:
--

Thanks for updating the patch!

I do wonder why there's a way to get the user via the principal and a way 
without it.  I noticed that for RMWebServices, despite the fact that 
getCallerUserGroupInformation takes a boolean argument controlling whether to 
use the principal, all of the callers always request the principal.  Wouldn't 
we want the same behavior for the ATSv2 web services?  I don't see why we 
would sometimes want to get the user via the principal and sometimes not.  If 
we should always get the principal, just like RMWebServices does, then we can 
address it in a separate JIRA if desired.
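
For reference, a minimal sketch of the helper under discussion; the shape 
follows RMWebServices' getCallerUserGroupInformation, and any reuse in the 
ATSv2 reader is an assumption:
{code:java}
import java.security.Principal;
import javax.servlet.http.HttpServletRequest;
import org.apache.hadoop.security.UserGroupInformation;

public class CallerUgiSketch {
  // Sketch: resolve the caller's UGI from the HTTP request. When
  // usePrincipal is true, the authenticated principal is used; otherwise
  // the remote user reported by the servlet container is used directly.
  static UserGroupInformation getCallerUserGroupInformation(
      HttpServletRequest hsr, boolean usePrincipal) {
    String remoteUser = hsr.getRemoteUser();
    if (usePrincipal) {
      Principal princ = hsr.getUserPrincipal();
      remoteUser = (princ == null) ? null : princ.getName();
    }
    return (remoteUser == null)
        ? null : UserGroupInformation.createRemoteUser(remoteUser);
  }
}
{code}
If callers always pass usePrincipal = true, the boolean (and the 
non-principal path) could simply be dropped, which is essentially the 
question above.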

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6820-YARN-5355.0001.patch, 
> YARN-6820-YARN-5355.002.patch, YARN-6820-YARN-5355.003.patch, 
> YARN-6820-YARN-5355.004.patch, YARN-6820-YARN-5355.005.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all 
> data; no other user can read any data. The restriction can also be turned 
> off so that all users can read all data.
> It could be stored in a "domain" table in HBase perhaps, or a configuration 
> setting for the cluster, or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading; it would be good to keep 
> that in consideration. 
> In ATSv1, a domain offers a namespace for the Timeline Server, allowing 
> users to host multiple entities, isolating them from other users and 
> applications. A "Domain" in ATSv1 primarily stores owner info, read and 
> write ACL information, and created and modified timestamp information. Each 
> Domain is identified by an ID which must be unique across all users in the 
> YARN cluster.
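
As a rough illustration of the whitelist idea above, a minimal sketch; the 
configuration key names here are assumptions for illustration only:
{code:java}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.conf.Configuration;

public class ReaderWhitelistSketch {
  // Hypothetical configuration keys, for illustration only.
  static final String READ_AUTH_ENABLED =
      "yarn.timeline-service.read.authentication.enabled";
  static final String READ_ALLOWED_USERS =
      "yarn.timeline-service.read.allowed.users";

  private final boolean checkEnabled;
  private final Set<String> allowedUsers;

  ReaderWhitelistSketch(Configuration conf) {
    checkEnabled = conf.getBoolean(READ_AUTH_ENABLED, false);
    allowedUsers = new HashSet<>(
        Arrays.asList(conf.getTrimmedStrings(READ_ALLOWED_USERS)));
  }

  // Whitelisted users can read all data and no other user can read any
  // data; with the check disabled, every user can read everything.
  boolean canRead(String user) {
    return !checkEnabled || allowedUsers.contains(user);
  }
}
{code}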



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5978) ContainerScheduler and Container state machine changes to support ExecType update

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121754#comment-16121754
 ] 

Hadoop QA commented on YARN-5978:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
49s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  0s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 11 new + 403 unchanged - 4 fixed = 414 total (was 407) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
16s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 20s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m  4s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m  1s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.TestContainerManagerRecovery |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | hadoop.yarn.client.api.impl.TestAMRMClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-5978 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881201/YARN-5978.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  

[jira] [Updated] (YARN-6885) AllocationFileLoaderService.loadQueue() should use a switch statement in the main tag parsing loop instead of the if/else-if/...

2017-08-10 Thread Yu-Tang Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu-Tang Lin updated YARN-6885:
--
Attachment: (was: YARN-6885.003.patch)

> AllocationFileLoaderService.loadQueue() should use a switch statement in the 
> main tag parsing loop instead of the if/else-if/...
> 
>
> Key: YARN-6885
> URL: https://issues.apache.org/jira/browse/YARN-6885
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Yu-Tang Lin
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0-alpha4
>
> Attachments: YARN-6885.004.patch
>
>
> {code}  if ("minResources".equals(field.getTagName())) {
> String text = ((Text)field.getFirstChild()).getData().trim();
> Resource val =
> FairSchedulerConfiguration.parseResourceConfigValue(text);
> minQueueResources.put(queueName, val);
>   } else if ("maxResources".equals(field.getTagName())) {
>   ...{code}
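
For illustration, a minimal sketch of the switch-based refactor being 
proposed here, assuming the surrounding loadQueue() variables (field, 
queueName, minQueueResources, maxQueueResources) are in scope:
{code:java}
// Sketch only: the same parsing logic, dispatched via a string switch.
switch (field.getTagName()) {
case "minResources": {
  String text = ((Text) field.getFirstChild()).getData().trim();
  minQueueResources.put(queueName,
      FairSchedulerConfiguration.parseResourceConfigValue(text));
  break;
}
case "maxResources": {
  String text = ((Text) field.getFirstChild()).getData().trim();
  maxQueueResources.put(queueName,
      FairSchedulerConfiguration.parseResourceConfigValue(text));
  break;
}
default:
  // the remaining tags would each get their own case
  break;
}
{code}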






[jira] [Created] (YARN-6983) [YARN-3368] Highlight items on hover in Cluster Overview

2017-08-10 Thread Gergely Novák (JIRA)
Gergely Novák created YARN-6983:
---

 Summary: [YARN-3368] Highlight items on hover in Cluster Overview
 Key: YARN-6983
 URL: https://issues.apache.org/jira/browse/YARN-6983
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: yarn-ui-v2
Reporter: Gergely Novák









[jira] [Created] (YARN-6984) DominantResourceCalculator.isAnyMajorResourceZero() should test all resources

2017-08-10 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-6984:
--

 Summary: DominantResourceCalculator.isAnyMajorResourceZero() 
should test all resources
 Key: YARN-6984
 URL: https://issues.apache.org/jira/browse/YARN-6984
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: scheduler
Affects Versions: YARN-3926
Reporter: Daniel Templeton


The method currently tests only memory and CPU.  It looks to me like it should 
test all resources, i.e. it should do what {{isInvalidDivisor()}} does and 
should, in fact, replace that method.  [~sunilg], since you wrote the method 
originally, can you comment on what its intended semantics are?
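
For illustration, a hedged sketch of what an all-resources check could look 
like, assuming the ResourceInformation-style API from the YARN-3926 branch:
{code:java}
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceInformation;

public class AllResourcesZeroCheckSketch {
  // Sketch: true if any resource type is zero, not just memory or vcores.
  static boolean isAnyMajorResourceZero(Resource r) {
    for (ResourceInformation info : r.getResources()) {
      if (info.getValue() == 0L) {
        return true;
      }
    }
    return false;
  }
}
{code}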






[jira] [Created] (YARN-6985) The wrapper methods in Resources aren't useful

2017-08-10 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-6985:
--

 Summary: The wrapper methods in Resources aren't useful
 Key: YARN-6985
 URL: https://issues.apache.org/jira/browse/YARN-6985
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 3.0.0-alpha4
Reporter: Daniel Templeton


The code would be shorter, easier to read, and a tiny smidgeon faster if we 
just called the {{ResourceCalculator}} methods directly.  I don't see where the 
wrappers improve the code in any way.

For example, with wrappers:{code}Resource normalized = Resources.normalize(
resourceCalculator, ask, minimumResource,
maximumResource, incrementResource);
{code} and without wrappers:{code}Resource normalized = 
resourceCalculator.normalize(ask, minimumResource,
maximumResource, incrementResource);{code}

The difference isn't huge, but I find the latter much more readable.  With the 
former I always have to figure out which parameters are which, because passing 
in the {{ResourceCalculator}} adds in an unrelated additional parameter at the 
head of the list.

There may be some cases where the wrapper methods are mixed in with calls to 
legitimate {{Resources}} methods, making it more consistent to use the 
wrappers. In those cases, that may be a reason to keep and use the wrapper 
methods.






[jira] [Commented] (YARN-65) Reduce RM app memory footprint once app has completed

2017-08-10 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-65?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122872#comment-16122872
 ] 

Bibin A Chundatt commented on YARN-65:
--

{quote}
I think we can improve it setting AMContainerSpec to null rather than setting 
individual fields.
{quote}
We will lose the ApplicationACLs in that case. 
{{RMAppManager#createAndPopulateNewRMApp}}:
{code}
// Inform the ACLs Manager
this.applicationACLsManager.addApplication(applicationId,
    submissionContext.getAMContainerSpec().getApplicationACLs());
{code}
Any idea why the ApplicationACLs were set on the AM container launch context?
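
A minimal sketch of the trade-off being discussed: capture the ACLs before 
dropping the AMContainerSpec, so nulling it out does not lose them (the 
method and field placement are illustrative, not the actual patch):
{code:java}
import java.util.Map;
import org.apache.hadoop.yarn.api.records.ApplicationAccessType;
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;

public class AclPreservingCleanupSketch {
  private Map<ApplicationAccessType, String> savedAcls;

  // Copy the ACLs out of the AM container launch context before releasing
  // the (potentially large) AMContainerSpec once the app completes.
  void releaseAmContainerSpec(ApplicationSubmissionContext ctx) {
    savedAcls = ctx.getAMContainerSpec().getApplicationACLs();
    ctx.setAMContainerSpec(null); // drops the launch-context protobufs
  }
}
{code}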

> Reduce RM app memory footprint once app has completed
> -
>
> Key: YARN-65
> URL: https://issues.apache.org/jira/browse/YARN-65
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 0.23.3
>Reporter: Jason Lowe
>Assignee: Manikandan R
> Attachments: YARN-65.001.patch, YARN-65.002.patch, YARN-65.003.patch, 
> YARN-65.004.patch, YARN-65.005.patch, YARN-65.006.patch, YARN-65.007.patch
>
>
> The ResourceManager holds onto a configurable number of completed 
> applications (yarn.resourcemanager.max-completed-applications, which 
> defaults to 10,000), and the memory footprint of these completed 
> applications can be significant.  
> For example, the {{submissionContext}} in RMAppImpl contains references to 
> protocolbuffer objects and other items that probably aren't necessary to keep 
> around once the application has completed.  We could significantly reduce the 
> memory footprint of the RM by releasing objects that are no longer necessary 
> once an application completes.






[jira] [Commented] (YARN-6741) Deleting all children of a Parent Queue on refresh throws exception

2017-08-10 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122885#comment-16122885
 ] 

Bibin A Chundatt commented on YARN-6741:


[~naganarasimha...@apache.org]
Could you recheck the failed test cases?

> Deleting all children of a Parent Queue on refresh throws exception
> ---
>
> Key: YARN-6741
> URL: https://issues.apache.org/jira/browse/YARN-6741
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Affects Versions: 3.0.0-alpha3
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: YARN-6741.001.patch, YARN-6741.002.patch, 
> YARN-6741.003.patch, YARN-6741.004.patch, YARN-6741.005.patch
>
>
> If we configure CS such that all children of a parent queue are deleted and 
> it is made a leaf queue, then the {{refreshQueue}} operation fails when 
> re-initializing the parent queue:
> {code}
>// Sanity check
>   if (!(newlyParsedQueue instanceof ParentQueue) || !newlyParsedQueue
>   .getQueuePath().equals(getQueuePath())) {
> throw new IOException(
> "Trying to reinitialize " + getQueuePath() + " from "
> + newlyParsedQueue.getQueuePath());
>   }
> {code}
> *Expected Behavior:*
> Converting a parent queue to a leaf queue on {{refreshQueue}} needs to be 
> supported.
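
A hedged sketch of how the sanity check above could be relaxed to permit the 
conversion (illustrative only; the real handling would live in the 
scheduler's refresh path):
{code:java}
// Sketch: keep the queue-path check, but let a type change fall through
// to an explicit conversion path instead of failing re-initialization.
if (!newlyParsedQueue.getQueuePath().equals(getQueuePath())) {
  throw new IOException(
      "Trying to reinitialize " + getQueuePath() + " from "
          + newlyParsedQueue.getQueuePath());
}
if (!(newlyParsedQueue instanceof ParentQueue)) {
  // All children were removed from the config: convert this queue to a
  // leaf queue here rather than throwing.
}
{code}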






[jira] [Commented] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-10 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121170#comment-16121170
 ] 

Jian He commented on YARN-6959:
---

bq. we had better rename it to getCurrentApplicationAttempt
Yep, would you like to rename it in this patch?

> RM may allocate wrong AM Container for new attempt
> --
>
> Key: YARN-6959
> URL: https://issues.apache.org/jira/browse/YARN-6959
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, fairscheduler, scheduler
>Affects Versions: 2.7.1
>Reporter: Yuqi Wang
>Assignee: Yuqi Wang
>  Labels: patch
> Fix For: 2.7.1, 3.0.0-alpha4
>
> Attachments: YARN-6959.001.patch, YARN-6959.002.patch, 
> YARN-6959.003.patch, YARN-6959.004.patch, YARN-6959.005.patch, 
> YARN-6959-branch-2.7.001.patch, YARN-6959.yarn_nm.log.zip, 
> YARN-6959.yarn_rm.log.zip
>
>
> *Issue Summary:*
> A previous attempt's ResourceRequests may be recorded into the current 
> attempt's ResourceRequests. These mis-recorded ResourceRequests may confuse 
> the AM container request and allocation for the current attempt.
> *Issue Pipeline:*
> {code:java}
> // Executing precondition check for the incoming attempt id.
> ApplicationMasterService.allocate() ->
> scheduler.allocate(attemptId, ask, ...) ->
> // The precondition check for the attempt id may be outdated here, i.e. the
> // currentAttempt may not be the corresponding attempt of the attemptId;
> // e.g. the attempt id may correspond to the previous attempt.
> currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
> // The previous attempt's ResourceRequests may be recorded into the current
> // attempt's ResourceRequests.
> currentAttempt.updateResourceRequests(ask) ->
> // The RM may allocate the wrong AM container for the current attempt,
> // because its ResourceRequests may come from the previous attempt (which
> // can be any ResourceRequests the previous AM asked for), and there is no
> // matching logic between the original AM container ResourceRequest and the
> // returned amContainerAllocation below.
> AMContainerAllocatedTransition.transition(...) ->
> amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
> {code}
> *Patch Correctness:*
> After this patch, the RM will definitely record ResourceRequests from 
> different attempts into different 
> SchedulerApplicationAttempt.AppSchedulingInfo objects.
> So, even if the RM still records ResourceRequests from an old attempt at any 
> time, those ResourceRequests will be recorded in the old AppSchedulingInfo 
> object, which will not impact the current attempt's resource requests and 
> allocation.
> *Concerns:*
> The getApplicationAttempt function in AbstractYarnScheduler is confusing; we 
> had better rename it to getCurrentApplicationAttempt, and reconsider whether 
> there are any other bugs related to getApplicationAttempt.
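
For illustration, a minimal sketch of the kind of guard the pipeline above 
calls for, assuming scheduler-side names such as EMPTY_ALLOCATION (this is a 
sketch, not the actual patch):
{code:java}
// Sketch: only apply the ask if the heartbeat's attempt id still matches
// the application's current attempt; otherwise the ask belongs to a
// previous attempt and must not leak into the new attempt's requests.
SchedulerApplicationAttempt attempt = getApplicationAttempt(attemptId);
if (attempt == null
    || !attempt.getApplicationAttemptId().equals(attemptId)) {
  LOG.info("Ignoring allocate() from outdated attempt " + attemptId);
  return EMPTY_ALLOCATION;
}
attempt.updateResourceRequests(ask);
{code}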






[jira] [Commented] (YARN-6883) AllocationFileLoaderService.reloadAllocations() should use a switch statement in the main tag parsing loop instead of the if/else-if/...

2017-08-10 Thread Larry Lo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121176#comment-16121176
 ] 

Larry Lo commented on YARN-6883:


Thanks to Daniel for filing this issue. I would like to take this task, thanks!

> AllocationFileLoaderService.reloadAllocations() should use a switch statement 
> in the main tag parsing loop instead of the if/else-if/...
> 
>
> Key: YARN-6883
> URL: https://issues.apache.org/jira/browse/YARN-6883
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Priority: Minor
>  Labels: newbie
>
> {code}if ("queue".equals(element.getTagName()) ||
>   "pool".equals(element.getTagName())) {
>   queueElements.add(element);
> } else if ("user".equals(element.getTagName())) {
> ...{code}
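
A minimal sketch of the switch form for this loop, with the legacy "pool" 
alias sharing the "queue" case (surrounding variables assumed to be in scope):
{code:java}
// Sketch only: string switch with fall-through for the "pool" alias.
switch (element.getTagName()) {
case "queue":
case "pool":
  queueElements.add(element);
  break;
case "user":
  // per-user settings handled here, as in the original else-if branch
  break;
default:
  break;
}
{code}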






[jira] [Commented] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-10 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121119#comment-16121119
 ] 

Jian He commented on YARN-6959:
---

Yes, I agree it is possible, but it may happen rarely, since the NM and RM 
also have a heartbeat interval.  The fix is fine; I'm just wondering whether 
there are other issues behind this, otherwise the fix will just hide them, if 
any.
Btw, did this happen in a real cluster?

> RM may allocate wrong AM Container for new attempt
> --
>
> Key: YARN-6959
> URL: https://issues.apache.org/jira/browse/YARN-6959
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, fairscheduler, scheduler
>Affects Versions: 2.7.1
>Reporter: Yuqi Wang
>Assignee: Yuqi Wang
>  Labels: patch
> Fix For: 2.7.1, 3.0.0-alpha4
>
> Attachments: YARN-6959.001.patch, YARN-6959.002.patch, 
> YARN-6959.003.patch, YARN-6959.004.patch, YARN-6959.005.patch, 
> YARN-6959-branch-2.7.001.patch, YARN-6959.yarn_nm.log.zip, 
> YARN-6959.yarn_rm.log.zip
>
>
> *Issue Summary:*
> A previous attempt's ResourceRequests may be recorded into the current 
> attempt's ResourceRequests. These mis-recorded ResourceRequests may confuse 
> the AM container request and allocation for the current attempt.
> *Issue Pipeline:*
> {code:java}
> // Executing precondition check for the incoming attempt id.
> ApplicationMasterService.allocate() ->
> scheduler.allocate(attemptId, ask, ...) ->
> // The precondition check for the attempt id may be outdated here, i.e. the
> // currentAttempt may not be the corresponding attempt of the attemptId;
> // e.g. the attempt id may correspond to the previous attempt.
> currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
> // The previous attempt's ResourceRequests may be recorded into the current
> // attempt's ResourceRequests.
> currentAttempt.updateResourceRequests(ask) ->
> // The RM may allocate the wrong AM container for the current attempt,
> // because its ResourceRequests may come from the previous attempt (which
> // can be any ResourceRequests the previous AM asked for), and there is no
> // matching logic between the original AM container ResourceRequest and the
> // returned amContainerAllocation below.
> AMContainerAllocatedTransition.transition(...) ->
> amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
> {code}
> *Patch Correctness:*
> After this patch, the RM will definitely record ResourceRequests from 
> different attempts into different 
> SchedulerApplicationAttempt.AppSchedulingInfo objects.
> So, even if the RM still records ResourceRequests from an old attempt at any 
> time, those ResourceRequests will be recorded in the old AppSchedulingInfo 
> object, which will not impact the current attempt's resource requests and 
> allocation.
> *Concerns:*
> The getApplicationAttempt function in AbstractYarnScheduler is confusing; we 
> had better rename it to getCurrentApplicationAttempt, and reconsider whether 
> there are any other bugs related to getApplicationAttempt.






[jira] [Updated] (YARN-6631) Refactor loader.js in new Yarn UI

2017-08-10 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6631:
--
Summary: Refactor loader.js in new Yarn UI  (was: Refactor loader.js in new 
YARN-UI)

> Refactor loader.js in new Yarn UI
> -
>
> Key: YARN-6631
> URL: https://issues.apache.org/jira/browse/YARN-6631
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-6631.001.patch
>
>
> The current loader.js file overwrites all ENV properties configured in the 
> config.env file other than "rmWebAdderss" and "timelineWebAddress". This 
> ticket is meant to fix that issue.






[jira] [Commented] (YARN-6631) Refactor loader.js in new YARN-UI

2017-08-10 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121129#comment-16121129
 ] 

Sunil G commented on YARN-6631:
---

Patch looks fine. Committing the same.

> Refactor loader.js in new YARN-UI
> -
>
> Key: YARN-6631
> URL: https://issues.apache.org/jira/browse/YARN-6631
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-6631.001.patch
>
>
> The current loader.js file overwrites all ENV properties configured in the 
> config.env file other than "rmWebAdderss" and "timelineWebAddress". This 
> ticket is meant to fix that issue.






[jira] [Updated] (YARN-6885) AllocationFileLoaderService.loadQueue() should use a switch statement in the main tag parsing loop instead of the if/else-if/...

2017-08-10 Thread Yu-Tang Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu-Tang Lin updated YARN-6885:
--
Attachment: YARN-6885.003.patch

> AllocationFileLoaderService.loadQueue() should use a switch statement in the 
> main tag parsing loop instead of the if/else-if/...
> 
>
> Key: YARN-6885
> URL: https://issues.apache.org/jira/browse/YARN-6885
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Yu-Tang Lin
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0-alpha4
>
>
> {code}  if ("minResources".equals(field.getTagName())) {
> String text = ((Text)field.getFirstChild()).getData().trim();
> Resource val =
> FairSchedulerConfiguration.parseResourceConfigValue(text);
> minQueueResources.put(queueName, val);
>   } else if ("maxResources".equals(field.getTagName())) {
>   ...{code}






[jira] [Updated] (YARN-6885) AllocationFileLoaderService.loadQueue() should use a switch statement in the main tag parsing loop instead of the if/else-if/...

2017-08-10 Thread Yu-Tang Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu-Tang Lin updated YARN-6885:
--
Attachment: (was: YARN-6885.002.patch)

> AllocationFileLoaderService.loadQueue() should use a switch statement in the 
> main tag parsing loop instead of the if/else-if/...
> 
>
> Key: YARN-6885
> URL: https://issues.apache.org/jira/browse/YARN-6885
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Yu-Tang Lin
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0-alpha4
>
>
> {code}  if ("minResources".equals(field.getTagName())) {
> String text = ((Text)field.getFirstChild()).getData().trim();
> Resource val =
> FairSchedulerConfiguration.parseResourceConfigValue(text);
> minQueueResources.put(queueName, val);
>   } else if ("maxResources".equals(field.getTagName())) {
>   ...{code}






[jira] [Updated] (YARN-6885) AllocationFileLoaderService.loadQueue() should use a switch statement in the main tag parsing loop instead of the if/else-if/...

2017-08-10 Thread Yu-Tang Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu-Tang Lin updated YARN-6885:
--
Attachment: (was: 0001-YARN-6885.patch)

> AllocationFileLoaderService.loadQueue() should use a switch statement in the 
> main tag parsing loop instead of the if/else-if/...
> 
>
> Key: YARN-6885
> URL: https://issues.apache.org/jira/browse/YARN-6885
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Yu-Tang Lin
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0-alpha4
>
>
> {code}  if ("minResources".equals(field.getTagName())) {
> String text = ((Text)field.getFirstChild()).getData().trim();
> Resource val =
> FairSchedulerConfiguration.parseResourceConfigValue(text);
> minQueueResources.put(queueName, val);
>   } else if ("maxResources".equals(field.getTagName())) {
>   ...{code}






[jira] [Comment Edited] (YARN-6885) AllocationFileLoaderService.loadQueue() should use a switch statement in the main tag parsing loop instead of the if/else-if/...

2017-08-10 Thread Yu-Tang Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121154#comment-16121154
 ] 

Yu-Tang Lin edited comment on YARN-6885 at 8/10/17 6:45 AM:


patch updated!


was (Author: yu-tang lin):
Minor refactoring of the old code; no new test case is needed.

> AllocationFileLoaderService.loadQueue() should use a switch statement in the 
> main tag parsing loop instead of the if/else-if/...
> 
>
> Key: YARN-6885
> URL: https://issues.apache.org/jira/browse/YARN-6885
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Yu-Tang Lin
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0-alpha4
>
> Attachments: YARN-6885.003.patch
>
>
> {code}  if ("minResources".equals(field.getTagName())) {
> String text = ((Text)field.getFirstChild()).getData().trim();
> Resource val =
> FairSchedulerConfiguration.parseResourceConfigValue(text);
> minQueueResources.put(queueName, val);
>   } else if ("maxResources".equals(field.getTagName())) {
>   ...{code}






[jira] [Commented] (YARN-6133) [ATSv2 Security] Renew delegation token for app automatically if an app collector is active

2017-08-10 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121122#comment-16121122
 ] 

Varun Saxena commented on YARN-6133:


Thanks [~jianhe] and [~rohithsharma] for the review and commit.
Next patch to be reviewed is YARN-6134. :)
I will invoke QA for it shortly.

> [ATSv2 Security] Renew delegation token for app automatically if an app 
> collector is active
> ---
>
> Key: YARN-6133
> URL: https://issues.apache.org/jira/browse/YARN-6133
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6133-YARN-5355.01.patch, 
> YARN-6133-YARN-5355.02.patch, YARN-6133-YARN-5355.03.patch, 
> YARN-6133-YARN-5355.04.patch
>
>







[jira] [Updated] (YARN-6134) [ATSv2 Security] Regenerate delegation token for app just before token expires if app collector is active

2017-08-10 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-6134:
---
Attachment: YARN-6134-YARN-5355.02.patch

There is a conflict with the currently uploaded patch after YARN-6133 went in.
Uploading a new patch.

> [ATSv2 Security] Regenerate delegation token for app just before token 
> expires if app collector is active
> -
>
> Key: YARN-6134
> URL: https://issues.apache.org/jira/browse/YARN-6134
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6134-YARN-5355.01.patch, 
> YARN-6134-YARN-5355.02.patch
>
>







[jira] [Commented] (YARN-6631) Refactor loader.js in new Yarn UI

2017-08-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121159#comment-16121159
 ] 

Hudson commented on YARN-6631:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12158 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12158/])
YARN-6631. Refactor loader.js in new Yarn UI. Contributed by Akhil P B. 
(sunilg: rev 8d953c2359c5b12cf5b1f3c14be3ff5bb74242d0)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/initializers/loader.js


> Refactor loader.js in new Yarn UI
> -
>
> Key: YARN-6631
> URL: https://issues.apache.org/jira/browse/YARN-6631
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Akhil PB
>Assignee: Akhil PB
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6631.001.patch
>
>
> The current loader.js file overwrites all ENV properties configured in the 
> config.env file other than "rmWebAdderss" and "timelineWebAddress". This 
> ticket is meant to fix that issue.






[jira] [Commented] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-10 Thread Yuqi Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121146#comment-16121146
 ] 

Yuqi Wang commented on YARN-6959:
-

Yes, it is very rare. It is the first time I have seen it in our large cluster.

The log was from our production cluster.
We have a very large cluster (>50k nodes) which serves daily batch jobs and 
long-running services for our customers at Microsoft.

Our customers complained that their job just failed without any effective 
retry attempts.
As the log showed, the AM container size decreased from 20GB to 5GB, so the 
new attempt will definitely fail, since the pmem limit is enabled.

As I said in this JIRA's description:
Concerns:
The getApplicationAttempt function in AbstractYarnScheduler is confusing; we 
had better rename it to getCurrentApplicationAttempt, and reconsider whether 
there are any other bugs related to getApplicationAttempt.



> RM may allocate wrong AM Container for new attempt
> --
>
> Key: YARN-6959
> URL: https://issues.apache.org/jira/browse/YARN-6959
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, fairscheduler, scheduler
>Affects Versions: 2.7.1
>Reporter: Yuqi Wang
>Assignee: Yuqi Wang
>  Labels: patch
> Fix For: 2.7.1, 3.0.0-alpha4
>
> Attachments: YARN-6959.001.patch, YARN-6959.002.patch, 
> YARN-6959.003.patch, YARN-6959.004.patch, YARN-6959.005.patch, 
> YARN-6959-branch-2.7.001.patch, YARN-6959.yarn_nm.log.zip, 
> YARN-6959.yarn_rm.log.zip
>
>
> *Issue Summary:*
> A previous attempt's ResourceRequests may be recorded into the current 
> attempt's ResourceRequests. These mis-recorded ResourceRequests may confuse 
> the AM container request and allocation for the current attempt.
> *Issue Pipeline:*
> {code:java}
> // Executing precondition check for the incoming attempt id.
> ApplicationMasterService.allocate() ->
> scheduler.allocate(attemptId, ask, ...) ->
> // The precondition check for the attempt id may be outdated here, i.e. the
> // currentAttempt may not be the corresponding attempt of the attemptId;
> // e.g. the attempt id may correspond to the previous attempt.
> currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
> // The previous attempt's ResourceRequests may be recorded into the current
> // attempt's ResourceRequests.
> currentAttempt.updateResourceRequests(ask) ->
> // The RM may allocate the wrong AM container for the current attempt,
> // because its ResourceRequests may come from the previous attempt (which
> // can be any ResourceRequests the previous AM asked for), and there is no
> // matching logic between the original AM container ResourceRequest and the
> // returned amContainerAllocation below.
> AMContainerAllocatedTransition.transition(...) ->
> amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
> {code}
> *Patch Correctness:*
> After this patch, the RM will definitely record ResourceRequests from 
> different attempts into different 
> SchedulerApplicationAttempt.AppSchedulingInfo objects.
> So, even if the RM still records ResourceRequests from an old attempt at any 
> time, those ResourceRequests will be recorded in the old AppSchedulingInfo 
> object, which will not impact the current attempt's resource requests and 
> allocation.
> *Concerns:*
> The getApplicationAttempt function in AbstractYarnScheduler is confusing; we 
> had better rename it to getCurrentApplicationAttempt, and reconsider whether 
> there are any other bugs related to getApplicationAttempt.






[jira] [Updated] (YARN-6885) AllocationFileLoaderService.loadQueue() should use a switch statement in the main tag parsing loop instead of the if/else-if/...

2017-08-10 Thread Yu-Tang Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu-Tang Lin updated YARN-6885:
--
Attachment: YARN-6885.003.patch

Minor refactoring of the old code; no new test case is needed.

> AllocationFileLoaderService.loadQueue() should use a switch statement in the 
> main tag parsing loop instead of the if/else-if/...
> 
>
> Key: YARN-6885
> URL: https://issues.apache.org/jira/browse/YARN-6885
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Yu-Tang Lin
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0-alpha4
>
> Attachments: YARN-6885.003.patch
>
>
> {code}  if ("minResources".equals(field.getTagName())) {
> String text = ((Text)field.getFirstChild()).getData().trim();
> Resource val =
> FairSchedulerConfiguration.parseResourceConfigValue(text);
> minQueueResources.put(queueName, val);
>   } else if ("maxResources".equals(field.getTagName())) {
>   ...{code}






[jira] [Updated] (YARN-6885) AllocationFileLoaderService.loadQueue() should use a switch statement in the main tag parsing loop instead of the if/else-if/...

2017-08-10 Thread Yu-Tang Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu-Tang Lin updated YARN-6885:
--
Attachment: (was: YARN-6885.003.patch)

> AllocationFileLoaderService.loadQueue() should use a switch statement in the 
> main tag parsing loop instead of the if/else-if/...
> 
>
> Key: YARN-6885
> URL: https://issues.apache.org/jira/browse/YARN-6885
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Yu-Tang Lin
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0-alpha4
>
>
> {code}  if ("minResources".equals(field.getTagName())) {
> String text = ((Text)field.getFirstChild()).getData().trim();
> Resource val =
> FairSchedulerConfiguration.parseResourceConfigValue(text);
> minQueueResources.put(queueName, val);
>   } else if ("maxResources".equals(field.getTagName())) {
>   ...{code}






[jira] [Issue Comment Deleted] (YARN-6885) AllocationFileLoaderService.loadQueue() should use a switch statement in the main tag parsing loop instead of the if/else-if/...

2017-08-10 Thread Yu-Tang Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu-Tang Lin updated YARN-6885:
--
Comment: was deleted

(was: Minor refactoring of the old code; no new test case is needed.)

> AllocationFileLoaderService.loadQueue() should use a switch statement in the 
> main tag parsing loop instead of the if/else-if/...
> 
>
> Key: YARN-6885
> URL: https://issues.apache.org/jira/browse/YARN-6885
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Yu-Tang Lin
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0-alpha4
>
> Attachments: YARN-6885.003.patch
>
>
> {code}  if ("minResources".equals(field.getTagName())) {
> String text = ((Text)field.getFirstChild()).getData().trim();
> Resource val =
> FairSchedulerConfiguration.parseResourceConfigValue(text);
> minQueueResources.put(queueName, val);
>   } else if ("maxResources".equals(field.getTagName())) {
>   ...{code}






[jira] [Commented] (YARN-6134) [ATSv2 Security] Regenerate delegation token for app just before token expires if app collector is active

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121165#comment-16121165
 ] 

Hadoop QA commented on YARN-6134:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-5355 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 0s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
48s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
26s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} YARN-5355 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 48s{color} 
| {color:red} hadoop-yarn-server-tests in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.TestMiniYarnClusterNodeUtilization |
|   | hadoop.yarn.server.TestContainerManagerSecurity |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6134 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881146/YARN-6134-YARN-5355.02.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ce375ae431e0 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5355 / 7b2cb06 |
| 

[jira] [Commented] (YARN-6882) AllocationFileLoaderService.reloadAllocations() should use the diamond operator

2017-08-10 Thread Larry Lo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121182#comment-16121182
 ] 

Larry Lo commented on YARN-6882:


Thanks to Daniel for filing this issue. I would like to take this task, thanks!

> AllocationFileLoaderService.reloadAllocations() should use the diamond 
> operator
> ---
>
> Key: YARN-6882
> URL: https://issues.apache.org/jira/browse/YARN-6882
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Priority: Trivial
>  Labels: newbie
>
> Here:{code}for (FSQueueType queueType : FSQueueType.values()) {
>   configuredQueues.put(queueType, new HashSet<String>());
> }{code} and here:{code}List<Element> queueElements = new 
> ArrayList<Element>();{code}
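
With the diamond operator, the two spots would read roughly as follows:
{code:java}
for (FSQueueType queueType : FSQueueType.values()) {
  configuredQueues.put(queueType, new HashSet<>());
}
List<Element> queueElements = new ArrayList<>();
{code}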






[jira] [Commented] (YARN-6134) [ATSv2 Security] Regenerate delegation token for app just before token expires if app collector is active

2017-08-10 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121190#comment-16121190
 ] 

Varun Saxena commented on YARN-6134:


The test failures are due to outstanding issues on trunk.
cc [~jianhe], [~rohithsharma]

> [ATSv2 Security] Regenerate delegation token for app just before token 
> expires if app collector is active
> -
>
> Key: YARN-6134
> URL: https://issues.apache.org/jira/browse/YARN-6134
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6134-YARN-5355.01.patch, 
> YARN-6134-YARN-5355.02.patch
>
>







[jira] [Commented] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-10 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121198#comment-16121198
 ] 

Jian He commented on YARN-6959:
---

bq. And if I just change getApplicationAttempt to getCurrentApplicationAttempt, 
it is more likely to hide the bugs.
I don't follow; it's just a rename refactoring, so how would it add or hide 
bugs? Anyway, there look to be a bunch of callers, so better not to do it 
here, as it would affect other ongoing activities.
Would you mind adding a comment on the getApplicationAttempt method to explain 
its behavior?
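
A hedged sketch of such a doc comment; the method body below paraphrases 
AbstractYarnScheduler and should be treated as illustrative:
{code:java}
/**
 * Note: despite its name, this returns the *current* attempt of the
 * application that the given attempt id belongs to, which may differ
 * from the attempt identified by applicationAttemptId.
 */
public T getApplicationAttempt(ApplicationAttemptId applicationAttemptId) {
  SchedulerApplication<T> app =
      applications.get(applicationAttemptId.getApplicationId());
  return app == null ? null : app.getCurrentAppAttempt();
}
{code}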

> RM may allocate wrong AM Container for new attempt
> --
>
> Key: YARN-6959
> URL: https://issues.apache.org/jira/browse/YARN-6959
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, fairscheduler, scheduler
>Affects Versions: 2.7.1
>Reporter: Yuqi Wang
>Assignee: Yuqi Wang
>  Labels: patch
> Fix For: 2.7.1, 3.0.0-alpha4
>
> Attachments: YARN-6959.001.patch, YARN-6959.002.patch, 
> YARN-6959.003.patch, YARN-6959.004.patch, YARN-6959.005.patch, 
> YARN-6959-branch-2.7.001.patch, YARN-6959.yarn_nm.log.zip, 
> YARN-6959.yarn_rm.log.zip
>
>
> *Issue Summary:*
> A previous attempt's ResourceRequests may be recorded into the current 
> attempt's ResourceRequests. These mis-recorded ResourceRequests may confuse 
> the AM container request and allocation for the current attempt.
> *Issue Pipeline:*
> {code:java}
> // Executing precondition check for the incoming attempt id.
> ApplicationMasterService.allocate() ->
> scheduler.allocate(attemptId, ask, ...) ->
> // The precondition check for the attempt id may be outdated here, i.e. the
> // currentAttempt may not be the corresponding attempt of the attemptId;
> // e.g. the attempt id may correspond to the previous attempt.
> currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
> // The previous attempt's ResourceRequests may be recorded into the current
> // attempt's ResourceRequests.
> currentAttempt.updateResourceRequests(ask) ->
> // The RM may allocate the wrong AM container for the current attempt,
> // because its ResourceRequests may come from the previous attempt (which
> // can be any ResourceRequests the previous AM asked for), and there is no
> // matching logic between the original AM container ResourceRequest and the
> // returned amContainerAllocation below.
> AMContainerAllocatedTransition.transition(...) ->
> amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
> {code}
> *Patch Correctness:*
> After this patch, the RM will definitely record ResourceRequests from 
> different attempts into different 
> SchedulerApplicationAttempt.AppSchedulingInfo objects.
> So, even if the RM still records ResourceRequests from an old attempt at any 
> time, those ResourceRequests will be recorded in the old AppSchedulingInfo 
> object, which will not impact the current attempt's resource requests and 
> allocation.
> *Concerns:*
> The getApplicationAttempt function in AbstractYarnScheduler is confusing; we 
> had better rename it to getCurrentApplicationAttempt, and reconsider whether 
> there are any other bugs related to getApplicationAttempt.






[jira] [Commented] (YARN-6958) Moving logging APIs over to slf4j in hadoop-yarn-server-timelineservice

2017-08-10 Thread Yeliang Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121272#comment-16121272
 ] 

Yeliang Cang commented on YARN-6958:


Thanks for the review, [~ajisakaa]! A branch-2 patch is attached; please take a look.

> Moving logging APIs over to slf4j in hadoop-yarn-server-timelineservice
> ---
>
> Key: YARN-6958
> URL: https://issues.apache.org/jira/browse/YARN-6958
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6958.001.patch, YARN-6958.002.patch, 
> YARN-6958-branch-2.001.patch
>
>
> This jira moves logging APIS over to slf4j in the following modules:
> {code}
>  hadoop-yarn-server-timeline-pluginstorage 
> hadoop-yarn-server-timelineservice
> hadoop-yarn-server-timelineservice-hbase
> hadoop-yarn-server-timelineservice-hbase-tests 
> {code}
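
For reference, the shape of the migration in these modules looks roughly like 
this (the class name is illustrative):
{code:java}
// Before: commons-logging
//   private static final Log LOG = LogFactory.getLog(Example.class);

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Example {
  private static final Logger LOG = LoggerFactory.getLogger(Example.class);

  void start(int port) {
    // slf4j's {} placeholders defer string construction until the log
    // level is actually enabled.
    LOG.info("Starting timeline reader on port {}", port);
  }
}
{code}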






[jira] [Comment Edited] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-10 Thread Yuqi Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121193#comment-16121193
 ] 

Yuqi Wang edited comment on YARN-6959 at 8/10/17 7:34 AM:
--

As with this issue, other places that call getApplicationAttempt may also 
want the attempt specified in the argument instead of the current attempt.
And if I just change getApplicationAttempt to getCurrentApplicationAttempt, it 
is more likely to hide the bugs.
I think, for this JIRA only, I will not touch getApplicationAttempt until we 
have confirmed that all places using getApplicationAttempt are safe.


was (Author: yqwang):
As with this issue, other places that call getApplicationAttempt may also 
want the attempt specified in the argument instead of the current attempt.
And if I just change getApplicationAttempt to getCurrentApplicationAttempt, it 
is more likely to hide the bugs.
I think, for this fix only, I will not touch getApplicationAttempt until we 
have confirmed that all places using getApplicationAttempt are safe.

> RM may allocate wrong AM Container for new attempt
> --
>
> Key: YARN-6959
> URL: https://issues.apache.org/jira/browse/YARN-6959
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, fairscheduler, scheduler
>Affects Versions: 2.7.1
>Reporter: Yuqi Wang
>Assignee: Yuqi Wang
>  Labels: patch
> Fix For: 2.7.1, 3.0.0-alpha4
>
> Attachments: YARN-6959.001.patch, YARN-6959.002.patch, 
> YARN-6959.003.patch, YARN-6959.004.patch, YARN-6959.005.patch, 
> YARN-6959-branch-2.7.001.patch, YARN-6959.yarn_nm.log.zip, 
> YARN-6959.yarn_rm.log.zip
>
>
> *Issue Summary:*
> A previous attempt's ResourceRequests may be recorded into the current 
> attempt's ResourceRequests. These mis-recorded ResourceRequests may confuse 
> the AM container request and allocation for the current attempt.
> *Issue Pipeline:*
> {code:java}
> // Executing precondition check for the incoming attempt id.
> ApplicationMasterService.allocate() ->
> scheduler.allocate(attemptId, ask, ...) ->
> // The earlier precondition check for the attempt id may be outdated here,
> // i.e. currentAttempt may not be the attempt corresponding to attemptId;
> // for example, attemptId may refer to the previous attempt.
> currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
> // A previous attempt's ResourceRequests may be recorded into the current
> // attempt's ResourceRequests.
> currentAttempt.updateResourceRequests(ask) ->
> // The RM may allocate the wrong AM container for the current attempt,
> // because its ResourceRequests may come from the previous attempt (which
> // can be any ResourceRequests the previous AM asked for), and there is no
> // matching logic between the original AM container ResourceRequest and
> // the returned amContainerAllocation below.
> AMContainerAllocatedTransition.transition(...) ->
> amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
> {code}
> *Patch Correctness:*
> After this patch, the RM records ResourceRequests from different attempts 
> into different SchedulerApplicationAttempt.AppSchedulingInfo objects.
> So even if the RM still records ResourceRequests from an old attempt at any 
> time, they will land in the old AppSchedulingInfo object and will not impact 
> the current attempt's resource requests and allocation.
> *Concerns:*
> The getApplicationAttempt function in AbstractYarnScheduler is confusing; we 
> had better rename it to getCurrentApplicationAttempt and reconsider whether 
> there are any other bugs related to getApplicationAttempt.
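
To make the "Patch Correctness" point concrete, here is a minimal schematic of per-attempt isolation; the class and types below are illustrative stand-ins, not the actual RM internals:

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical schematic, not the real scheduler classes: keying the
// scheduling info by attempt id means a stale ask recorded under
// appattempt_..._000001 can never pollute appattempt_..._000002.
class PerAttemptAsks {
  private final Map<String, List<String>> asksByAttemptId =
      new ConcurrentHashMap<>();

  void updateResourceRequests(String attemptId, List<String> asks) {
    asksByAttemptId
        .computeIfAbsent(attemptId, id -> new ArrayList<>())
        .addAll(asks);
  }

  List<String> requestsFor(String attemptId) {
    return Collections.unmodifiableList(
        asksByAttemptId.getOrDefault(attemptId, Collections.emptyList()));
  }
}
{code}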



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-10 Thread Yuqi Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121193#comment-16121193
 ] 

Yuqi Wang commented on YARN-6959:
-

As with this issue, other callers of getApplicationAttempt may also expect the 
attempt specified in the argument rather than the current attempt.
If I simply changed getApplicationAttempt to getCurrentApplicationAttempt, it 
would be more likely to hide such bugs.
So, for this fix only, I will not touch getApplicationAttempt until we have 
confirmed that all callers of getApplicationAttempt are safe.

> RM may allocate wrong AM Container for new attempt
> --
>
> Key: YARN-6959
> URL: https://issues.apache.org/jira/browse/YARN-6959
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, fairscheduler, scheduler
>Affects Versions: 2.7.1
>Reporter: Yuqi Wang
>Assignee: Yuqi Wang
>  Labels: patch
> Fix For: 2.7.1, 3.0.0-alpha4
>
> Attachments: YARN-6959.001.patch, YARN-6959.002.patch, 
> YARN-6959.003.patch, YARN-6959.004.patch, YARN-6959.005.patch, 
> YARN-6959-branch-2.7.001.patch, YARN-6959.yarn_nm.log.zip, 
> YARN-6959.yarn_rm.log.zip
>
>
> *Issue Summary:*
> A previous attempt's ResourceRequests may be recorded into the current 
> attempt's ResourceRequests. These mis-recorded ResourceRequests may confuse 
> the AM container request and allocation for the current attempt.
> *Issue Pipeline:*
> {code:java}
> // Executing precondition check for the incoming attempt id.
> ApplicationMasterService.allocate() ->
> scheduler.allocate(attemptId, ask, ...) ->
> // The earlier precondition check for the attempt id may be outdated here,
> // i.e. currentAttempt may not be the attempt corresponding to attemptId;
> // for example, attemptId may refer to the previous attempt.
> currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
> // A previous attempt's ResourceRequests may be recorded into the current
> // attempt's ResourceRequests.
> currentAttempt.updateResourceRequests(ask) ->
> // The RM may allocate the wrong AM container for the current attempt,
> // because its ResourceRequests may come from the previous attempt (which
> // can be any ResourceRequests the previous AM asked for), and there is no
> // matching logic between the original AM container ResourceRequest and
> // the returned amContainerAllocation below.
> AMContainerAllocatedTransition.transition(...) ->
> amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
> {code}
> *Patch Correctness:*
> After this patch, the RM records ResourceRequests from different attempts 
> into different SchedulerApplicationAttempt.AppSchedulingInfo objects.
> So even if the RM still records ResourceRequests from an old attempt at any 
> time, they will land in the old AppSchedulingInfo object and will not impact 
> the current attempt's resource requests and allocation.
> *Concerns:*
> The getApplicationAttempt function in AbstractYarnScheduler is confusing; we 
> had better rename it to getCurrentApplicationAttempt and reconsider whether 
> there are any other bugs related to getApplicationAttempt.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-10 Thread Yuqi Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121193#comment-16121193
 ] 

Yuqi Wang edited comment on YARN-6959 at 8/10/17 7:34 AM:
--

As with this issue, other callers of getApplicationAttempt may also expect the 
attempt specified in the argument rather than the current attempt.
If I simply changed getApplicationAttempt to getCurrentApplicationAttempt, it 
would be more likely to hide such bugs.
So, for this JIRA only, I will not touch getApplicationAttempt until we have 
confirmed that all callers of getApplicationAttempt are bug-free.


was (Author: yqwang):
As with this issue, other callers of getApplicationAttempt may also expect the 
attempt specified in the argument rather than the current attempt.
If I simply changed getApplicationAttempt to getCurrentApplicationAttempt, it 
would be more likely to hide such bugs.
So, for this JIRA only, I will not touch getApplicationAttempt until we have 
confirmed that all callers of getApplicationAttempt are safe.

> RM may allocate wrong AM Container for new attempt
> --
>
> Key: YARN-6959
> URL: https://issues.apache.org/jira/browse/YARN-6959
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, fairscheduler, scheduler
>Affects Versions: 2.7.1
>Reporter: Yuqi Wang
>Assignee: Yuqi Wang
>  Labels: patch
> Fix For: 2.7.1, 3.0.0-alpha4
>
> Attachments: YARN-6959.001.patch, YARN-6959.002.patch, 
> YARN-6959.003.patch, YARN-6959.004.patch, YARN-6959.005.patch, 
> YARN-6959-branch-2.7.001.patch, YARN-6959.yarn_nm.log.zip, 
> YARN-6959.yarn_rm.log.zip
>
>
> *Issue Summary:*
> A previous attempt's ResourceRequests may be recorded into the current 
> attempt's ResourceRequests. These mis-recorded ResourceRequests may confuse 
> the AM container request and allocation for the current attempt.
> *Issue Pipeline:*
> {code:java}
> // Executing precondition check for the incoming attempt id.
> ApplicationMasterService.allocate() ->
> scheduler.allocate(attemptId, ask, ...) ->
> // The earlier precondition check for the attempt id may be outdated here,
> // i.e. currentAttempt may not be the attempt corresponding to attemptId;
> // for example, attemptId may refer to the previous attempt.
> currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
> // A previous attempt's ResourceRequests may be recorded into the current
> // attempt's ResourceRequests.
> currentAttempt.updateResourceRequests(ask) ->
> // The RM may allocate the wrong AM container for the current attempt,
> // because its ResourceRequests may come from the previous attempt (which
> // can be any ResourceRequests the previous AM asked for), and there is no
> // matching logic between the original AM container ResourceRequest and
> // the returned amContainerAllocation below.
> AMContainerAllocatedTransition.transition(...) ->
> amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
> {code}
> *Patch Correctness:*
> After this patch, the RM records ResourceRequests from different attempts 
> into different SchedulerApplicationAttempt.AppSchedulingInfo objects.
> So even if the RM still records ResourceRequests from an old attempt at any 
> time, they will land in the old AppSchedulingInfo object and will not impact 
> the current attempt's resource requests and allocation.
> *Concerns:*
> The getApplicationAttempt function in AbstractYarnScheduler is confusing; we 
> had better rename it to getCurrentApplicationAttempt and reconsider whether 
> there are any other bugs related to getApplicationAttempt.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-10 Thread Yuqi Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121202#comment-16121202
 ] 

Yuqi Wang edited comment on YARN-6959 at 8/10/17 7:44 AM:
--

I already added a comment on it in the patch:
// TODO: Rename it to getCurrentApplicationAttempt

I think that makes it clear. What do you think?


was (Author: yqwang):
I already added a comment on it:
// TODO: Rename it to getCurrentApplicationAttempt

I think that makes it clear. What do you think?

> RM may allocate wrong AM Container for new attempt
> --
>
> Key: YARN-6959
> URL: https://issues.apache.org/jira/browse/YARN-6959
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, fairscheduler, scheduler
>Affects Versions: 2.7.1
>Reporter: Yuqi Wang
>Assignee: Yuqi Wang
>  Labels: patch
> Fix For: 2.7.1, 3.0.0-alpha4
>
> Attachments: YARN-6959.001.patch, YARN-6959.002.patch, 
> YARN-6959.003.patch, YARN-6959.004.patch, YARN-6959.005.patch, 
> YARN-6959-branch-2.7.001.patch, YARN-6959.yarn_nm.log.zip, 
> YARN-6959.yarn_rm.log.zip
>
>
> *Issue Summary:*
> A previous attempt's ResourceRequests may be recorded into the current 
> attempt's ResourceRequests. These mis-recorded ResourceRequests may confuse 
> the AM container request and allocation for the current attempt.
> *Issue Pipeline:*
> {code:java}
> // Executing precondition check for the incoming attempt id.
> ApplicationMasterService.allocate() ->
> scheduler.allocate(attemptId, ask, ...) ->
> // The earlier precondition check for the attempt id may be outdated here,
> // i.e. currentAttempt may not be the attempt corresponding to attemptId;
> // for example, attemptId may refer to the previous attempt.
> currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
> // A previous attempt's ResourceRequests may be recorded into the current
> // attempt's ResourceRequests.
> currentAttempt.updateResourceRequests(ask) ->
> // The RM may allocate the wrong AM container for the current attempt,
> // because its ResourceRequests may come from the previous attempt (which
> // can be any ResourceRequests the previous AM asked for), and there is no
> // matching logic between the original AM container ResourceRequest and
> // the returned amContainerAllocation below.
> AMContainerAllocatedTransition.transition(...) ->
> amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
> {code}
> *Patch Correctness:*
> After this patch, the RM records ResourceRequests from different attempts 
> into different SchedulerApplicationAttempt.AppSchedulingInfo objects.
> So even if the RM still records ResourceRequests from an old attempt at any 
> time, they will land in the old AppSchedulingInfo object and will not impact 
> the current attempt's resource requests and allocation.
> *Concerns:*
> The getApplicationAttempt function in AbstractYarnScheduler is confusing; we 
> had better rename it to getCurrentApplicationAttempt and reconsider whether 
> there are any other bugs related to getApplicationAttempt.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-10 Thread Yuqi Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121202#comment-16121202
 ] 

Yuqi Wang commented on YARN-6959:
-

I already added a comment on it:
// TODO: Rename it to getCurrentApplicationAttempt

I think that makes it clear. What do you think?

> RM may allocate wrong AM Container for new attempt
> --
>
> Key: YARN-6959
> URL: https://issues.apache.org/jira/browse/YARN-6959
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, fairscheduler, scheduler
>Affects Versions: 2.7.1
>Reporter: Yuqi Wang
>Assignee: Yuqi Wang
>  Labels: patch
> Fix For: 2.7.1, 3.0.0-alpha4
>
> Attachments: YARN-6959.001.patch, YARN-6959.002.patch, 
> YARN-6959.003.patch, YARN-6959.004.patch, YARN-6959.005.patch, 
> YARN-6959-branch-2.7.001.patch, YARN-6959.yarn_nm.log.zip, 
> YARN-6959.yarn_rm.log.zip
>
>
> *Issue Summary:*
> A previous attempt's ResourceRequests may be recorded into the current 
> attempt's ResourceRequests. These mis-recorded ResourceRequests may confuse 
> the AM container request and allocation for the current attempt.
> *Issue Pipeline:*
> {code:java}
> // Executing precondition check for the incoming attempt id.
> ApplicationMasterService.allocate() ->
> scheduler.allocate(attemptId, ask, ...) ->
> // The earlier precondition check for the attempt id may be outdated here,
> // i.e. currentAttempt may not be the attempt corresponding to attemptId;
> // for example, attemptId may refer to the previous attempt.
> currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
> // A previous attempt's ResourceRequests may be recorded into the current
> // attempt's ResourceRequests.
> currentAttempt.updateResourceRequests(ask) ->
> // The RM may allocate the wrong AM container for the current attempt,
> // because its ResourceRequests may come from the previous attempt (which
> // can be any ResourceRequests the previous AM asked for), and there is no
> // matching logic between the original AM container ResourceRequest and
> // the returned amContainerAllocation below.
> AMContainerAllocatedTransition.transition(...) ->
> amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
> {code}
> *Patch Correctness:*
> After this patch, the RM records ResourceRequests from different attempts 
> into different SchedulerApplicationAttempt.AppSchedulingInfo objects.
> So even if the RM still records ResourceRequests from an old attempt at any 
> time, they will land in the old AppSchedulingInfo object and will not impact 
> the current attempt's resource requests and allocation.
> *Concerns:*
> The getApplicationAttempt function in AbstractYarnScheduler is confusing; we 
> had better rename it to getCurrentApplicationAttempt and reconsider whether 
> there are any other bugs related to getApplicationAttempt.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6323) Rolling upgrade/config change is broken on timeline v2.

2017-08-10 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121240#comment-16121240
 ] 

Rohith Sharma K S commented on YARN-6323:
-

Going back through the whole discussion on this JIRA: creating a default flow 
context and publishing container entities is not useful unless the RM and NM 
both create the same flow context. If we go ahead with a default flow context 
where the RM uses the appName as the flowName while the NM uses the appId, the 
two are written into separate rows. From the user's perspective, container 
entities then cannot be retrieved at all unless the appId is given as the 
flowName. Given that this is acceptable for applications already running 
during an upgrade, creating the default context makes sense.
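
To illustrate the mismatch, here is a sketch with a simplified row-key layout; the class and key format are assumptions for illustration, not the actual ATSv2 code:

{code:java}
// Hypothetical illustration, not the actual ATSv2 row-key code: the flow
// name is part of the entity row key, so RM and NM defaults that disagree
// put the same application's entities under different rows.
final class FlowContextKeySketch {
  static String rowKey(String cluster, String user,
                       String flowName, long flowRunId, String appId) {
    return String.join("!", cluster, user, flowName,
        Long.toString(flowRunId), appId);
  }

  public static void main(String[] args) {
    // RM default: flowName = appName; NM default: flowName = appId
    System.out.println(rowKey("c1", "alice", "myApp", 1L,
        "application_1_0001"));
    System.out.println(rowKey("c1", "alice", "application_1_0001", 1L,
        "application_1_0001"));
  }
}
{code}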

> Rolling upgrade/config change is broken on timeline v2. 
> 
>
> Key: YARN-6323
> URL: https://issues.apache.org/jira/browse/YARN-6323
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6323.001.patch
>
>
> Found this issue when deploying on real clusters. If there are apps running 
> when we enable timeline v2 (with work preserving restart enabled), node 
> managers will fail to start due to missing app context data. We should 
> probably assign some default names to these "left over" apps. I believe it's 
> suboptimal to let users clean up the whole cluster before enabling timeline 
> v2. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6885) AllocationFileLoaderService.loadQueue() should use a switch statement in the main tag parsing loop instead of the if/else-if/...

2017-08-10 Thread Yu-Tang Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121246#comment-16121246
 ] 

Yu-Tang Lin commented on YARN-6885:
---

Since this patch is a minor refactoring of two functions in 
AllocationFileLoaderService and no interface is changed, the existing tests 
should cover this change.

> AllocationFileLoaderService.loadQueue() should use a switch statement in the 
> main tag parsing loop instead of the if/else-if/...
> 
>
> Key: YARN-6885
> URL: https://issues.apache.org/jira/browse/YARN-6885
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Yu-Tang Lin
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0-alpha4
>
> Attachments: YARN-6885.003.patch
>
>
> {code}  if ("minResources".equals(field.getTagName())) {
> String text = ((Text)field.getFirstChild()).getData().trim();
> Resource val =
> FairSchedulerConfiguration.parseResourceConfigValue(text);
> minQueueResources.put(queueName, val);
>   } else if ("maxResources".equals(field.getTagName())) {
>   ...{code}
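
For reference, a sketch of the switch form being proposed; {{maxQueueResources}} and the surrounding variables are assumed to exist as in the original method, and this is not the committed patch:

{code:java}
// Hypothetical sketch of the switch-based tag dispatch (Java 7+ string
// switch), replacing the if/else-if chain quoted above.
switch (field.getTagName()) {
  case "minResources": {
    String text = ((Text) field.getFirstChild()).getData().trim();
    minQueueResources.put(queueName,
        FairSchedulerConfiguration.parseResourceConfigValue(text));
    break;
  }
  case "maxResources": {
    String text = ((Text) field.getFirstChild()).getData().trim();
    maxQueueResources.put(queueName,
        FairSchedulerConfiguration.parseResourceConfigValue(text));
    break;
  }
  default:
    // remaining tags handled as before
    break;
}
{code}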



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6958) Moving logging APIs over to slf4j in hadoop-yarn-server-timelineservice

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121302#comment-16121302
 ] 

Hadoop QA commented on YARN-6958:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
51s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage-jdk1.8.0_131
 with JDK v1.8.0_131 generated 0 new + 0 unchanged - 3 fixed = 0 total (was 3) 
{color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage-jdk1.7.0_131
 with JDK v1.7.0_131 generated 0 new + 0 unchanged - 3 fixed = 0 total (was 3) 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
1s{color} | {color:green} hadoop-yarn-server-timeline-pluginstorage in the 
patch passed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5e40efe |
| JIRA Issue | YARN-6958 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881167/YARN-6958-branch-2.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 19ae10607f8a 

[jira] [Commented] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-10 Thread Yuqi Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121196#comment-16121196
 ] 

Yuqi Wang commented on YARN-6959:
-

The renaming can be done in the next Hadoop version.

> RM may allocate wrong AM Container for new attempt
> --
>
> Key: YARN-6959
> URL: https://issues.apache.org/jira/browse/YARN-6959
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, fairscheduler, scheduler
>Affects Versions: 2.7.1
>Reporter: Yuqi Wang
>Assignee: Yuqi Wang
>  Labels: patch
> Fix For: 2.7.1, 3.0.0-alpha4
>
> Attachments: YARN-6959.001.patch, YARN-6959.002.patch, 
> YARN-6959.003.patch, YARN-6959.004.patch, YARN-6959.005.patch, 
> YARN-6959-branch-2.7.001.patch, YARN-6959.yarn_nm.log.zip, 
> YARN-6959.yarn_rm.log.zip
>
>
> *Issue Summary:*
> A previous attempt's ResourceRequests may be recorded into the current 
> attempt's ResourceRequests. These mis-recorded ResourceRequests may confuse 
> the AM container request and allocation for the current attempt.
> *Issue Pipeline:*
> {code:java}
> // Executing precondition check for the incoming attempt id.
> ApplicationMasterService.allocate() ->
> scheduler.allocate(attemptId, ask, ...) ->
> // The earlier precondition check for the attempt id may be outdated here,
> // i.e. currentAttempt may not be the attempt corresponding to attemptId;
> // for example, attemptId may refer to the previous attempt.
> currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
> // A previous attempt's ResourceRequests may be recorded into the current
> // attempt's ResourceRequests.
> currentAttempt.updateResourceRequests(ask) ->
> // The RM may allocate the wrong AM container for the current attempt,
> // because its ResourceRequests may come from the previous attempt (which
> // can be any ResourceRequests the previous AM asked for), and there is no
> // matching logic between the original AM container ResourceRequest and
> // the returned amContainerAllocation below.
> AMContainerAllocatedTransition.transition(...) ->
> amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
> {code}
> *Patch Correctness:*
> After this patch, the RM records ResourceRequests from different attempts 
> into different SchedulerApplicationAttempt.AppSchedulingInfo objects.
> So even if the RM still records ResourceRequests from an old attempt at any 
> time, they will land in the old AppSchedulingInfo object and will not impact 
> the current attempt's resource requests and allocation.
> *Concerns:*
> The getApplicationAttempt function in AbstractYarnScheduler is confusing; we 
> had better rename it to getCurrentApplicationAttempt and reconsider whether 
> there are any other bugs related to getApplicationAttempt.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6885) AllocationFileLoaderService.loadQueue() should use a switch statement in the main tag parsing loop instead of the if/else-if/...

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121218#comment-16121218
 ] 

Hadoop QA commented on YARN-6885:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 197 new + 16 unchanged - 2 fixed = 213 total (was 18) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 44m 
39s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6885 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881152/YARN-6885.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f121c26d4a6d 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8d953c2 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16818/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16818/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16818/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> AllocationFileLoaderService.loadQueue() should use a switch statement in the 
> main 

[jira] [Updated] (YARN-6958) Moving logging APIs over to slf4j in hadoop-yarn-server-timelineservice

2017-08-10 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated YARN-6958:
---
Attachment: YARN-6958-branch-2.001.patch

> Moving logging APIs over to slf4j in hadoop-yarn-server-timelineservice
> ---
>
> Key: YARN-6958
> URL: https://issues.apache.org/jira/browse/YARN-6958
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6958.001.patch, YARN-6958.002.patch, 
> YARN-6958-branch-2.001.patch
>
>
> This jira moves logging APIs over to slf4j in the following modules:
> {code}
>  hadoop-yarn-server-timeline-pluginstorage 
> hadoop-yarn-server-timelineservice
> hadoop-yarn-server-timelineservice-hbase
> hadoop-yarn-server-timelineservice-hbase-tests 
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM

2017-08-10 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-6130:
---
Attachment: YARN-6130-YARN-5355_branch2.02.patch

Updating patch after fixing javadoc and checkstyle issues

> [ATSv2 Security] Generate a delegation token for AM when app collector is 
> created and pass it to AM via NM and RM
> -
>
> Key: YARN-6130
> URL: https://issues.apache.org/jira/browse/YARN-6130
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6130-YARN-5355.01.patch, 
> YARN-6130-YARN-5355.02.patch, YARN-6130-YARN-5355.03.patch, 
> YARN-6130-YARN-5355.04.patch, YARN-6130-YARN-5355.05.patch, 
> YARN-6130-YARN-5355.06.patch, YARN-6130-YARN-5355_branch2.01.patch, 
> YARN-6130-YARN-5355_branch2.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6874) Supplement timestamp for min start/max end time columns in flow run table to avoid overwrite

2017-08-10 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-6874:
---
Description: 
The following test case is failing in the YARN-5355 branch.
This happens because, post YARN-6850, we are not supplementing the timestamp 
for the FlowRunColumn columns, i.e. min_start_time and max_end_time, which can 
lead to a clash if two writes for app-created events happen at the same time, 
as is the case in this test.
To fix this, we need to pass a true flag into the ColumnHelper constructor. I 
did encounter this failure once earlier too.

{noformat}
testWriteFlowRunMinMax(org.apache.hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun)
  Time elapsed: 0.088 sec  <<< FAILURE!
java.lang.AssertionError: expected:<142502690> but was:<1425026901000>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun.testWriteFlowRunMinMax(TestHBaseStorageFlowRun.java:237)
{noformat}

  was:
{noformat}
testWriteFlowRunMinMax(org.apache.hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun)
  Time elapsed: 0.088 sec  <<< FAILURE!
java.lang.AssertionError: expected:<142502690> but was:<1425026901000>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun.testWriteFlowRunMinMax(TestHBaseStorageFlowRun.java:237)
{noformat}


> Supplement timestamp for min start/max end time columns in flow run table to 
> avoid overwrite
> 
>
> Key: YARN-6874
> URL: https://issues.apache.org/jira/browse/YARN-6874
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Vrushali C
> Fix For: YARN-5355, YARN-5355-branch-2
>
> Attachments: YARN-6874-YARN-5355.0001.patch
>
>
> The following test case is failing in the YARN-5355 branch.
> This happens because, post YARN-6850, we are not supplementing the timestamp 
> for the FlowRunColumn columns, i.e. min_start_time and max_end_time, which 
> can lead to a clash if two writes for app-created events happen at the same 
> time, as is the case in this test.
> To fix this, we need to pass a true flag into the ColumnHelper constructor. 
> I did encounter this failure once earlier too.
> {noformat}
> testWriteFlowRunMinMax(org.apache.hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun)
>   Time elapsed: 0.088 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<142502690> but was:<1425026901000>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun.testWriteFlowRunMinMax(TestHBaseStorageFlowRun.java:237)
> {noformat}
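
For intuition, here is a rough sketch of what "supplementing" the timestamp means; the multiplier scheme and class name are assumptions for illustration, not the actual ColumnHelper code:

{code:java}
// Hypothetical illustration: HBase keeps one cell per (row, column,
// timestamp), so two writes in the same millisecond overwrite each other.
// Multiplying the timestamp and mixing in app-specific low bits gives
// concurrent writers distinct cell timestamps.
final class SupplementedTimestampSketch {
  private static final long TS_MULTIPLIER = 1_000_000L; // assumed value

  static long supplement(long millis, String appId) {
    long suffix = (appId.hashCode() & 0x7fffffff) % TS_MULTIPLIER;
    return millis * TS_MULTIPLIER + suffix;
  }

  static long original(long supplemented) { // invert on the read path
    return supplemented / TS_MULTIPLIER;
  }
}
{code}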



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6958) Moving logging APIs over to slf4j in hadoop-yarn-server-timelineservice

2017-08-10 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121596#comment-16121596
 ] 

Akira Ajisaka commented on YARN-6958:
-

+1, checking this in.

> Moving logging APIs over to slf4j in hadoop-yarn-server-timelineservice
> ---
>
> Key: YARN-6958
> URL: https://issues.apache.org/jira/browse/YARN-6958
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6958.001.patch, YARN-6958.002.patch, 
> YARN-6958-branch-2.001.patch
>
>
> This jira moves logging APIs over to slf4j in the following modules:
> {code}
>  hadoop-yarn-server-timeline-pluginstorage 
> hadoop-yarn-server-timelineservice
> hadoop-yarn-server-timelineservice-hbase
> hadoop-yarn-server-timelineservice-hbase-tests 
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-65) Reduce RM app memory footprint once app has completed

2017-08-10 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-65?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121747#comment-16121747
 ] 

Rohith Sharma K S commented on YARN-65:
---

In an offline discussion, [~Naganarasimha] pointed out that the fields are 
cleared only after the application has finished. That is reasonable.
I think we can improve it by setting the AMContainerSpec to null rather than 
clearing individual fields.
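
A minimal sketch of the suggestion; where exactly this hook lives in RMAppImpl is an assumption:

{code:java}
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;

// Hypothetical helper, not the actual RMAppImpl code: once the application
// reaches a terminal state, drop the whole AM launch context (commands,
// environment, local resources) instead of nulling individual fields.
final class CompletedAppCleaner {
  static void clearFootprint(ApplicationSubmissionContext ctx) {
    ctx.setAMContainerSpec(null);
  }
}
{code}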

> Reduce RM app memory footprint once app has completed
> -
>
> Key: YARN-65
> URL: https://issues.apache.org/jira/browse/YARN-65
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 0.23.3
>Reporter: Jason Lowe
>Assignee: Manikandan R
> Attachments: YARN-65.001.patch, YARN-65.002.patch, YARN-65.003.patch, 
> YARN-65.004.patch, YARN-65.005.patch, YARN-65.006.patch, YARN-65.007.patch
>
>
> The ResourceManager holds onto a configurable number of completed 
> applications (yarn.resource.max-completed-applications, defaults to 1), 
> and the memory footprint of these completed applications can be significant.  
> For example, the {{submissionContext}} in RMAppImpl contains references to 
> protocolbuffer objects and other items that probably aren't necessary to keep 
> around once the application has completed.  We could significantly reduce the 
> memory footprint of the RM by releasing objects that are no longer necessary 
> once an application completes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6885) AllocationFileLoaderService.loadQueue() should use a switch statement in the main tag parsing loop instead of the if/else-if/...

2017-08-10 Thread Yu-Tang Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu-Tang Lin updated YARN-6885:
--
Attachment: YARN-6885.004.patch

> AllocationFileLoaderService.loadQueue() should use a switch statement in the 
> main tag parsing loop instead of the if/else-if/...
> 
>
> Key: YARN-6885
> URL: https://issues.apache.org/jira/browse/YARN-6885
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Yu-Tang Lin
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0-alpha4
>
> Attachments: YARN-6885.003.patch, YARN-6885.004.patch
>
>
> {code}  if ("minResources".equals(field.getTagName())) {
> String text = ((Text)field.getFirstChild()).getData().trim();
> Resource val =
> FairSchedulerConfiguration.parseResourceConfigValue(text);
> minQueueResources.put(queueName, val);
>   } else if ("maxResources".equals(field.getTagName())) {
>   ...{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5978) ContainerScheduler and Container state machine changes to support ExecType update

2017-08-10 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121781#comment-16121781
 ] 

Arun Suresh edited comment on YARN-5978 at 8/10/17 3:21 PM:


Thanks for the patch [~kartheek].

Couple of nits:
* I think you can roll back the changes to hadoop-yarn-server-common/pom.xml
* In {{TestAMRMClient}}, some of the changes, like the one on line 994 where 
the timeout is commented out, can be rolled back.

Also, in the {{ContainerScheduler::onUpdateContainer()}} method, you should 
add {{killOpportunisticContainers(updateEvent.getContainer())}} before line 
203 to ensure that any running opportunistic containers are killed to make 
room for this promoted container. Which reminds me:
I think we need one more test case, probably in 
{{TestContainerSchedulerQueuing}}, to test the above.
Essentially, we should have a situation where the NM is full, with a bunch of 
opportunistic containers running and some opportunistic containers queued. The 
test case should then promote a queued opportunistic container, and we should 
verify that it starts running and that one or more of the running 
opportunistic containers are killed to make room.


was (Author: asuresh):
Thanks for the patch [~kartheek].

Couple of nits:
* I think you can roll back the changes to hadoop-yarn-server-common/pom.xml
* In {{TestAMRMClient}}, some of the changes, like the one on line 994 where 
the timeout is commented out, can be rolled back.

Also, in the {{ContainerScheduler::onUpdateContainer()}} method, you should 
add {{killOpportunisticContainers()}} before line 203 to ensure that any 
running opportunistic containers are killed to make room for this promoted 
container. Which reminds me:
I think we need one more test case, probably in 
{{TestContainerSchedulerQueuing}}, to test the above.
Essentially, we should have a situation where the NM is full, with a bunch of 
opportunistic containers running and some opportunistic containers queued. The 
test case should then promote a queued opportunistic container, and we should 
verify that it starts running and that one or more of the running 
opportunistic containers are killed to make room.

> ContainerScheduler and Container state machine changes to support ExecType 
> update
> -
>
> Key: YARN-5978
> URL: https://issues.apache.org/jira/browse/YARN-5978
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: kartheek muthyala
> Attachments: YARN-5978.001.patch, YARN-5978.002.patch
>
>
> ContainerScheduler should support updateContainer API for
> - Container Resource update
> - ExecType update that can change an opportunistic to guaranteed and 
> vice-versa
> Adding a new ContainerState event, UpdateContainerStateEvent to support 
> UPDATE_CONTAINER call from RM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6905) Multiple test failures due to FastNumberFormat

2017-08-10 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121863#comment-16121863
 ] 

Haibo Chen commented on YARN-6905:
--

I am more supportive of overriding Application.toString() internally, and will 
upload a patch based on that idea. Do you see major issues with upgrading to 
HBase 2.0 once it comes out?
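
A sketch of what such an internal override could look like; this is an assumption about the approach, not the uploaded patch:

{code:java}
import org.apache.hadoop.yarn.api.records.ApplicationId;

// Hypothetical sketch: build the "application_<clusterTimestamp>_<NNNN>"
// string locally so the HBase-side classpath does not need
// org.apache.hadoop.util.FastNumberFormat from a newer hadoop-common.
final class AppIdStringSketch {
  static String format(ApplicationId appId) {
    return String.format("application_%d_%04d",
        appId.getClusterTimestamp(), appId.getId());
  }
}
{code}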

> Multiple test failures due to FastNumberFormat
> --
>
> Key: YARN-6905
> URL: https://issues.apache.org/jira/browse/YARN-6905
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 3.0.0-beta1
> Environment: Ubuntu 14.04 
> x86, ppc64le
> $ java -version
> openjdk version "1.8.0_111"
> OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14)
> OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode)
>Reporter: Sonia Garudi
>Assignee: Haibo Chen
>
> There are multiple tests failing in the Hadoop YARN Timeline Service HBase 
> tests project with the following error:
> {code}
> java.lang.NoClassDefFoundError: org/apache/hadoop/util/FastNumberFormat
> at 
> org.apache.hadoop.yarn.api.records.ApplicationId.toString(ApplicationId.java:104)
> {code}
> Below are the failing tests :
> {code}
>   TestHBaseTimelineStorageApps.testWriteApplicationToHBase
>   TestHBaseTimelineStorageApps.testEvents
>   TestHBaseTimelineStorageEntities.testEventsEscapeTs
>   TestHBaseTimelineStorageEntities.testWriteEntityToHBase
>   TestHBaseTimelineStorageEntities.testEventsWithEmptyInfo
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6610) DominantResourceCalculator.getResourceAsValue() dominant param is no longer appropriate

2017-08-10 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-6610:
---
Attachment: YARN-6610.YARN-3926.004.patch

Refactored a little to make it cleaner.

> DominantResourceCalculator.getResourceAsValue() dominant param is no longer 
> appropriate
> ---
>
> Key: YARN-6610
> URL: https://issues.apache.org/jira/browse/YARN-6610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-6610.001.patch, YARN-6610.YARN-3926.002.patch, 
> YARN-6610.YARN-3926.003.patch, YARN-6610.YARN-3926.004.patch
>
>
> The {{dominant}} param assumes there are only two resources, i.e. true means 
> to compare the dominant, and false means to compare the subordinate.  Now 
> that there are _n_ resources, this parameter no longer makes sense.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6885) AllocationFileLoaderService.loadQueue() should use a switch statement in the main tag parsing loop instead of the if/else-if/...

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121886#comment-16121886
 ] 

Hadoop QA commented on YARN-6885:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 32 new + 17 unchanged - 1 fixed = 49 total (was 18) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 18s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6885 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881261/YARN-6885.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 570242dfd0e3 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8d953c2 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16824/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16824/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16824/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Commented] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-10 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121887#comment-16121887
 ] 

Rohith Sharma K S commented on YARN-6820:
-

bq. The whole point of this JIRA is to block all users from seeing the data in 
the ATS
Thanks for correcting it :-) I had overlooked that part of the patch.

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6820-YARN-5355.0001.patch, 
> YARN-6820-YARN-5355.002.patch, YARN-6820-YARN-5355.003.patch, 
> YARN-6820-YARN-5355.004.patch, YARN-6820-YARN-5355.005.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all 
> data; no other user can read any data. This can also be turned off so that 
> all users can read all data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, a domain offers a namespace for the Timeline server, allowing 
> users to host multiple entities while isolating them from other users and 
> applications. A “Domain” in ATSv1 primarily stores owner info, read and 
> write ACL information, and created and modified timestamp information. Each 
> Domain is identified by an ID which must be unique across all users in the 
> YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6885) AllocationFileLoaderService.loadQueue() should use a switch statement in the main tag parsing loop instead of the if/else-if/...

2017-08-10 Thread Yu-Tang Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121920#comment-16121920
 ] 

Yu-Tang Lin edited comment on YARN-6885 at 8/10/17 5:08 PM:


Still fixing the indent issues.


was (Author: yu-tang lin):
fix the missing indent issue.

> AllocationFileLoaderService.loadQueue() should use a switch statement in the 
> main tag parsing loop instead of the if/else-if/...
> 
>
> Key: YARN-6885
> URL: https://issues.apache.org/jira/browse/YARN-6885
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Yu-Tang Lin
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0-alpha4
>
> Attachments: YARN-6885.005.patch
>
>
> {code}  if ("minResources".equals(field.getTagName())) {
> String text = ((Text)field.getFirstChild()).getData().trim();
> Resource val =
> FairSchedulerConfiguration.parseResourceConfigValue(text);
> minQueueResources.put(queueName, val);
>   } else if ("maxResources".equals(field.getTagName())) {
>   ...{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6885) AllocationFileLoaderService.loadQueue() should use a switch statement in the main tag parsing loop instead of the if/else-if/...

2017-08-10 Thread Yu-Tang Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu-Tang Lin updated YARN-6885:
--
Attachment: YARN-6885.005.patch

> AllocationFileLoaderService.loadQueue() should use a switch statement in the 
> main tag parsing loop instead of the if/else-if/...
> 
>
> Key: YARN-6885
> URL: https://issues.apache.org/jira/browse/YARN-6885
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Yu-Tang Lin
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0-alpha4
>
> Attachments: YARN-6885.005.patch
>
>
> {code}  if ("minResources".equals(field.getTagName())) {
> String text = ((Text)field.getFirstChild()).getData().trim();
> Resource val =
> FairSchedulerConfiguration.parseResourceConfigValue(text);
> minQueueResources.put(queueName, val);
>   } else if ("maxResources".equals(field.getTagName())) {
>   ...{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6885) AllocationFileLoaderService.loadQueue() should use a switch statement in the main tag parsing loop instead of the if/else-if/...

2017-08-10 Thread Yu-Tang Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu-Tang Lin updated YARN-6885:
--
Attachment: (was: YARN-6885.004.patch)

> AllocationFileLoaderService.loadQueue() should use a switch statement in the 
> main tag parsing loop instead of the if/else-if/...
> 
>
> Key: YARN-6885
> URL: https://issues.apache.org/jira/browse/YARN-6885
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Yu-Tang Lin
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0-alpha4
>
> Attachments: YARN-6885.005.patch
>
>
> {code}  if ("minResources".equals(field.getTagName())) {
> String text = ((Text)field.getFirstChild()).getData().trim();
> Resource val =
> FairSchedulerConfiguration.parseResourceConfigValue(text);
> minQueueResources.put(queueName, val);
>   } else if ("maxResources".equals(field.getTagName())) {
>   ...{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6851) Capacity Scheduler: document configs for controlling # containers allowed to be allocated per node heartbeat

2017-08-10 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated YARN-6851:
--
Attachment: YARN-6851.001.patch

> Capacity Scheduler: document configs for controlling # containers allowed to 
> be allocated per node heartbeat
> 
>
> Key: YARN-6851
> URL: https://issues.apache.org/jira/browse/YARN-6851
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Attachments: YARN-6851.001.patch
>
>
> YARN-4161 introduces new configs for controlling how many containers allowed 
> to be allocated in each node heartbeat. And we also have offswitchCount 
> config before. Would be better to document these configurations in CS section.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6974) Make CuratorBasedElectorService the default

2017-08-10 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121978#comment-16121978
 ] 

Robert Kanter commented on YARN-6974:
-

We haven't really done much testing on it either.  Given that, it sounds like 
we should wait on this until it's been tested more.

> Make CuratorBasedElectorService the default
> ---
>
> Key: YARN-6974
> URL: https://issues.apache.org/jira/browse/YARN-6974
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.0.0-beta1
>Reporter: Robert Kanter
>Priority: Critical
>
> YARN-4438 (and cleanup in YARN-5709) added the 
> {{CuratorBasedElectorService}}, which does leader election via Curator.  The 
> intention was to leave it off by default to allow time for it to bake, and 
> eventually make it the default and remove the 
> {{ActiveStandbyElectorBasedElectorService}}.  
> We should do that.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6972) Adding RM ClusterId in AppInfo

2017-08-10 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121993#comment-16121993
 ] 

Giovanni Matteo Fumarola commented on YARN-6972:


Thanks [~tanujnay] for the patch. Good job.
A few pieces of feedback:
1) Move {{String subclusterId}} after {{AppTimeoutsInfo timeouts}}. Do the 
same for the tests and the getter method. This will avoid future conflicts.
2) In the test, instead of setting {{YarnConfiguration.DEFAULT_RM_CLUSTER_ID}}, 
please set a random word, e.g. "SubCluster1". That way you validate that your 
code picks up the configured value and not just the default.
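
Something along these lines is what I mean (a rough sketch, not the actual 
patch; I'm assuming the new field exposes a getter named {{getSubClusterId}}, 
adjust to whatever the patch calls it):

{code}
// Sketch only: getSubClusterId() is an assumed name from the patch under review.
Configuration conf = new YarnConfiguration();
// Deliberately not DEFAULT_RM_CLUSTER_ID, so the test fails if the code
// falls back to the default instead of reading the configured value.
conf.set(YarnConfiguration.RM_CLUSTER_ID, "SubCluster1");
MockRM rm = new MockRM(conf);
rm.start();
RMApp app = rm.submitApp(1024);
AppInfo appInfo = new AppInfo(rm, app, true, null);
Assert.assertEquals("SubCluster1", appInfo.getSubClusterId());
rm.stop();
{code}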

> Adding RM ClusterId in AppInfo
> --
>
> Key: YARN-6972
> URL: https://issues.apache.org/jira/browse/YARN-6972
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Tanuj Nayak
> Attachments: YARN-6972.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5736) YARN container executor config does not handle white space

2017-08-10 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121913#comment-16121913
 ] 

Daniel Templeton commented on YARN-5736:


I don't see an issue with it, but I'd like to hear 
[~miklos.szeg...@cloudera.com]'s thoughts.

> YARN container executor config does not handle white space
> --
>
> Key: YARN-5736
> URL: https://issues.apache.org/jira/browse/YARN-5736
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Trivial
>  Labels: oct16-medium
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN_5736.000.patch, YARN-5736.001.patch, 
> YARN-5736.002.patch, YARN-5736.addendum.000.patch
>
>
> The container executor configuration reader does not handle white space or 
> malformed key-value pairs in the config file correctly or gracefully.
> As an example, take the following key-value line which is part of the 
> configuration (note the << is used as a marker to show the extra trailing space):
> yarn.nodemanager.linux-container-executor.group=yarn <<
> This is a valid line, but when you run the check over the file:
> [root@test]#./container-executor --checksetup
> Can't get group information for yarn - Success.
> [root@test]#
> It fails to find the yarn group because it really tries to find the "yarn " 
> group, which fails. There is no trimming anywhere while processing the lines. 
> If a space were added before or after the = sign, a failure would also occur.
> A minor nit is the fact that a failure is still logged as a Success.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6882) AllocationFileLoaderService.reloadAllocations() should use the diamond operator

2017-08-10 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton reassigned YARN-6882:
--

Assignee: Larry Lo

> AllocationFileLoaderService.reloadAllocations() should use the diamond 
> operator
> ---
>
> Key: YARN-6882
> URL: https://issues.apache.org/jira/browse/YARN-6882
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Larry Lo
>Priority: Trivial
>  Labels: newbie
>
> Here:{code}for (FSQueueType queueType : FSQueueType.values()) {
>   configuredQueues.put(queueType, new HashSet<String>());
> }{code} and here:{code}List<Element> queueElements = new 
> ArrayList<Element>();{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6883) AllocationFileLoaderService.reloadAllocations() should use a switch statement in the main tag parsing loop instead of the if/else-if/...

2017-08-10 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121919#comment-16121919
 ] 

Daniel Templeton commented on YARN-6883:


I think this issue may be subsumed into YARN-6885.

> AllocationFileLoaderService.reloadAllocations() should use a switch statement 
> in the main tag parsing loop instead of the if/else-if/...
> 
>
> Key: YARN-6883
> URL: https://issues.apache.org/jira/browse/YARN-6883
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Larry Lo
>Priority: Minor
>  Labels: newbie
>
> {code}if ("queue".equals(element.getTagName()) ||
>   "pool".equals(element.getTagName())) {
>   queueElements.add(element);
> } else if ("user".equals(element.getTagName())) {
> ...{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6883) AllocationFileLoaderService.reloadAllocations() should use a switch statement in the main tag parsing loop instead of the if/else-if/...

2017-08-10 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton reassigned YARN-6883:
--

Assignee: Larry Lo

> AllocationFileLoaderService.reloadAllocations() should use a switch statement 
> in the main tag parsing loop instead of the if/else-if/...
> 
>
> Key: YARN-6883
> URL: https://issues.apache.org/jira/browse/YARN-6883
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Larry Lo
>Priority: Minor
>  Labels: newbie
>
> {code}if ("queue".equals(element.getTagName()) ||
>   "pool".equals(element.getTagName())) {
>   queueElements.add(element);
> } else if ("user".equals(element.getTagName())) {
> ...{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6885) AllocationFileLoaderService.loadQueue() should use a switch statement in the main tag parsing loop instead of the if/else-if/...

2017-08-10 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121942#comment-16121942
 ] 

Daniel Templeton commented on YARN-6885:


Thanks for the patch, [~Yu-Tang Lin].  Here are my comments:

# I don't see the value of declaring {{text}} and {{val}} outside the _switch_. 
 It saves some characters, but it has no runtime impact, and I think it makes 
the code a little less clear, especially since you're having to cast {{val}} 
when using it.
# In {code}312  case "queueMaxResourcesDefault":
313   text = ((Text)element.getFirstChild()).getData().trim();
314   val =
315 
FairSchedulerConfiguration.parseResourceConfigValue(text);
316   queueMaxResourcesDefault = (Resource)val;
317   break;{code} lines 314 and 315 can be combined (see the sketch after 
these comments).
# In {code}362if (text.equalsIgnoreCase(FifoPolicy.NAME)) {
363 throw new AllocationConfigurationException("Bad fair 
scheduler "
364 + "config file: defaultQueueSchedulingPolicy or "
365 + "defaultQueueSchedulingMode can't be FIFO.");
366}
367defaultSchedPolicy = SchedulingPolicy.parse(text);
368break;{code} lines 364 and 365 should be indented 4 more 
spaces.
# In {code}545case "weight":
546 {
547   text = ((Text)field.getFirstChild()).getData().trim();
548   double doubleVal = Double.parseDouble(text);
549   queueWeights.put(queueName, new 
ResourceWeights((float)doubleVal));
550 }
551 break;{code} Why the braces?  I don't think they're needed.
# Same here: {code}562case "fairSharePreemptionThreshold":
563 {
564   text = ((Text)field.getFirstChild()).getData().trim();
565   float floatVal = Float.parseFloat(text);
566   floatVal = Math.max(Math.min(floatVal, 1.0f), 0.0f);
567   fairSharePreemptionThresholds.put(queueName, floatVal);
568 }
569 break;{code}
# In {{AllocationFileLoaderService}}, the last two _case_ statements don't 
work, because the _if_ statements they're replacing weren't testing equality.  
You'll have to keep the _if_ statements and put them in the _default_ case.
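
To make #1 and #2 concrete, here's roughly the shape I'd expect for that case 
(sketch only; folding away {{text}} and {{val}} removes the cast and the extra 
locals):

{code}
case "queueMaxResourcesDefault":
  // One statement: no shared text/val locals, no (Resource) cast needed.
  queueMaxResourcesDefault = FairSchedulerConfiguration
      .parseResourceConfigValue(
          ((Text) element.getFirstChild()).getData().trim());
  break;
{code}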

> AllocationFileLoaderService.loadQueue() should use a switch statement in the 
> main tag parsing loop instead of the if/else-if/...
> 
>
> Key: YARN-6885
> URL: https://issues.apache.org/jira/browse/YARN-6885
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Yu-Tang Lin
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0-alpha4
>
> Attachments: YARN-6885.005.patch
>
>
> {code}  if ("minResources".equals(field.getTagName())) {
> String text = ((Text)field.getFirstChild()).getData().trim();
> Resource val =
> FairSchedulerConfiguration.parseResourceConfigValue(text);
> minQueueResources.put(queueName, val);
>   } else if ("maxResources".equals(field.getTagName())) {
>   ...{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6969) Remove method getMinShareMemoryFraction and getPendingContainers in class FairSchedulerQueueInfo

2017-08-10 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121983#comment-16121983
 ] 

Robert Kanter commented on YARN-6969:
-

No problem.  I added [~LarryLo] as a contributor.  I've also added you as a 
committer.

> Remove method getMinShareMemoryFraction and getPendingContainers in class 
> FairSchedulerQueueInfo
> 
>
> Key: YARN-6969
> URL: https://issues.apache.org/jira/browse/YARN-6969
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: fairscheduler
>Reporter: Yufei Gu
>Priority: Trivial
>  Labels: newbie++
>
> They are not used anymore.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6789) new api to get all supported resources from RM

2017-08-10 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121998#comment-16121998
 ] 

Wangda Tan commented on YARN-6789:
--

[~dan...@cloudera.com],  thanks for sharing your thoughts. Please see my 
response below:

bq. If I understand correctly, what you're proposing is making the units a 
global dictate. The RM says that memory has the unit "Mi", therefore all NMs 
and clients must use that unit. 
This is true; the unit of values for each resource type is fixed. To be 
clearer, I propose to remove the "unit" from the ResourceInformation object. The 
"unit" for each resource type is an unwritten rule which every daemon, client, 
and AM should know. A "unit" can be set for ResourceTypeInformation (by setting 
resource-types.xml), however it is more like documentation and does not affect 
internal logic.

bq. What happens if an admin changes the units for a resource type? 
This is not allowed in my proposal; I doubt there is a scenario in which we need 
to change global units. For example, MB as the unit for memory works so well 
today that I don't think it makes sense to upgrade it to GB/TB, since the value 
is a 64-bit integer.
This is mainly to avoid dealing with backward compatibility and inconsistency 
between daemons and clients. As I stated above, what if an old client talks to 
the RM and asks for 4096 memory without specifying "MB" as the unit? If we make 
it a globally fixed, unwritten rule, we don't need to worry about inconsistency 
any more, and all rolling-upgrade problems for the unit are gone.
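
To make it concrete, this is roughly the model I have in mind (illustrative 
sketch, not an actual API change):

{code}
// The unit is a fixed, documented convention per resource type, so values
// never carry units on the wire.
Resource request = Resource.newInstance(4096, 4);
// 4096 *means* 4096 MB by global convention. An old client that cannot
// send a unit string and a new client agree automatically, so rolling
// upgrades need no unit translation.
{code}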

Thoughts?

> new api to get all supported resources from RM
> --
>
> Key: YARN-6789
> URL: https://issues.apache.org/jira/browse/YARN-6789
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-6789-YARN-3926.001.patch
>
>
> It will be better to provide an api to get all supported resource types from 
> RM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6905) Multiple HBaseTimelineStorage test failures due to missing FastNumberFormat

2017-08-10 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122239#comment-16122239
 ] 

Vrushali C commented on YARN-6905:
--

Thanks [~haibo.chen] for the patch. 

- I think the javadoc errors are related. 

- Also, if possible, could you please add a test case that invokes 
ApplicationId.fromString on the output of convertApplicationIdToString and 
checks that the returned app id object equals the input app id object? (See 
the sketch below.) 

- I would prefer not to have "Gross hack!" in the function documentation. 
Perhaps reword it to remove that, keep the rest of the description as you 
already have it, and add "This is a work-around implementation as discussed in 
YARN-6905" or something similar after it? 
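
Something like this is what I have in mind (a sketch; I'm assuming the helper 
is named {{convertApplicationIdToString}} and lives in 
{{HBaseTimelineStorageUtils}}, per the patch):

{code}
// Round-trip check: converting to a string and parsing it back should
// yield an equal ApplicationId.
ApplicationId appId = ApplicationId.newInstance(1502395467000L, 7);
String idStr = HBaseTimelineStorageUtils.convertApplicationIdToString(appId);
Assert.assertEquals(appId, ApplicationId.fromString(idStr));
{code}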


> Multiple HBaseTimelineStorage test failures due to missing FastNumberFormat
> ---
>
> Key: YARN-6905
> URL: https://issues.apache.org/jira/browse/YARN-6905
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 3.0.0-beta1
> Environment: Ubuntu 14.04 
> x86, ppc64le
> $ java -version
> openjdk version "1.8.0_111"
> OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14)
> OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode)
>Reporter: Sonia Garudi
>Assignee: Haibo Chen
> Attachments: YARN-6905.00.patch
>
>
> There are multiple test failing in Hadoop YARN Timeline Service HBase tests 
> project with the following error :
> {code}
> java.lang.NoClassDefFoundError: org/apache/hadoop/util/FastNumberFormat
> at 
> org.apache.hadoop.yarn.api.records.ApplicationId.toString(ApplicationId.java:104)
> {code}
> Below are the failing tests :
> {code}
>   TestHBaseTimelineStorageApps.testWriteApplicationToHBase
>   TestHBaseTimelineStorageApps.testEvents
>   TestHBaseTimelineStorageEntities.testEventsEscapeTs
>   TestHBaseTimelineStorageEntities.testWriteEntityToHBase
>   TestHBaseTimelineStorageEntities.testEventsWithEmptyInfo
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6969) Remove method getMinShareMemoryFraction and getPendingContainers in class FairSchedulerQueueInfo

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122263#comment-16122263
 ] 

Hadoop QA commented on YARN-6969:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
12s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 351 unchanged - 1 fixed = 351 total (was 352) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 48m 47s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Unread field:FairSchedulerQueueInfo.java:[line 112] |
|  |  Unread field:FairSchedulerQueueInfo.java:[line 117] |
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
|   | org.apache.hadoop.yarn.server.resourcemanager.TestRMHAForNodeLabels |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6969 |
| GITHUB PR | https://github.com/apache/hadoop/pull/260 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5805f86403bd 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 312e57b |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 

[jira] [Commented] (YARN-6980) The YARN_TIMELINE_HEAPSIZE does not work

2017-08-10 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122265#comment-16122265
 ] 

Allen Wittenauer commented on YARN-6980:


These are backward-compatibility bits, so the variable should be 
YARN_TIMELINESERVER_HEAPSIZE as it was in branch-2. It just looks like a 
mistake in yarn-env.sh.



> The YARN_TIMELINE_HEAPSIZE does not work
> 
>
> Key: YARN-6980
> URL: https://issues.apache.org/jira/browse/YARN-6980
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Jinjiang Ling
>Assignee: Jinjiang Ling
>Priority: Minor
> Attachments: YARN-6980.patch
>
>
> As I wanted to set the Java heap size of the timeline server, I found a 
> variable named YARN_TIMELINE_HEAPSIZE in the comments of yarn-env.sh; I set 
> it, but it does not work.
> I then found the variable was changed in 
> -[HADOOP-10950|https://issues.apache.org/jira/browse/HADOOP-10950]-, 
> but it was only changed in the comments of yarn-env.sh and not in the "yarn" 
> and "yarn.cmd" commands.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6980) The YARN_TIMELINE_HEAPSIZE does not work

2017-08-10 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122265#comment-16122265
 ] 

Allen Wittenauer edited comment on YARN-6980 at 8/10/17 8:27 PM:
-

These are backward-compatibility bits, so the variable should be 
YARN_TIMELINESERVER_HEAPSIZE as it was in branch-2. It just looks like a 
mistake in yarn-env.sh.

It's worth pointing out that the *preferred* way to set memory for specific 
daemons in 3.x is to put the -Xmx in the daemon's _OPTS variable.


was (Author: aw):
These are backward-compatibility bits, so the variable should be 
YARN_TIMELINESERVER_HEAPSIZE as it was in branch-2. It just looks like a 
mistake in yarn-env.sh.



> The YARN_TIMELINE_HEAPSIZE does not work
> 
>
> Key: YARN-6980
> URL: https://issues.apache.org/jira/browse/YARN-6980
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Jinjiang Ling
>Assignee: Jinjiang Ling
>Priority: Minor
> Attachments: YARN-6980.patch
>
>
> As I wanted to set the Java heap size of the timeline server, I found a 
> variable named YARN_TIMELINE_HEAPSIZE in the comments of yarn-env.sh; I set 
> it, but it does not work.
> I then found the variable was changed in 
> -[HADOOP-10950|https://issues.apache.org/jira/browse/HADOOP-10950]-, 
> but it was only changed in the comments of yarn-env.sh and not in the "yarn" 
> and "yarn.cmd" commands.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6852) [YARN-6223] Native code changes to support isolate GPU devices by using CGroups

2017-08-10 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6852:
-
Attachment: YARN-6852.007.patch

Attached the 007 patch; fixed more cc warnings, and will try to fix the 
remaining cc warnings in the next patch.

> [YARN-6223] Native code changes to support isolate GPU devices by using 
> CGroups
> ---
>
> Key: YARN-6852
> URL: https://issues.apache.org/jira/browse/YARN-6852
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6852.001.patch, YARN-6852.002.patch, 
> YARN-6852.003.patch, YARN-6852.004.patch, YARN-6852.005.patch, 
> YARN-6852.006.patch, YARN-6852.007.patch
>
>
> This JIRA plans to add support for:
> 1) Isolation in CGroups (native side).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6620) [YARN-6223] NM Java side code changes to support isolate GPU devices by using CGroups

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122286#comment-16122286
 ] 

Hadoop QA commented on YARN-6620:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
54s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  7s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 22 new + 457 unchanged - 0 fixed = 479 total (was 457) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
9s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 3 new + 1 unchanged - 0 fixed = 4 total (was 1) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
32s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 1 new + 105 unchanged - 0 fixed = 106 total (was 105) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 42s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 34s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
36s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
|  |  Boxing/unboxing to parse a primitive 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.GpuResourceAllocator.recoverAssignedGpus(ContainerId)
  At 
GpuResourceAllocator.java:org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.GpuResourceAllocator.recoverAssignedGpus(ContainerId)
  At GpuResourceAllocator.java:[line 92] |
|  |  Boxing/unboxing to parse a primitive 

[jira] [Commented] (YARN-6972) Adding RM ClusterId in AppInfo

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122302#comment-16122302
 ] 

Hadoop QA commented on YARN-6972:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 21s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 5 new + 92 unchanged - 2 fixed = 97 total (was 94) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 12s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor |
|   | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification |
|   | hadoop.yarn.server.resourcemanager.TestRMHA |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebAppFairScheduler |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServiceAppsNodelabel |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6972 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881309/YARN-6972.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2fbad27afbc6 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 312e57b |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16838/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16838/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 

[jira] [Commented] (YARN-5536) Multiple format support (JSON, etc.) for exclude node file in NM graceful decommission with timeout

2017-08-10 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122304#comment-16122304
 ] 

Junping Du commented on YARN-5536:
--

My current priority among related efforts is still YARN-5464. [~mingma], do you 
have bandwidth for this?

> Multiple format support (JSON, etc.) for exclude node file in NM graceful 
> decommission with timeout
> ---
>
> Key: YARN-5536
> URL: https://issues.apache.org/jira/browse/YARN-5536
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful
>Reporter: Junping Du
>Priority: Blocker
>
> Per discussion in YARN-4676, we agree that multiple format (other than xml) 
> should be supported to decommission nodes with timeout values.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6987) Log app attempt during InvalidStateTransition

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122318#comment-16122318
 ] 

Hadoop QA commented on YARN-6987:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 34s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6987 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881308/YARN-6987.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8a17f2a23aac 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 312e57b |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16837/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16837/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16837/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Log app attempt during 

[jira] [Commented] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-10 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122324#comment-16122324
 ] 

Jason Lowe commented on YARN-6820:
--

Sure, let's file a separate JIRA to get consistent about how we're getting the 
remote user.  It certainly doesn't make sense to have some parts of YARN 
getting the remote user from the request while others are getting the name of 
the principal.

With that issue out of the way, I'm +1 on the latest patch.  I'll commit this 
tomorrow to the YARN-5355 branch if there are no objections.

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6820-YARN-5355.0001.patch, 
> YARN-6820-YARN-5355.002.patch, YARN-6820-YARN-5355.003.patch, 
> YARN-6820-YARN-5355.004.patch, YARN-6820-YARN-5355.005.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSv1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6988) container-executor fails for docker when command length > 4096 B

2017-08-10 Thread Eric Badger (JIRA)
Eric Badger created YARN-6988:
-

 Summary: container-executor fails for docker when command length > 
4096 B
 Key: YARN-6988
 URL: https://issues.apache.org/jira/browse/YARN-6988
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Eric Badger
Assignee: Eric Badger


{{run_docker}} and {{launch_docker_container_as_user}} allocate their command 
arrays using EXECUTOR_PATH_MAX, which is hardcoded to 4096 in configuration.h. 
Because of this, the full docker command can only be 4096 characters. If it is 
longer, it will be truncated and the command will fail with a parsing error. 
Because of the bind-mounting of volumes, the arguments to the docker command 
can quickly get large. For example, I passed the 4096 limit with an 11 disk 
node. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6988) container-executor fails for docker when command length > 4096 B

2017-08-10 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122348#comment-16122348
 ] 

Eric Badger commented on YARN-6988:
---

We should definitely increase the limit, but we don't want to malloc the entire 
maximum amount of memory that the system allows, since that would be a waste. A 
compromise could be to set the arg length to the minimum of the system value 
and 128K. 

> container-executor fails for docker when command length > 4096 B
> 
>
> Key: YARN-6988
> URL: https://issues.apache.org/jira/browse/YARN-6988
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Eric Badger
>Assignee: Eric Badger
>
> {{run_docker}} and {{launch_docker_container_as_user}} allocate their command 
> arrays using EXECUTOR_PATH_MAX, which is hardcoded to 4096 in 
> configuration.h. Because of this, the full docker command can only be 4096 
> characters. If it is longer, it will be truncated and the command will fail 
> with a parsing error. Because of the bind-mounting of volumes, the arguments 
> to the docker command can quickly get large. For example, I passed the 4096 
> limit with an 11 disk node. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6896) Federation: routing REST invocations transparently to multiple RMs (part 1 - basic execution)

2017-08-10 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122362#comment-16122362
 ] 

Carlo Curino commented on YARN-6896:


Thanks [~giovanni.fumarola], your answers make sense, and the patch looks good 
to me, with only a couple of things on testing/validation:

# In {{TestFederationInterceptorREST.testSubmitApplication}} (and similar 
methods) you don't check that the request was actually sent to the RM, only 
that it was book-kept in the {{FederationStateStore}}, i.e., we only check half 
of the functionality. Since you have your own mock class 
{{MockDefaultFederationInterceptorREST}}, you can keep a static counter in the 
class that you, for example, increment on each invocation, or you can use 
Mockito {{spy()}} and make sure that one and only one of the 
{{MockDefaultFederationInterceptorREST}} instances is invoked in normal cases, 
and that for retry scenarios the right set of invocations is performed (e.g., 
counting good and bad SC invocations separately?). Something along this line 
would strengthen the tests; a rough sketch follows after this list.
# Did you run this in a live cluster? Not strictly needed, but since the tests 
are "local" to this interceptor, a bit of integration validation could help us 
be confident about it.
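
A rough sketch of the counter idea (class and method signatures are assumed 
from the current patch, adjust as needed):

{code}
public class MockDefaultFederationInterceptorREST
    extends DefaultRequestInterceptorREST {
  // Static counter bumped on every call that actually reaches a (mock) RM.
  public static final AtomicInteger INVOCATIONS = new AtomicInteger();

  @Override
  public Response submitApplication(ApplicationSubmissionContextInfo newApp,
      HttpServletRequest hsr) throws AuthorizationException, IOException,
      InterruptedException {
    INVOCATIONS.incrementAndGet();
    return super.submitApplication(newApp, hsr);
  }
}

// In testSubmitApplication, after invoking the interceptor: one and only
// one sub-cluster RM should have been hit.
Assert.assertEquals(1, MockDefaultFederationInterceptorREST.INVOCATIONS.get());
{code}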

> Federation: routing REST invocations transparently to multiple RMs (part 1 - 
> basic execution)
> -
>
> Key: YARN-6896
> URL: https://issues.apache.org/jira/browse/YARN-6896
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-6896.proto.patch, YARN-6896.v1.patch, 
> YARN-6896.v2.patch
>
>
> This JIRA tracks the design/implementation of the layer for routing 
> RMWebServicesProtocol requests to the appropriate RM(s) in a federated YARN 
> cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6044) Resource bar of Capacity Scheduler UI does not show correctly

2017-08-10 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122376#comment-16122376
 ] 

Junping Du commented on YARN-6044:
--

Is this related to YARN-4484? cc [~sunilg].

> Resource bar of Capacity Scheduler UI does not show correctly
> -
>
> Key: YARN-6044
> URL: https://issues.apache.org/jira/browse/YARN-6044
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 2.8.0
>Reporter: Tao Yang
>Priority: Minor
>
> Test Environment:
> 1. NodeLable
> yarn rmadmin -addToClusterNodeLabels "label1(exclusive=false)"
> 2. capacity-scheduler.xml
> yarn.scheduler.capacity.root.queues=a,b
> yarn.scheduler.capacity.root.a.capacity=60
> yarn.scheduler.capacity.root.b.capacity=40
> yarn.scheduler.capacity.root.a.accessible-node-labels=label1
> yarn.scheduler.capacity.root.accessible-node-labels.label1.capacity=100
> yarn.scheduler.capacity.root.a.accessible-node-labels.label1.capacity=100
> In this test case, for queue(root.b) in partition(label1), the resource 
> bar (which represents absolute-max-capacity) should be 100% (the default). The 
> scheduler UI shows this correctly after the RM starts, but once I started an 
> app in queue(root.b) and partition(label1), the resource bar of this queue 
> changed from 100% to 0%. 
> For the correct queue(root.a), the queueCapacities of partition(label1) were 
> initialized in the ParentQueue/LeafQueue constructor, and 
> max-capacity/absolute-max-capacity were set to the correct values, because 
> yarn.scheduler.capacity.root.a.accessible-node-labels is defined in 
> capacity-scheduler.xml. 
> For the incorrect queue(root.b), the queueCapacities of partition(label1) did 
> not exist at first; max-capacity and absolute-max-capacity were set to the 
> default value (100%) in PartitionQueueCapacitiesInfo, so the scheduler UI 
> could show them correctly. When this queue allocated resources for 
> partition(label1), the queueCapacities of partition(label1) were created, and 
> only used-capacity and absolute-used-capacity were set in 
> AbstractCSQueue#allocateResource; max-capacity and absolute-max-capacity then 
> fall back to the float default value 0 defined in QueueCapacities$Capacities. 
> Should max-capacity and absolute-max-capacity have a default value (100%) in 
> the Capacities constructor, to avoid losing the default when a caller does 
> not supply one? A simplified sketch follows. 
> Please feel free to give your suggestions.
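
A simplified sketch of that suggestion (illustrative only; the real 
QueueCapacities$Capacities keeps its values in a float array keyed by a 
capacity-type enum, so the actual change would look different):

{code}
// Simplified illustration, not the real class layout.
class Capacities {
  float usedCapacity;           // set lazily in allocateResource
  float absoluteUsedCapacity;   // set lazily in allocateResource
  // Assumed fix: default the max capacities to 100% instead of the float
  // default 0, so lazily created partition entries keep a sane
  // max-capacity/absolute-max-capacity for the UI.
  float maximumCapacity = 1.0f;
  float absoluteMaximumCapacity = 1.0f;
}
{code}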



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6905) Multiple HBaseTimelineStorage test failures due to missing FastNumberFormat

2017-08-10 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6905:
-
Attachment: YARN-6905.01.patch

Thanks for the review, [~vrushalic]. I have updated the patch accordingly.

> Multiple HBaseTimelineStorage test failures due to missing FastNumberFormat
> ---
>
> Key: YARN-6905
> URL: https://issues.apache.org/jira/browse/YARN-6905
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 3.0.0-beta1
> Environment: Ubuntu 14.04 
> x86, ppc64le
> $ java -version
> openjdk version "1.8.0_111"
> OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14)
> OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode)
>Reporter: Sonia Garudi
>Assignee: Haibo Chen
> Attachments: YARN-6905.00.patch, YARN-6905.01.patch
>
>
> There are multiple tests failing in the Hadoop YARN Timeline Service HBase 
> tests project with the following error:
> {code}
> java.lang.NoClassDefFoundError: org/apache/hadoop/util/FastNumberFormat
> at 
> org.apache.hadoop.yarn.api.records.ApplicationId.toString(ApplicationId.java:104)
> {code}
> Below are the failing tests :
> {code}
>   TestHBaseTimelineStorageApps.testWriteApplicationToHBase
>   TestHBaseTimelineStorageApps.testEvents
>   TestHBaseTimelineStorageEntities.testEventsEscapeTs
>   TestHBaseTimelineStorageEntities.testWriteEntityToHBase
>   TestHBaseTimelineStorageEntities.testEventsWithEmptyInfo
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6852) [YARN-6223] Native code changes to support isolate GPU devices by using CGroups

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122382#comment-16122382
 ] 

Hadoop QA commented on YARN-6852:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 19m  
0s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m 39s{color} | 
{color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 25 new + 0 unchanged - 0 fixed = 25 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m  
6s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6852 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881319/YARN-6852.007.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux e62e4e5b9086 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 312e57b |
| Default Java | 1.8.0_144 |
| cc | 
https://builds.apache.org/job/PreCommit-YARN-Build/16840/artifact/patchprocess/diff-compile-cc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16840/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16840/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [YARN-6223] Native code changes to support isolate GPU devices by using 
> CGroups
> ---
>
> Key: YARN-6852
> URL: https://issues.apache.org/jira/browse/YARN-6852
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6852.001.patch, YARN-6852.002.patch, 
> YARN-6852.003.patch, YARN-6852.004.patch, YARN-6852.005.patch, 
> YARN-6852.006.patch, YARN-6852.007.patch
>
>
> This JIRA plan to add support of:
> 1) Isolation in CGroups. (native side).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6851) Capacity Scheduler: document configs for controlling # containers allowed to be allocated per node heartbeat

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122389#comment-16122389
 ] 

Hadoop QA commented on YARN-6851:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 
52s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6851 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881274/YARN-6851.001.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 4410b1f7f4e9 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 312e57b |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16841/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Capacity Scheduler: document configs for controlling # containers allowed to 
> be allocated per node heartbeat
> 
>
> Key: YARN-6851
> URL: https://issues.apache.org/jira/browse/YARN-6851
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Attachments: YARN-6851.001.patch
>
>
> YARN-4161 introduces new configs for controlling how many containers are 
> allowed to be allocated in each node heartbeat, and we also had the 
> offswitchCount config before. It would be better to document these 
> configurations in the CS section.
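
For reference, the knobs being documented would look roughly like this in 
capacity-scheduler.xml (property names follow YARN-4161 but should be treated 
as assumptions until the documentation lands):

{code}
yarn.scheduler.capacity.per-node-heartbeat.multiple-assignments-enabled=true
yarn.scheduler.capacity.per-node-heartbeat.maximum-container-assignments=-1
yarn.scheduler.capacity.per-node-heartbeat.maximum-offswitch-assignments=1
{code}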



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6834) A container request with only racks specified and relax locality set to false is never honoured

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122403#comment-16122403
 ] 

Hadoop QA commented on YARN-6834:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 55s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 4 new + 44 unchanged - 0 fixed = 48 total (was 44) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m  6s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
30s{color} | {color:green} hadoop-yarn-server-tests in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
36s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}115m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6834 |
| JIRA Patch URL | 

[jira] [Updated] (YARN-6668) Use cgroup to get container resource utilization

2017-08-10 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-6668:
-
Attachment: YARN-6668.008.patch

> Use cgroup to get container resource utilization
> 
>
> Key: YARN-6668
> URL: https://issues.apache.org/jira/browse/YARN-6668
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
> Attachments: YARN-6668.000.patch, YARN-6668.001.patch, 
> YARN-6668.002.patch, YARN-6668.003.patch, YARN-6668.004.patch, 
> YARN-6668.005.patch, YARN-6668.006.patch, YARN-6668.007.patch, 
> YARN-6668.008.patch
>
>
> Container Monitor relies on the proc file system to get container resource 
> utilization, which is not as efficient as reading cgroup accounting. In the 
> NM, when cgroups are enabled, we should read cgroup stats instead.
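
As a rough sketch of that direction (the cgroup v1 file names are standard, 
but the class and method names here are illustrative, not the patch):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Illustrative reader for cgroup v1 accounting files.
public class CGroupStatsReader {
  // cpuacct.usage holds the cgroup's cumulative CPU time in nanoseconds.
  static long cpuUsageNanos(String cgroupPath) throws IOException {
    return Long.parseLong(Files.readAllLines(
        Paths.get(cgroupPath, "cpuacct.usage")).get(0).trim());
  }

  // memory.usage_in_bytes holds the cgroup's current memory usage.
  static long memoryUsageBytes(String cgroupPath) throws IOException {
    return Long.parseLong(Files.readAllLines(
        Paths.get(cgroupPath, "memory.usage_in_bytes")).get(0).trim());
  }
}
{code}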



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6987) Log app attempt during InvalidStateTransition

2017-08-10 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122425#comment-16122425
 ] 

Jonathan Eagles commented on YARN-6987:
---

Test failures are unrelated to this patch.

> Log app attempt during InvalidStateTransition
> -
>
> Key: YARN-6987
> URL: https://issues.apache.org/jira/browse/YARN-6987
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
> Attachments: YARN-6987.001.patch
>
>
> Found this InvalidStateTransition logged in the resource manager log file 
> with no way to determine exactly which app attempt this was associated with.
> {noformat}
> 2017-08-04 17:22:29,895 [AsyncDispatcher event handler] ERROR 
> attempt.RMAppAttemptImpl: Can't handle this event at current state
> org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
> UNREGISTERED at LAUNCHED
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:802)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:108)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:803)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:784)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:184)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
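
The fix presumably amounts to including the attempt id when this is logged; a 
hypothetical fragment of RMAppAttemptImpl.handle() (field and accessor names 
are assumptions, and the committed patch may word it differently):

{code}
try {
  this.stateMachine.doTransition(event.getType(), event);
} catch (InvalidStateTransitonException e) {
  // Assumed change: name the attempt so the log line is actionable.
  LOG.error("App attempt " + applicationAttemptId
      + " can't handle event " + event.getType()
      + " at current state " + getAppAttemptState(), e);
}
{code}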



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-10 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122427#comment-16122427
 ] 

Vrushali C commented on YARN-6820:
--

Thanks [~jlowe] , filed YARN-6989

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6820-YARN-5355.0001.patch, 
> YARN-6820-YARN-5355.002.patch, YARN-6820-YARN-5355.003.patch, 
> YARN-6820-YARN-5355.004.patch, YARN-6820-YARN-5355.005.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSV1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.
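
As a purely illustrative sketch of the whitelist option (these property names 
are invented for illustration; the JIRA has not settled on any):

{code}
# Invented property names, for illustration only:
yarn.timeline-service.read.authentication.enabled=true
yarn.timeline-service.read.allowed.users=admin1,analyst2
{code}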



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6668) Use cgroup to get container resource utilization

2017-08-10 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-6668:
-
Attachment: (was: YARN-6668.008.patch)

> Use cgroup to get container resource utilization
> 
>
> Key: YARN-6668
> URL: https://issues.apache.org/jira/browse/YARN-6668
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
> Attachments: YARN-6668.000.patch, YARN-6668.001.patch, 
> YARN-6668.002.patch, YARN-6668.003.patch, YARN-6668.004.patch, 
> YARN-6668.005.patch, YARN-6668.006.patch, YARN-6668.007.patch
>
>
> Container Monitor relies on the proc file system to get container resource 
> utilization, which is not as efficient as reading cgroup accounting. In the 
> NM, when cgroups are enabled, we should read cgroup stats instead.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6989) Ensure timeline service v2 codebase gets UGI from HttpServletRequest in a consistent way

2017-08-10 Thread Vrushali C (JIRA)
Vrushali C created YARN-6989:


 Summary: Ensure timeline service v2 codebase gets UGI from 
HttpServletRequest in a consistent way
 Key: YARN-6989
 URL: https://issues.apache.org/jira/browse/YARN-6989
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Vrushali C



As noticed during the discussion in YARN-6820, the web services in timeline 
service v2 build the UGI from the user obtained by invoking getRemoteUser on 
the HttpServletRequest.

It would be better to use getUserPrincipal instead of invoking getRemoteUser 
on the HttpServletRequest.

Filing this JIRA to update the code.

Per the Java EE 6 and 7 documentation, the behavior of getRemoteUser and 
getUserPrincipal is described at:

http://docs.oracle.com/javaee/6/tutorial/doc/gjiie.html#bncba
https://docs.oracle.com/javaee/7/tutorial/security-webtier003.htm

{code}
getRemoteUser, which determines the user name with which the client 
authenticated. The getRemoteUser method returns the name of the remote user 
(the caller) associated by the container with the request. If no user has been 
authenticated, this method returns null.

getUserPrincipal, which determines the principal name of the current user and 
returns a java.security.Principal object. If no user has been authenticated, 
this method returns null. Calling the getName method on the Principal returned 
by getUserPrincipal returns the name of the remote user.
{code}
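
A minimal sketch of the consistent extraction this JIRA asks for (the helper 
class and method names are assumed here, not taken from the codebase):

{code}
import java.security.Principal;
import javax.servlet.http.HttpServletRequest;
import org.apache.hadoop.security.UserGroupInformation;

final class TimelineReaderWebUtils {  // assumed helper, for illustration
  // Build the caller UGI from the principal rather than getRemoteUser().
  static UserGroupInformation getCallerUgi(HttpServletRequest req) {
    Principal principal = req.getUserPrincipal();
    return principal == null ? null
        : UserGroupInformation.createRemoteUser(principal.getName());
  }
}
{code}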




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6668) Use cgroup to get container resource utilization

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122437#comment-16122437
 ] 

Hadoop QA commented on YARN-6668:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} YARN-6668 does not apply to YARN-1011. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6668 |
| GITHUB PR | https://github.com/apache/hadoop/pull/241 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16843/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Use cgroup to get container resource utilization
> 
>
> Key: YARN-6668
> URL: https://issues.apache.org/jira/browse/YARN-6668
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
> Attachments: YARN-6668.000.patch, YARN-6668.001.patch, 
> YARN-6668.002.patch, YARN-6668.003.patch, YARN-6668.004.patch, 
> YARN-6668.005.patch, YARN-6668.006.patch, YARN-6668.007.patch
>
>
> Container Monitor relies on the proc file system to get container resource 
> utilization, which is not as efficient as reading cgroup accounting. In the 
> NM, when cgroups are enabled, we should read cgroup stats instead.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6668) Use cgroup to get container resource utilization

2017-08-10 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-6668:
-
Attachment: YARN-6668.008.patch

> Use cgroup to get container resource utilization
> 
>
> Key: YARN-6668
> URL: https://issues.apache.org/jira/browse/YARN-6668
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
> Attachments: YARN-6668.000.patch, YARN-6668.001.patch, 
> YARN-6668.002.patch, YARN-6668.003.patch, YARN-6668.004.patch, 
> YARN-6668.005.patch, YARN-6668.006.patch, YARN-6668.007.patch, 
> YARN-6668.008.patch
>
>
> Container Monitor relies on the proc file system to get container resource 
> utilization, which is not as efficient as reading cgroup accounting. In the 
> NM, when cgroups are enabled, we should read cgroup stats instead.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6668) Use cgroup to get container resource utilization

2017-08-10 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-6668:
-
Attachment: (was: YARN-6668.008.patch)

> Use cgroup to get container resource utilization
> 
>
> Key: YARN-6668
> URL: https://issues.apache.org/jira/browse/YARN-6668
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
> Attachments: YARN-6668.000.patch, YARN-6668.001.patch, 
> YARN-6668.002.patch, YARN-6668.003.patch, YARN-6668.004.patch, 
> YARN-6668.005.patch, YARN-6668.006.patch, YARN-6668.007.patch
>
>
> Container Monitor relies on the proc file system to get container resource 
> utilization, which is not as efficient as reading cgroup accounting. In the 
> NM, when cgroups are enabled, we should read cgroup stats instead.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6668) Use cgroup to get container resource utilization

2017-08-10 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-6668:
-
Attachment: YARN-6668.008.patch

> Use cgroup to get container resource utilization
> 
>
> Key: YARN-6668
> URL: https://issues.apache.org/jira/browse/YARN-6668
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
> Attachments: YARN-6668.000.patch, YARN-6668.001.patch, 
> YARN-6668.002.patch, YARN-6668.003.patch, YARN-6668.004.patch, 
> YARN-6668.005.patch, YARN-6668.006.patch, YARN-6668.007.patch, 
> YARN-6668.008.patch
>
>
> Container Monitor relies on the proc file system to get container resource 
> utilization, which is not as efficient as reading cgroup accounting. In the 
> NM, when cgroups are enabled, we should read cgroup stats instead.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6905) Multiple HBaseTimelineStorage test failures due to missing FastNumberFormat

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122462#comment-16122462
 ] 

Hadoop QA commented on YARN-6905:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 1 new + 
6 unchanged - 0 fixed = 7 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
14s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in 
the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6905 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881338/YARN-6905.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 50db3c34a5fd 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 312e57b |
| Default Java | 

[jira] [Commented] (YARN-6668) Use cgroup to get container resource utilization

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122474#comment-16122474
 ] 

Hadoop QA commented on YARN-6668:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} YARN-6668 does not apply to YARN-1011. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6668 |
| GITHUB PR | https://github.com/apache/hadoop/pull/241 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16844/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Use cgroup to get container resource utilization
> 
>
> Key: YARN-6668
> URL: https://issues.apache.org/jira/browse/YARN-6668
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
> Attachments: YARN-6668.000.patch, YARN-6668.001.patch, 
> YARN-6668.002.patch, YARN-6668.003.patch, YARN-6668.004.patch, 
> YARN-6668.005.patch, YARN-6668.006.patch, YARN-6668.007.patch, 
> YARN-6668.008.patch
>
>
> Container Monitor relies on the proc file system to get container resource 
> utilization, which is not as efficient as reading cgroup accounting. In the 
> NM, when cgroups are enabled, we should read cgroup stats instead.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6903) Yarn-native-service framework core rewrite

2017-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122494#comment-16122494
 ] 

Hadoop QA commented on YARN-6903:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 62 new or modified test 
files. {color} |
|| || || || {color:brown} yarn-native-services Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
2s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
57s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
38s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
46s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  8m 
33s{color} | {color:green} yarn-native-services passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
51s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
16s{color} | {color:green} yarn-native-services passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  7m  0s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn generated 3 new + 133 unchanged - 
5 fixed = 136 total (was 138) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 44s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 518 new + 1523 unchanged - 405 fixed = 2041 total (was 1928) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  8m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
26s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
13s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 26 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
2s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
6s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
23s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 generated 5 new + 0 unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m 
40s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-yarn-slider in the patch failed. 

[jira] [Created] (YARN-6990) AmIpFilter:findRedirectUrl use HAServiceProtocol to getServiceStatus

2017-08-10 Thread yunjiong zhao (JIRA)
yunjiong zhao created YARN-6990:
---

 Summary: AmIpFilter:findRedirectUrl use HAServiceProtocol to 
getServiceStatus
 Key: YARN-6990
 URL: https://issues.apache.org/jira/browse/YARN-6990
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: yunjiong zhao


Because our ResourceManager hosts have multiple IPs, trying to access a proxy 
URL like https://*:50030/proxy/application_1502349494018_10877/ fails, because 
AmIpFilter uses HAServiceProtocol to find out which RM is active.
{code}
2017-08-10 10:51:42,344 WARN [971256592@qtp-666312528-0] 
org.apache.hadoop.ipc.Client: Exception encountered while connecting to the 
server :
org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
via:[KERBEROS]
at 
org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:172)
at 
org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
at 
org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:563)
at org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:378)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:732)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:728)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
at 
org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:727)
at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:378)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1492)
at org.apache.hadoop.ipc.Client.call(Client.java:1402)
at org.apache.hadoop.ipc.Client.call(Client.java:1363)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy84.getServiceStatus(Unknown Source)
at 
org.apache.hadoop.ha.protocolPB.HAServiceProtocolClientSideTranslatorPB.getServiceStatus(HAServiceProtocolClientSideTranslatorPB.java:122)
at org.apache.hadoop.yarn.util.RMHAUtils.getHAState(RMHAUtils.java:68)
at 
org.apache.hadoop.yarn.util.RMHAUtils.findActiveRMHAId(RMHAUtils.java:44)
at 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.findRedirectUrl(AmIpFilter.java:174)
at 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:138)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1243)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at 
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at 
org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at 
org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at 
org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at 
org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at 
org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at 
org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at 
org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
at 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
{code}

This can only happen when the RM has multiple IPs; the related code is inside 
the AmIpFilter.java doFilter function:
{code}
if (!getProxyAddresses().contains(httpReq.getRemoteAddr())) {
  // findRedirectUrl() issues the HAServiceProtocol getServiceStatus RPC
  // seen in the stack trace above, which fails without Kerberos credentials.
  String redirectUrl = findRedirectUrl();
  String target = redirectUrl + httpReq.getRequestURI();
  ProxyUtils.sendRedirect(httpReq, httpResp, target);
  return;
}
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6991) "Kill application" button does not show error if other user tries to kill the application for secure cluster

2017-08-10 Thread Sumana Sathish (JIRA)
Sumana Sathish created YARN-6991:


 Summary: "Kill application" button does not show error if other 
user tries to kill the application for secure cluster
 Key: YARN-6991
 URL: https://issues.apache.org/jira/browse/YARN-6991
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Sumana Sathish
Assignee: Suma Shivaprasad


1. Submit an application as user 1.
2. Log into the RM UI as user 2.
3. Kill the application submitted by user 1.
4. Even though the application does not get killed, no error/info dialog is 
shown to tell the user that they do not have permission to kill another 
user's application.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6990) AmIpFilter:findRedirectUrl use HAServiceProtocol to getServiceStatus

2017-08-10 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122534#comment-16122534
 ] 

Yufei Gu commented on YARN-6990:


Which version are you using? After YARN-6625, AmIpFilter doesn't use 
HAServiceProtocol to find the active RM.

> AmIpFilter:findRedirectUrl use HAServiceProtocol to getServiceStatus
> 
>
> Key: YARN-6990
> URL: https://issues.apache.org/jira/browse/YARN-6990
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: yunjiong zhao
>
> Because our ResourceManager hosts have multiple IPs, trying to access a proxy 
> URL like https://*:50030/proxy/application_1502349494018_10877/ fails, because 
> AmIpFilter uses HAServiceProtocol to find out which RM is active.
> {code}
> 2017-08-10 10:51:42,344 WARN [971256592@qtp-666312528-0] 
> org.apache.hadoop.ipc.Client: Exception encountered while connecting to the 
> server :
> org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
> via:[KERBEROS]
> at 
> org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:172)
> at 
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:563)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:378)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:732)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:728)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:727)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:378)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1492)
> at org.apache.hadoop.ipc.Client.call(Client.java:1402)
> at org.apache.hadoop.ipc.Client.call(Client.java:1363)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy84.getServiceStatus(Unknown Source)
> at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolClientSideTranslatorPB.getServiceStatus(HAServiceProtocolClientSideTranslatorPB.java:122)
> at org.apache.hadoop.yarn.util.RMHAUtils.getHAState(RMHAUtils.java:68)
> at 
> org.apache.hadoop.yarn.util.RMHAUtils.findActiveRMHAId(RMHAUtils.java:44)
> at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.findRedirectUrl(AmIpFilter.java:174)
> at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:138)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1243)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
> at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
> at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
> at 
> org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
> at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
> at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
> at org.mortbay.jetty.Server.handle(Server.java:326)
> at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
> at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
> at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
> at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
> at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
> at 
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
> at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> {code}
> This can only happen when the RM has multiple IPs; the related code is inside 
> the AmIpFilter.java doFilter function:
> {code}
> if 

[jira] [Created] (YARN-6992) "Kill application" button is present even if the application is FINISHED in RM UI

2017-08-10 Thread Sumana Sathish (JIRA)
Sumana Sathish created YARN-6992:


 Summary: "Kill application" button is present even if the 
application is FINISHED in RM UI
 Key: YARN-6992
 URL: https://issues.apache.org/jira/browse/YARN-6992
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Sumana Sathish
Assignee: Suma Shivaprasad






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6990) AmIpFilter:findRedirectUrl use HAServiceProtocol to getServiceStatus

2017-08-10 Thread yunjiong zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yunjiong zhao updated YARN-6990:

Affects Version/s: 2.7.0

> AmIpFilter:findRedirectUrl use HAServiceProtocol to getServiceStatus
> 
>
> Key: YARN-6990
> URL: https://issues.apache.org/jira/browse/YARN-6990
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: yunjiong zhao
>
> Because our ResourceManager hosts have multiple IPs, trying to access a proxy 
> URL like https://*:50030/proxy/application_1502349494018_10877/ fails, because 
> AmIpFilter uses HAServiceProtocol to find out which RM is active.
> {code}
> 2017-08-10 10:51:42,344 WARN [971256592@qtp-666312528-0] 
> org.apache.hadoop.ipc.Client: Exception encountered while connecting to the 
> server :
> org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
> via:[KERBEROS]
> at 
> org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:172)
> at 
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:563)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:378)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:732)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:728)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:727)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:378)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1492)
> at org.apache.hadoop.ipc.Client.call(Client.java:1402)
> at org.apache.hadoop.ipc.Client.call(Client.java:1363)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy84.getServiceStatus(Unknown Source)
> at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolClientSideTranslatorPB.getServiceStatus(HAServiceProtocolClientSideTranslatorPB.java:122)
> at org.apache.hadoop.yarn.util.RMHAUtils.getHAState(RMHAUtils.java:68)
> at 
> org.apache.hadoop.yarn.util.RMHAUtils.findActiveRMHAId(RMHAUtils.java:44)
> at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.findRedirectUrl(AmIpFilter.java:174)
> at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:138)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1243)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
> at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
> at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
> at 
> org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
> at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
> at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
> at org.mortbay.jetty.Server.handle(Server.java:326)
> at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
> at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
> at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
> at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
> at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
> at 
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
> at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> {code}
> This can only happen when the RM has multiple IPs; the related code is in 
> the doFilter function of AmIpFilter.java:
> {code}
> if (!getProxyAddresses().contains(httpReq.getRemoteAddr())) {
>   String redirectUrl = findRedirectUrl();
>   
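
The quoted snippet above is cut off by the mail archive. For reference, here 
is a hedged reconstruction of the relevant branch of doFilter, based on the 
Hadoop 2.7-era AmIpFilter (names and exact lines are approximate, not a 
verbatim copy):
{code}
// A request that does not arrive from a known proxy address is redirected
// to the active RM's proxy. Computing that redirect calls findRedirectUrl(),
// which goes through RMHAUtils.findActiveRMHAId() and issues the
// HAServiceProtocol.getServiceStatus() RPC seen in the stack trace above.
if (!getProxyAddresses().contains(httpReq.getRemoteAddr())) {
  String redirectUrl = findRedirectUrl();
  String target = redirectUrl + httpReq.getRequestURI();
  ProxyUtils.sendRedirect(httpReq, httpResp, target);
  return;
}
{code}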

[jira] [Commented] (YARN-6990) AmIpFilter:findRedirectUrl use HAServiceProtocol to getServiceStatus

2017-08-10 Thread yunjiong zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16122538#comment-16122538
 ] 

yunjiong zhao commented on YARN-6990:
-

I just found that YARN-6625 fixed this issue.
We use 2.7.
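
For anyone stuck on an older release line, the gist of the YARN-6625-style 
fix is to locate the active RM without issuing the Kerberos-authenticated 
HAServiceProtocol RPC. The sketch below illustrates that idea only; it is 
not the actual patch, and the class name, method, timeouts, and the choice 
of the /ws/v1/cluster/info endpoint are assumptions made for the example:
{code}
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Collection;

public class ActiveRmProbe {
  /**
   * Hypothetical helper: returns the first candidate RM web address that
   * answers 200 OK over plain HTTP, or null if none do. No Kerberos RPC
   * is involved, so multiple RM IPs do not trip SASL client selection.
   */
  public static String findActiveRmWebAddress(Collection<String> rmWebAddresses) {
    for (String addr : rmWebAddresses) {
      try {
        HttpURLConnection conn = (HttpURLConnection)
            new URL("http://" + addr + "/ws/v1/cluster/info").openConnection();
        conn.setConnectTimeout(2000);
        conn.setReadTimeout(2000);
        // A standby RM usually redirects to the active one; do not follow,
        // so that only the active RM yields a plain 200.
        conn.setInstanceFollowRedirects(false);
        if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
          return addr;
        }
      } catch (IOException ignored) {
        // Unreachable or not serving; try the next candidate.
      }
    }
    return null;
  }
}
{code}
Treating a plain 200 as the activity signal is the key design choice here: 
it trades the authenticated HA status RPC for a cheap, unauthenticated 
liveness probe, which is all the redirect logic needs.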

> AmIpFilter:findRedirectUrl use HAServiceProtocol to getServiceStatus
> 
>
> Key: YARN-6990
> URL: https://issues.apache.org/jira/browse/YARN-6990
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
>
> Because our ResourceManagers have multiple IPs, accessing a proxy URL 
> like https://*:50030/proxy/application_1502349494018_10877/ fails: the 
> filter uses HAServiceProtocol to find out which RM is active.
> {code}
> 2017-08-10 10:51:42,344 WARN [971256592@qtp-666312528-0] 
> org.apache.hadoop.ipc.Client: Exception encountered while connecting to the 
> server :
> org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
> via:[KERBEROS]
> at 
> org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:172)
> at 
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:563)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:378)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:732)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:728)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:727)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:378)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1492)
> at org.apache.hadoop.ipc.Client.call(Client.java:1402)
> at org.apache.hadoop.ipc.Client.call(Client.java:1363)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy84.getServiceStatus(Unknown Source)
> at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolClientSideTranslatorPB.getServiceStatus(HAServiceProtocolClientSideTranslatorPB.java:122)
> at org.apache.hadoop.yarn.util.RMHAUtils.getHAState(RMHAUtils.java:68)
> at 
> org.apache.hadoop.yarn.util.RMHAUtils.findActiveRMHAId(RMHAUtils.java:44)
> at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.findRedirectUrl(AmIpFilter.java:174)
> at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:138)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1243)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
> at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
> at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
> at 
> org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
> at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
> at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
> at org.mortbay.jetty.Server.handle(Server.java:326)
> at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
> at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
> at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
> at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
> at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
> at 
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
> at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> {code}
> This can only happen when the RM has multiple IPs; the related code is in 
> the doFilter function of AmIpFilter.java:
> {code}
> if 

[jira] [Resolved] (YARN-6990) AmIpFilter:findRedirectUrl use HAServiceProtocol to getServiceStatus

2017-08-10 Thread yunjiong zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yunjiong zhao resolved YARN-6990.
-
Resolution: Duplicate
  Assignee: yunjiong zhao

> AmIpFilter:findRedirectUrl use HAServiceProtocol to getServiceStatus
> 
>
> Key: YARN-6990
> URL: https://issues.apache.org/jira/browse/YARN-6990
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
>
> Because our ResourceManagers have multiple IPs, accessing a proxy URL 
> like https://*:50030/proxy/application_1502349494018_10877/ fails: the 
> filter uses HAServiceProtocol to find out which RM is active.
> {code}
> 2017-08-10 10:51:42,344 WARN [971256592@qtp-666312528-0] 
> org.apache.hadoop.ipc.Client: Exception encountered while connecting to the 
> server :
> org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
> via:[KERBEROS]
> at 
> org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:172)
> at 
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:563)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:378)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:732)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:728)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:727)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:378)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1492)
> at org.apache.hadoop.ipc.Client.call(Client.java:1402)
> at org.apache.hadoop.ipc.Client.call(Client.java:1363)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy84.getServiceStatus(Unknown Source)
> at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolClientSideTranslatorPB.getServiceStatus(HAServiceProtocolClientSideTranslatorPB.java:122)
> at org.apache.hadoop.yarn.util.RMHAUtils.getHAState(RMHAUtils.java:68)
> at 
> org.apache.hadoop.yarn.util.RMHAUtils.findActiveRMHAId(RMHAUtils.java:44)
> at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.findRedirectUrl(AmIpFilter.java:174)
> at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:138)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1243)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
> at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
> at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
> at 
> org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
> at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
> at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
> at org.mortbay.jetty.Server.handle(Server.java:326)
> at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
> at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
> at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
> at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
> at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
> at 
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
> at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> {code}
> This can only happen when the RM has multiple IPs; the related code is in 
> the doFilter function of AmIpFilter.java:
> {code}
> if 
