[jira] [Commented] (YARN-8980) Mapreduce application container start fail after AM restart.

2018-11-10 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682770#comment-16682770
 ] 

Bibin A Chundatt commented on YARN-8980:


[~botong]/[~subru]

The issue is not completely related to the YARN-8898 discussion, but one of the 
solutions (Solution 2) depends on it.

AMRMProxy HA works by registering the UAM with the same application attempt ID.
ApplicationMasterService#registerApplicationMaster
{code:java}
if (!(appContext.getUnmanagedAM()
    && appContext.getKeepContainersAcrossApplicationAttempts())) {
{code}
Solutions:
 # DefaultAMSProcessor#registerApplicationMaster clears the node set in the 
NMTokenSecretManager after the previous attempt containers are set. This will 
make sure allocated containers get NMTokens again for the same hostname.
{code:java}
ApplicationSubmissionContext applicationSubmissionContext =
    app.getApplicationSubmissionContext();
if (applicationSubmissionContext.getUnmanagedAM()
    && applicationSubmissionContext
        .getKeepContainersAcrossApplicationAttempts()) {
  rmContext.getNMTokenSecretManager()
      .clearNodeSetForAttempt(applicationAttemptId);
}
response.setSchedulerResourceTypes(
    getScheduler().getSchedulingResourceTypes());
{code}
 # Handle it in the FederationInterceptor by adding the NMTokens received during 
recovery to the first allocate response (a minimal sketch follows).
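
For illustration, a rough sketch of the FederationInterceptor side of Solution 2; 
the field and the merge placement are hypothetical (not an actual patch), and only 
the standard RegisterApplicationMasterResponse/AllocateResponse getters and setters 
are existing API:
{code:java}
// Hypothetical sketch inside FederationInterceptor: stash the NMTokens returned
// for the previous attempt's containers at registration time, then merge them
// into the first allocate response handed back to the AM.
private final List<NMToken> nmTokensFromRegistration = new ArrayList<>();

// after registerApplicationMaster() returns:
nmTokensFromRegistration.addAll(
    registerResponse.getNMTokensFromPreviousAttempts());

// while building the first merged allocate response:
if (!nmTokensFromRegistration.isEmpty()) {
  List<NMToken> merged = new ArrayList<>(allocateResponse.getNMTokens());
  merged.addAll(nmTokensFromRegistration);
  allocateResponse.setNMTokens(merged);
  nmTokensFromRegistration.clear();
}
{code}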

> Mapreduce application container start  fail after AM restart.
> -
>
> Key: YARN-8980
> URL: https://issues.apache.org/jira/browse/YARN-8980
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Priority: Major
>
> UAMs to subclusters are always launched with keepContainers.
> On AM restart, the UAM registers again with the RM and receives the running 
> containers along with NMTokens. The NMTokens received by the UAM in 
> getPreviousAttemptContainersNMToken are never used by the mapreduce application.  
> The Federation Interceptor should take care of such scenarios too: merge the 
> NMTokens received at registration into the allocate response.
> Container allocation responses on the same node will otherwise have an empty NMToken.
>  






[jira] [Commented] (YARN-8933) [AMRMProxy] Fix potential empty fields in allocation response, move SubClusterTimeout to FederationInterceptor

2018-11-10 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682747#comment-16682747
 ] 

Bibin A Chundatt commented on YARN-8933:


Thank you [~botong] for the explanation.

+1 for the latest patch.

Could you recheck/retrigger the Jenkins build?

> [AMRMProxy] Fix potential empty fields in allocation response, move 
> SubClusterTimeout to FederationInterceptor
> --
>
> Key: YARN-8933
> URL: https://issues.apache.org/jira/browse/YARN-8933
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: amrmproxy, federation
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-8933.v1.patch, YARN-8933.v2.patch, 
> YARN-8933.v3.patch
>
>
> After YARN-8696, the allocate response by FederationInterceptor is merged 
> from the responses from a random subset of all sub-clusters, depending on the 
> async heartbeat timing. As a result, cluster-wide information fields in the 
> response, e.g. AvailableResources and NumClusterNodes, are not consistent at 
> all. It can even be null/zero because the specific response is merged from an 
> empty set of sub-cluster responses. 
> In this patch, we let FederationInterceptor remember the last allocate 
> response from all known sub-clusters, and always construct the cluster-wide 
> info fields from all of them. We also moved sub-cluster timeout from 
> LocalityMulticastAMRMProxyPolicy to FederationInterceptor, so that 
> sub-clusters that expired (haven't had a successful allocate response for a 
> while) won't be included in the computation.
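
For illustration only, a rough sketch of the idea described above; the names and 
structure are assumptions, not the contents of the attached patches:
{code:java}
// Hypothetical sketch: remember the last allocate response per sub-cluster and
// rebuild the cluster-wide fields from all known (non-expired) sub-clusters.
private final Map<SubClusterId, AllocateResponse> lastResponse = new HashMap<>();

private void mergeClusterWideInfo(AllocateResponse merged) {
  int numNodes = 0;
  Resource available = Resource.newInstance(0, 0);
  for (AllocateResponse r : lastResponse.values()) { // expired entries pruned elsewhere
    numNodes += r.getNumClusterNodes();
    if (r.getAvailableResources() != null) {
      Resources.addTo(available, r.getAvailableResources());
    }
  }
  merged.setNumClusterNodes(numNodes);
  merged.setAvailableResources(available);
}
{code}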






[jira] [Commented] (YARN-8833) compute shares may lock the scheduling process

2018-11-10 Thread liyakun (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682714#comment-16682714
 ] 

liyakun commented on YARN-8833:
---

Thank you both.

This is a single cluster based on Hadoop 2.6.0, and lots of improvements have 
been done to it.

I will submit this patch as soon as I can.
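
For reference, a minimal sketch of one possible fix, assuming it simply widens the 
accumulator (and the method's return type) to long so the sum of per-queue shares 
can no longer wrap negative; this is an assumption, not the actual content of the patch:
{code:java}
// Hypothetical fix sketch: accumulate in long so that, e.g., three queues with
// min share = max share = Integer.MAX_VALUE no longer overflow to a negative
// value, and the rMax doubling loop in computeSharesInternal() terminates.
private static long resourceUsedWithWeightToResourceRatio(double w2rRatio,
    Collection<? extends Schedulable> schedulables, String type) {
  long resourcesTaken = 0;
  for (Schedulable sched : schedulables) {
    resourcesTaken += computeShare(sched, w2rRatio, type);
  }
  return resourcesTaken;
}
{code}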

> compute shares may  lock the scheduling process
> ---
>
> Key: YARN-8833
> URL: https://issues.apache.org/jira/browse/YARN-8833
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: liyakun
>Assignee: liyakun
>Priority: Major
>
> When using w2rRatio to compute fair shares, there is a chance of triggering an 
> int overflow and entering an infinite loop.
> Since the compute-shares thread holds the writeLock, it may block the 
> scheduling thread.
> This issue occurred in a production environment with 8500 nodes, and we have 
> already fixed it.
>  
> added 2018-10-29: elaborating the problem 
> /**
>  * Compute the resources that would be used given a weight-to-resource ratio
>  * w2rRatio, for use in the computeFairShares algorithm as described in #
>  */
> private static int resourceUsedWithWeightToResourceRatio(double w2rRatio,
>     Collection<? extends Schedulable> schedulables, String type) {
>   int resourcesTaken = 0;
>   for (Schedulable sched : schedulables) {
>     int share = computeShare(sched, w2rRatio, type);
>     resourcesTaken += share;
>   }
>   return resourcesTaken;
> }
> The variable resourcesTaken is an int. It accumulates the results of 
> computeShare(Schedulable sched, double w2rRatio, String type), each of which is 
> a value between the min share and max share of a queue.
> For example, when there are 3 queues, each with min share = max share = 
> Integer.MAX_VALUE, resourcesTaken overflows the int range and becomes a 
> negative number.
> When resourceUsedWithWeightToResourceRatio(double w2rRatio, 
> Collection<? extends Schedulable> schedulables, String type) returns a negative 
> number, the loop in computeSharesInternal(), which holds the scheduler lock, 
> may never exit.
>  
> // org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.ComputeFairShares
> while (resourceUsedWithWeightToResourceRatio(rMax, schedulables, type)
>     < totalResource) {
>   rMax *= 2.0;
> }
> This may block the scheduling thread.
>  






[jira] [Commented] (YARN-9008) Extend YARN distributed shell with file localization feature

2018-11-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682579#comment-16682579
 ] 

Hadoop QA commented on YARN-9008:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell:
 The patch generated 1 new + 135 unchanged - 0 fixed = 136 total (was 135) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
33s{color} | {color:green} hadoop-yarn-applications-distributedshell in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9008 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947718/YARN-9008-002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 37cc5b1d2b46 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2664248 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/22501/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
 |
|  Test Results | 

[jira] [Updated] (YARN-9008) Extend YARN distributed shell with file localization feature

2018-11-10 Thread Peter Bacsko (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-9008:
---
Attachment: YARN-9008-002.patch

> Extend YARN distributed shell with file localization feature
> 
>
> Key: YARN-9008
> URL: https://issues.apache.org/jira/browse/YARN-9008
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9008-001.patch, YARN-9008-002.patch
>
>
> YARN distributed shell is a very handy tool to test various features of YARN.
> However, it lacks support for file localization - that is, you define files 
> in the command line that you wish to be localized remotely. This can be 
> extremely useful in certain scenarios.






[jira] [Commented] (YARN-9008) Extend YARN distributed shell with file localization feature

2018-11-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682543#comment-16682543
 ] 

Hadoop QA commented on YARN-9008:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell:
 The patch generated 1 new + 135 unchanged - 0 fixed = 136 total (was 135) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
57s{color} | {color:green} hadoop-yarn-applications-distributedshell in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9008 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947714/YARN-9008-001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 992bf2e0d9c2 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2664248 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/22500/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
 |
|  Test Results | 

[jira] [Updated] (YARN-9008) Extend YARN distributed shell with file localization feature

2018-11-10 Thread Peter Bacsko (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-9008:
---
Description: 
YARN distributed shell is a very handy tool to test various features of YARN.

However, it lacks support for file localization - that is, you define files in 
the command line that you wish to be localized remotely. This can be extremely 
useful in certain scenarios.

  was:
YARN distributed shell is a very handy tool to test various features of YARN.

However, it lacks support for file localization - that is, you define files in 
the command like that you wish to be localized remotely. This can be extremely 
useful in certain scenarios.


> Extend YARN distributed shell with file localization feature
> 
>
> Key: YARN-9008
> URL: https://issues.apache.org/jira/browse/YARN-9008
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9008-001.patch
>
>
> YARN distributed shell is a very handy tool to test various features of YARN.
> However, it lacks support for file localization - that is, you define files 
> in the command line that you wish to be localized remotely. This can be 
> extremely useful in certain scenarios.






[jira] [Updated] (YARN-9008) Extend YARN distributed shell with file localization feature

2018-11-10 Thread Peter Bacsko (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-9008:
---
Attachment: YARN-9008-001.patch

> Extend YARN distributed shell with file localization feature
> 
>
> Key: YARN-9008
> URL: https://issues.apache.org/jira/browse/YARN-9008
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9008-001.patch
>
>
> YARN distributed shell is a very handy tool to test various features of YARN.
> However, it lacks support for file localization - that is, you define files 
> in the command line that you wish to be localized remotely. This can be 
> extremely useful in certain scenarios.






[jira] [Created] (YARN-9008) Extend YARN distributed shell with file localization feature

2018-11-10 Thread Peter Bacsko (JIRA)
Peter Bacsko created YARN-9008:
--

 Summary: Extend YARN distributed shell with file localization 
feature
 Key: YARN-9008
 URL: https://issues.apache.org/jira/browse/YARN-9008
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: yarn
Affects Versions: 3.1.1, 2.9.1
Reporter: Peter Bacsko
Assignee: Peter Bacsko


YARN distributed shell is a very handy tool to test various features of YARN.

However, it lacks support for file localization - that is, you define files in 
the command line that you wish to be localized remotely. This can be extremely 
useful in certain scenarios.






[jira] [Commented] (YARN-8980) Mapreduce application container start fail after AM restart.

2018-11-10 Thread Botong Huang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682485#comment-16682485
 ] 

Botong Huang commented on YARN-8980:


Thanks [~bibinchundatt] for reporting. This is in line with the discussion we are 
having in YARN-8898. Basically, it is better to use the original 
_ApplicationSubmissionContext_ of the app when launching the UAMs. We will 
probably need to go with Solution 2 discussed there: push the 
applicationSubmissionContext to the federation store at the Router side as well. 
[~subru], what do you think? 

> Mapreduce application container start  fail after AM restart.
> -
>
> Key: YARN-8980
> URL: https://issues.apache.org/jira/browse/YARN-8980
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Priority: Major
>
> UAMs to subclusters are always launched with keepContainers.
> On AM restart, the UAM registers again with the RM and receives the running 
> containers along with NMTokens. The NMTokens received by the UAM in 
> getPreviousAttemptContainersNMToken are never used by the mapreduce application.  
> The Federation Interceptor should take care of such scenarios too: merge the 
> NMTokens received at registration into the allocate response.
> Container allocation responses on the same node will otherwise have an empty NMToken.
>  






[jira] [Commented] (YARN-9002) YARN Service keytab does not support s3, wasb, gs and is restricted to HDFS and local filesystem only

2018-11-10 Thread Gour Saha (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682434#comment-16682434
 ] 

Gour Saha commented on YARN-9002:
-

Thanks a lot [~eyang]

> YARN Service keytab does not support s3, wasb, gs and is restricted to HDFS 
> and local filesystem only
> -
>
> Key: YARN-9002
> URL: https://issues.apache.org/jira/browse/YARN-9002
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Affects Versions: 3.1.1
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Fix For: 3.1.2, 3.3.0, 3.2.1
>
> Attachments: YARN-9002-branch-3.1.001.patch, 
> YARN-9002-branch-3.1.002.patch, YARN-9002.001.patch
>
>
> ServiceClient.java specifically checks if the keytab URI scheme is hdfs or 
> file. This restricts it from supporting other FileSystem API conforming FSs 
> like s3a, wasb, gs, etc.
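
For illustration only, a sketch of scheme-agnostic handling through the Hadoop 
FileSystem API; the surrounding code and variable names are hypothetical, and this 
is not the committed patch:
{code:java}
// Hypothetical sketch: let the FileSystem API resolve the scheme instead of
// hard-coding "hdfs"/"file", so s3a, wasb, gs, etc. work as well.
URI keytabUri = new URI(keytab);
if ("file".equals(keytabUri.getScheme())) {
  // local keytab, expected to be present on every host
} else {
  FileSystem fs = FileSystem.get(keytabUri, conf);
  Path keytabPath = new Path(keytabUri);
  if (!fs.exists(keytabPath)) {
    throw new FileNotFoundException("Keytab not found: " + keytabUri);
  }
  // localize keytabPath for the AM regardless of the scheme
}
{code}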






[jira] [Updated] (YARN-9002) YARN Service keytab does not support s3, wasb, gs and is restricted to HDFS and local filesystem only

2018-11-10 Thread Gour Saha (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-9002:

Summary: YARN Service keytab does not support s3, wasb, gs and is 
restricted to HDFS and local filesystem only  (was: YARN Service keytab 
location is restricted to HDFS and local filesystem only)

> YARN Service keytab does not support s3, wasb, gs and is restricted to HDFS 
> and local filesystem only
> -
>
> Key: YARN-9002
> URL: https://issues.apache.org/jira/browse/YARN-9002
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Affects Versions: 3.1.1
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Fix For: 3.1.2, 3.3.0, 3.2.1
>
> Attachments: YARN-9002-branch-3.1.001.patch, 
> YARN-9002-branch-3.1.002.patch, YARN-9002.001.patch
>
>
> ServiceClient.java specifically checks if the keytab URI scheme is hdfs or 
> file. This restricts it from supporting other FileSystem API conforming FSs 
> like s3a, wasb, gs, etc.






[jira] [Commented] (YARN-8960) [Submarine] Can't get submarine service status using the command of "yarn app -status" under security environment

2018-11-10 Thread Zac Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682419#comment-16682419
 ] 

Zac Zhou commented on YARN-8960:


Maybe we need both of them.

To enable "yarn app -status", the submarine service should have a service principal.

And if we want to make it convenient for a notebook app, like Zeppelin, to 
submit submarine apps on behalf of different users (as Spark does), we need a 
user principal parameter to specify who submits the job.

Or we can just have one principal parameter and use it as both the service 
principal and the user principal?

[~leftnoteasy], [~sunilg] any comments?

 

> [Submarine] Can't get submarine service status using the command of "yarn app 
> -status" under security environment
> -
>
> Key: YARN-8960
> URL: https://issues.apache.org/jira/browse/YARN-8960
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zac Zhou
>Assignee: Zac Zhou
>Priority: Major
> Attachments: YARN-8960.001.patch, YARN-8960.002.patch, 
> YARN-8960.003.patch
>
>
> After submitting a submarine job, we tried to get service status using the 
> following command:
> yarn app -status ${service_name}
> But we got the following error:
> HTTP error code : 500
>  
> The stack in resourcemanager log is :
> ERROR org.apache.hadoop.yarn.service.webapp.ApiServer: Get service failed: {}
> java.lang.reflect.UndeclaredThrowableException
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1748)
>  at 
> org.apache.hadoop.yarn.service.webapp.ApiServer.getServiceFromClient(ApiServer.java:800)
>  at 
> org.apache.hadoop.yarn.service.webapp.ApiServer.getService(ApiServer.java:186)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
>  at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker
> ._dispatch(AbstractResourceMethodDispatchProvider.java:205)
>  at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodD
> ispatcher.java:75)
>  at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
>  at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>  at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
>  at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>  at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
>  at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
>  at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
>  at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
>  at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
>  at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
>  at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
>  at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
>  at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>  at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
>  at 
> com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:89)
>  at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:941)
>  at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:875)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebAppFilter.doFilter(RMWebAppFilter.java:179)
>  at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:829)
>  at 
> com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82)
>  at 
> com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:119)
>  at com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:133)
>  at com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:130)
>  at 

[jira] [Commented] (YARN-9007) CS preemption monitor should only select GUARANTEED containers as candidates

2018-11-10 Thread Tao Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682402#comment-16682402
 ] 

Tao Yang commented on YARN-9007:


Attached v1 patch. 
[~leftnoteasy], [~sunilg], could you help to review at your convenience? Thanks.

> CS preemption monitor should only select GUARANTEED containers as candidates
> 
>
> Key: YARN-9007
> URL: https://issues.apache.org/jira/browse/YARN-9007
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.2.1
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
> Attachments: YARN-9007.001.patch
>
>
> Currently the CS preemption monitor doesn't consider the execution type of 
> containers, so OPPORTUNISTIC containers may be selected and killed without 
> effect.
> In scenarios with OPPORTUNISTIC containers, not only can preemption fail to 
> balance resources properly, but some apps with OPPORTUNISTIC containers may 
> also be affected and unable to work.
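
For illustration, a minimal sketch of the candidate filter the summary describes; 
the surrounding loop and variable names are hypothetical and not necessarily what 
the attached patch does:
{code:java}
// Hypothetical sketch: only consider GUARANTEED containers as preemption
// candidates, since killing an OPPORTUNISTIC container frees no guaranteed capacity.
for (RMContainer candidate : potentialCandidates) {
  if (candidate.getExecutionType() != ExecutionType.GUARANTEED) {
    continue;
  }
  selectedCandidates.add(candidate);
}
{code}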






[jira] [Commented] (YARN-8925) Updating distributed node attributes only when necessary

2018-11-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682382#comment-16682382
 ] 

Hadoop QA commented on YARN-8925:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
0s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 19s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 6 new + 343 unchanged - 0 fixed = 349 total (was 343) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
49s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
36s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
42s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
35s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 51s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}229m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| 

[jira] [Commented] (YARN-8233) NPE in CapacityScheduler#tryCommit when handling allocate/reserve proposal whose allocatedOrReservedContainer is null

2018-11-10 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682380#comment-16682380
 ] 

Akira Ajisaka commented on YARN-8233:
-

The javac warning is trivial because it is in test code. Committing this. We 
can fix it in a separate jira.

> NPE in CapacityScheduler#tryCommit when handling allocate/reserve proposal 
> whose allocatedOrReservedContainer is null
> -
>
> Key: YARN-8233
> URL: https://issues.apache.org/jira/browse/YARN-8233
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.2.0
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: YARN-8233.001-branch-3.1-test.patch, 
> YARN-8233.001-test-branch-3.1.patch, YARN-8233.001.branch-2.patch, 
> YARN-8233.001.branch-2.patch, YARN-8233.001.branch-3.0.patch, 
> YARN-8233.001.branch-3.0.patch, YARN-8233.001.branch-3.1.patch, 
> YARN-8233.001.branch-3.1.patch, YARN-8233.001.patch, YARN-8233.002.patch, 
> YARN-8233.003.patch
>
>
> Recently we saw an NPE problem in CapacityScheduler#tryCommit when trying to find 
> the attemptId by calling {{c.getAllocatedOrReservedContainer().get...}} from 
> an allocate/reserve proposal. It got a null allocatedOrReservedContainer and 
> threw an NPE.
> Reference code:
> {code:java}
> // find the application to accept and apply the ResourceCommitRequest
> if (request.anythingAllocatedOrReserved()) {
>   ContainerAllocationProposal c =
>   request.getFirstAllocatedOrReservedContainer();
>   attemptId =
>   c.getAllocatedOrReservedContainer().getSchedulerApplicationAttempt()
>   .getApplicationAttemptId();   //NPE happens here
> } else { ...
> {code}
> The proposal was constructed in 
> {{CapacityScheduler#createResourceCommitRequest}} and 
> allocatedOrReservedContainer is possibly null in async-scheduling process 
> when node was lost or application was finished (details in 
> {{CapacityScheduler#getSchedulerContainer}}).
> Reference code:
> {code:java}
>   // Allocated something
>   List allocations =
>   csAssignment.getAssignmentInformation().getAllocationDetails();
>   if (!allocations.isEmpty()) {
> RMContainer rmContainer = allocations.get(0).rmContainer;
> allocated = new ContainerAllocationProposal<>(
> getSchedulerContainer(rmContainer, true),   //possibly null
> getSchedulerContainersToRelease(csAssignment),
> 
> getSchedulerContainer(csAssignment.getFulfilledReservedContainer(),
> false), csAssignment.getType(),
> csAssignment.getRequestLocalityType(),
> csAssignment.getSchedulingMode() != null ?
> csAssignment.getSchedulingMode() :
> SchedulingMode.RESPECT_PARTITION_EXCLUSIVITY,
> csAssignment.getResource());
>   }
> {code}
> I think we should add a null check for allocatedOrReservedContainer before creating 
> allocate/reserve proposals. Besides, the allocation process has already increased the 
> unconfirmed resource of the app when creating an allocate assignment, so if this 
> check finds null, we should decrease the unconfirmed resource of the live app.
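
For illustration, a minimal sketch of the null check described above; the rollback 
helper and its placement are assumptions, not the committed patch:
{code:java}
// Hypothetical sketch: guard against a missing scheduler container (node lost /
// app finished) before building the allocate proposal, and roll back the
// unconfirmed resource that was added for this assignment.
RMContainer rmContainer = allocations.get(0).rmContainer;
if (getSchedulerContainer(rmContainer, true) == null) {
  FiCaSchedulerApp app = csAssignment.getApplication();
  app.decUnconfirmedRes(csAssignment.getResource()); // assumed rollback helper
  return null; // skip creating the allocate proposal
}
{code}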






[jira] [Updated] (YARN-9007) CS preemption monitor should only select GUARANTEED containers as candidates

2018-11-10 Thread Tao Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-9007:
---
Attachment: YARN-9007.001.patch

> CS preemption monitor should only select GUARANTEED containers as candidates
> 
>
> Key: YARN-9007
> URL: https://issues.apache.org/jira/browse/YARN-9007
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.2.1
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
> Attachments: YARN-9007.001.patch
>
>
> Currently the CS preemption monitor doesn't consider the execution type of 
> containers, so OPPORTUNISTIC containers may be selected and killed without 
> effect.
> In scenarios with OPPORTUNISTIC containers, not only can preemption fail to 
> balance resources properly, but some apps with OPPORTUNISTIC containers may 
> also be affected and unable to work.






[jira] [Commented] (YARN-8586) Extract log aggregation related fields and methods from RMAppImpl

2018-11-10 Thread JIRA


[ 
https://issues.apache.org/jira/browse/YARN-8586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682364#comment-16682364
 ] 

Antal Bálint Steinbach commented on YARN-8586:
--

Hi [~snemeth],

Thanks for the patch. The extraction makes sense.

I have 1 suggestion:

 
{code:java}
if (!app.logAggregation
.hasReportForNodeManager(nodeAddedEvent.getNodeId())) {
  app.logAggregation.addReportForNodeManager(nodeAddedEvent.getNodeId(),
  LogAggregationReport.newInstance(app.applicationId,
  app.logAggregation.isEnabled() ?
  LogAggregationStatus.NOT_START
  : LogAggregationStatus.DISABLED, ""));
}
{code}
This can also be moved to the extracted class, for example:

 
{code:java}
app.logAggregation.addReportIfNecessary(nodeAddedEvent.getNodeId(), 
app.applicationId);

public void addReportIfNecessary(NodeId nodeId, ApplicationId applicationId) {
  if (!hasReportForNodeManager(nodeId)) {
LogAggregationStatus status = isEnabled() ? LogAggregationStatus.NOT_START
: LogAggregationStatus.DISABLED;
addReportForNodeManager(nodeId,
LogAggregationReport.newInstance(applicationId, status, ""));
  }
}
{code}
 

> Extract log aggregation related fields and methods from RMAppImpl
> -
>
> Key: YARN-8586
> URL: https://issues.apache.org/jira/browse/YARN-8586
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-8586.001.patch
>
>
> Given that RMAppImpl is already above 2000 lines and it is very complex, as a 
> very simple 
> and straightforward step, all Log aggregation related fields and methods 
> could be extracted to a new class.
> The clients of RMAppImpl may access the same methods and RMAppImpl would 
> delegate all those calls to the newly introduced class.






[jira] [Created] (YARN-9007) CS preemption monitor should only select GUARANTEED containers as candidates

2018-11-10 Thread Tao Yang (JIRA)
Tao Yang created YARN-9007:
--

 Summary: CS preemption monitor should only select GUARANTEED 
containers as candidates
 Key: YARN-9007
 URL: https://issues.apache.org/jira/browse/YARN-9007
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler
Affects Versions: 3.2.1
Reporter: Tao Yang
Assignee: Tao Yang


Currently the CS preemption monitor doesn't consider the execution type of 
containers, so OPPORTUNISTIC containers may be selected and killed without effect.
In scenarios with OPPORTUNISTIC containers, not only can preemption fail to 
balance resources properly, but some apps with OPPORTUNISTIC containers may also 
be affected and unable to work.






[jira] [Commented] (YARN-8505) AMLimit and userAMLimit check should be skipped for unmanaged AM

2018-11-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682360#comment-16682360
 ] 

Hadoop QA commented on YARN-8505:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 23 new + 683 unchanged - 2 fixed = 706 total (was 685) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}119m 40s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}176m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-8505 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947688/YARN-8505.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b34ae38af760 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2664248 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/22496/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/22496/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 

[jira] [Commented] (YARN-5106) Provide a builder interface for FairScheduler allocations for use in tests

2018-11-10 Thread JIRA


[ 
https://issues.apache.org/jira/browse/YARN-5106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682352#comment-16682352
 ] 

Antal Bálint Steinbach commented on YARN-5106:
--

Hi [~snemeth],

Thanks for the patch, it cleans up the code a lot.

I would suggest some minor things:
 # In AllocationFileQueue, _renderInternal_ is not necessary.
 # Not strongly related to your fix, but in some test classes 
(TestAllocationFileLoaderService) you added a deletion for the alloc file; 
still, some test cases do not delete it. I did not check all of them, but 
ReservationACLsTestBase is one example. There can be others.

> Provide a builder interface for FairScheduler allocations for use in tests
> --
>
> Key: YARN-5106
> URL: https://issues.apache.org/jira/browse/YARN-5106
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Szilard Nemeth
>Priority: Major
>  Labels: newbie++
> Attachments: YARN-5106.001.patch, YARN-5106.002.patch
>
>
> Most, if not all, fair scheduler tests create an allocations XML file. Having 
> a helper class that potentially uses a builder would make the tests cleaner. 






[jira] [Commented] (YARN-8233) NPE in CapacityScheduler#tryCommit when handling allocate/reserve proposal whose allocatedOrReservedContainer is null

2018-11-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682345#comment-16682345
 ] 

Hadoop QA commented on YARN-8233:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m  
1s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
17s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 43s{color} 
| {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 13 unchanged - 1 fixed = 14 total (was 14) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 82 unchanged - 0 fixed = 83 total (was 82) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 63m 
46s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:a716388 |
| JIRA Issue | YARN-8233 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947689/YARN-8233.001.branch-2.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 56a4f718cb6c 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / 5e433e5 |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Default Java | 1.7.0_181 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/22497/artifact/out/diff-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/22497/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22497/testReport/ |
| Max. process+thread count | 846 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| 

[jira] [Commented] (YARN-9001) [Submarine] Use AppAdminClient instead of ServiceClient to submit jobs

2018-11-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682341#comment-16682341
 ] 

Hadoop QA commented on YARN-9001:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications: The patch generated 2 
new + 45 unchanged - 1 fixed = 47 total (was 46) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
41s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
25s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-submarine in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine
 |
|  |  Possible doublecheck on 
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobMonitor.serviceClient
 in 
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobMonitor.getTrainingJobStatus(String)
  At 
YarnServiceJobMonitor.java:org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobMonitor.getTrainingJobStatus(String)
  At YarnServiceJobMonitor.java:[lines 38-40] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce 

[jira] [Commented] (YARN-7461) DominantResourceCalculator#ratio calculation problem when right resource contains zero value

2018-11-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682340#comment-16682340
 ] 

Hadoop QA commented on YARN-7461:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} YARN-7461 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7461 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914845/YARN-7461.004.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/22499/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> DominantResourceCalculator#ratio calculation problem when right resource 
> contains zero value
> 
>
> Key: YARN-7461
> URL: https://issues.apache.org/jira/browse/YARN-7461
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Minor
> Attachments: YARN-7461.001.patch, YARN-7461.002.patch, 
> YARN-7461.003.patch, YARN-7461.004.patch
>
>
> Currently DominantResourceCalculator#ratio may return a wrong result when the
> right resource contains a zero value. For example, with three resource types,
> leftResource=<5, 5, 0> and rightResource=<10, 10, 0>, we expect
> DominantResourceCalculator#ratio(leftResource, rightResource) to be 0.5, but
> currently it is NaN.
> There should be a verification before the division to ensure that the
> divisor is not zero.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8987) Usability improvements node-attributes CLI

2018-11-10 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682334#comment-16682334
 ] 

Bibin A Chundatt commented on YARN-8987:


[~cheersyang] Seems there is some issue with jenkins

> Usability improvements node-attributes CLI
> --
>
> Key: YARN-8987
> URL: https://issues.apache.org/jira/browse/YARN-8987
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Priority: Critical
> Attachments: YARN-8987.001.patch, YARN-8987.002.patch, 
> YARN-8987.003.patch
>
>
> I set up a single-node cluster, then tried to add node-attributes with the CLI.
> First I tried:
> {code:java}
> ./bin/yarn nodeattributes -add localhost:hostname(STRING)=localhost
> {code}
> This command returns exit code 0, however the node-attribute was not added.
> Then I tried to replace "localhost" with the host ID, and it worked.
> We need to ensure the command fails with a proper error message when adding
> does not succeed.
> Similarly, when I remove a node-attribute that doesn't exist, I still get
> return code 0.
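
A minimal sketch of the behaviour asked for above; the class and method names here are
hypothetical and not the actual NodeAttributesCLI code, it only illustrates returning a
non-zero exit code with a clear message when the add fails:
{code:java}
// Hypothetical sketch, not the real CLI: fail fast with a non-zero exit code
// and a clear error message when the target node is unknown.
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public final class AddAttributeSketch {

  static int addAttribute(Set<String> knownNodes, String node, String attribute) {
    if (!knownNodes.contains(node)) {
      System.err.println("Failed to add attribute '" + attribute
          + "': unknown node '" + node + "'");
      return 1; // non-zero so callers and scripts can detect the failure
    }
    System.out.println("Added " + attribute + " on " + node);
    return 0;
  }

  public static void main(String[] args) {
    Set<String> nodes = new HashSet<>(Collections.singletonList("host-1234"));
    System.exit(addAttribute(nodes, "localhost", "hostname(STRING)=localhost"));
  }
}
{code}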



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7461) DominantResourceCalculator#ratio calculation problem when right resource contains zero value

2018-11-10 Thread Tao Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682339#comment-16682339
 ] 

Tao Yang commented on YARN-7461:


Hi, [~sunilg], [~leftnoteasy], [~templedf], [~cheersyang].
Can we continue the discussion on this issue? It is important for us, since clients
hit errors parsing the scheduler REST API when the response body contains NaN.
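
For reference, a minimal, self-contained sketch of the guarded ratio being discussed;
this is not the actual DominantResourceCalculator code, only an illustration of skipping
zero divisors so the dominant share never becomes NaN:
{code:java}
// Illustration only, not the actual DominantResourceCalculator implementation.
public final class RatioSketch {

  // max over all resource types of left[i] / right[i], skipping dimensions whose
  // divisor is zero so 0/0 can never produce NaN.
  static float ratio(long[] left, long[] right) {
    float max = 0f;
    for (int i = 0; i < left.length; i++) {
      if (right[i] == 0) {
        continue; // zero divisor: ignore this dimension instead of dividing
      }
      max = Math.max(max, (float) left[i] / right[i]);
    }
    return max;
  }

  public static void main(String[] args) {
    long[] leftResource = {5, 5, 0};
    long[] rightResource = {10, 10, 0};
    System.out.println(ratio(leftResource, rightResource)); // prints 0.5, not NaN
  }
}
{code}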

> DominantResourceCalculator#ratio calculation problem when right resource 
> contains zero value
> 
>
> Key: YARN-7461
> URL: https://issues.apache.org/jira/browse/YARN-7461
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Minor
> Attachments: YARN-7461.001.patch, YARN-7461.002.patch, 
> YARN-7461.003.patch, YARN-7461.004.patch
>
>
> Currently DominantResourceCalculator#ratio may return a wrong result when the
> right resource contains a zero value. For example, with three resource types,
> leftResource=<5, 5, 0> and rightResource=<10, 10, 0>, we expect
> DominantResourceCalculator#ratio(leftResource, rightResource) to be 0.5, but
> currently it is NaN.
> There should be a verification before the division to ensure that the
> divisor is not zero.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (YARN-8987) Usability improvements node-attributes CLI

2018-11-10 Thread Bibin A Chundatt (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-8987:
---
Comment: was deleted

(was: [~cheersyang] Seems some issue with jenkinss)

> Usability improvements node-attributes CLI
> --
>
> Key: YARN-8987
> URL: https://issues.apache.org/jira/browse/YARN-8987
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Priority: Critical
> Attachments: YARN-8987.001.patch, YARN-8987.002.patch, 
> YARN-8987.003.patch
>
>
> I set up a single-node cluster, then tried to add node-attributes with the CLI.
> First I tried:
> {code:java}
> ./bin/yarn nodeattributes -add localhost:hostname(STRING)=localhost
> {code}
> This command returns exit code 0, however the node-attribute was not added.
> Then I tried to replace "localhost" with the host ID, and it worked.
> We need to ensure the command fails with a proper error message when adding
> does not succeed.
> Similarly, when I remove a node-attribute that doesn't exist, I still get
> return code 0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8987) Usability improvements node-attributes CLI

2018-11-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682332#comment-16682332
 ] 

Hadoop QA commented on YARN-8987:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
46s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 25s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 58 unchanged - 0 fixed = 59 total (was 58) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}113m 56s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 25m 
24s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}219m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-8987 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947679/YARN-8987.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a71b153687aa 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2664248 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Commented] (YARN-9006) TestNodeLabelContainerAllocation fails in branch-3.0

2018-11-10 Thread Tao Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682327#comment-16682327
 ] 

Tao Yang commented on YARN-9006:


These failures are caused by the inconsistent default value of allocationRequestId
between yarn_protos.proto and MockAM#allocate: the allocationRequestId field of
ResourceRequestProto in yarn_protos.proto is defined as optional int64
allocation_request_id = 8 [default = 0], so its default value is 0, but the default
value is set to 0 in MockAM#allocate.
In trunk, the default value of allocationRequestId in yarn_protos.proto is -1 instead
of 0; it seems it was updated to 0 in YARN-4888, and I'm not sure why this issue does
not apply to trunk.
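
As a toy illustration (not YARN code) of why a default-value mismatch between the side
that records a pending request and the side that looks it up can surface as
"expected:<1024> but was:<0>": the lookup simply misses the entry recorded under the
other default key. The values below are examples only:
{code:java}
// Toy example only: two components assuming different defaults for the same key.
import java.util.HashMap;
import java.util.Map;

public final class DefaultMismatchSketch {
  public static void main(String[] args) {
    Map<Long, Integer> pendingByAllocationRequestId = new HashMap<>();

    long writerDefault = -1L; // what one side assumes as the default id
    long readerDefault = 0L;  // what the other side assumes

    pendingByAllocationRequestId.put(writerDefault, 1024);

    // The reader looks up under its own default and finds nothing.
    int pending = pendingByAllocationRequestId.getOrDefault(readerDefault, 0);
    System.out.println("expected:<1024> but was:<" + pending + ">");
  }
}
{code}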

> TestNodeLabelContainerAllocation fails in branch-3.0
> 
>
> Key: YARN-9006
> URL: https://issues.apache.org/jira/browse/YARN-9006
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Priority: Major
>
> {noformat}
> [ERROR] Failures: 
> [ERROR]   
> TestNodeLabelContainerAllocation.testPreferenceOfNeedyPrioritiesUnderSameAppTowardsNodePartitions:848->checkPendingResource:619
>  expected:<1024> but was:<0>
> [ERROR]   
> TestNodeLabelContainerAllocation.testPreferenceOfQueuesTowardsNodePartitions:1047->checkPendingResource:619
>  expected:<5120> but was:<0>
> [ERROR]   TestNodeLabelContainerAllocation.testQueueMetricsWithLabels:2024 
> expected:<0> but was:<1024>
> [ERROR]   
> TestNodeLabelContainerAllocation.testQueueMetricsWithLabelsOnDefaultLabelNode:2127
>  expected:<1024> but was:<2048>
> [INFO] 
> [ERROR] Tests run: 21, Failures: 4, Errors: 0, Skipped: 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8987) Usability improvements node-attributes CLI

2018-11-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682321#comment-16682321
 ] 

Hadoop QA commented on YARN-8987:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 17m 
27s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 22s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 58 unchanged - 0 fixed = 60 total (was 58) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 45s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 32m 30s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}208m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestQueueManagementDynamicEditPolicy
 |
|   | hadoop.yarn.client.TestApplicationClientProtocolOnHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-8987 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947678/YARN-8987.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7f59cdeae299 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh 

[jira] [Commented] (YARN-9005) FairScheduler maybe preempt the AM container

2018-11-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682309#comment-16682309
 ] 

Hadoop QA commented on YARN-9005:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}100m 16s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}155m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9005 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947686/YARN-9005.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f0022a522d66 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2664248 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/22494/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22494/testReport/ |
| Max. process+thread count | 970 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Commented] (YARN-9005) FairScheduler maybe preempt the AM container

2018-11-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682305#comment-16682305
 ] 

Hadoop QA commented on YARN-9005:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 39s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}163m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9005 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947682/YARN-9005.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3edd8d45d466 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2664248 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/22493/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22493/testReport/ |
| Max. process+thread count | 924 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Updated] (YARN-9001) [Submarine] Use AppAdminClient instead of ServiceClient to submit jobs

2018-11-10 Thread Zac Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zac Zhou updated YARN-9001:
---
Attachment: YARN-9001.002.patch

> [Submarine] Use AppAdminClient instead of ServiceClient to submit jobs
> --
>
> Key: YARN-9001
> URL: https://issues.apache.org/jira/browse/YARN-9001
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zac Zhou
>Assignee: Zac Zhou
>Priority: Major
> Attachments: YARN-9001.001.patch, YARN-9001.002.patch
>
>
> For now, Submarine submits a service to YARN by using ServiceClient. We should
> change it to AppAdminClient.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8233) NPE in CapacityScheduler#tryCommit when handling allocate/reserve proposal whose allocatedOrReservedContainer is null

2018-11-10 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682283#comment-16682283
 ] 

Akira Ajisaka commented on YARN-8233:
-

Resubmitting the branch-2 patch to run precommit job for branch-2.

> NPE in CapacityScheduler#tryCommit when handling allocate/reserve proposal 
> whose allocatedOrReservedContainer is null
> -
>
> Key: YARN-8233
> URL: https://issues.apache.org/jira/browse/YARN-8233
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.2.0
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: YARN-8233.001-branch-3.1-test.patch, 
> YARN-8233.001-test-branch-3.1.patch, YARN-8233.001.branch-2.patch, 
> YARN-8233.001.branch-2.patch, YARN-8233.001.branch-3.0.patch, 
> YARN-8233.001.branch-3.0.patch, YARN-8233.001.branch-3.1.patch, 
> YARN-8233.001.branch-3.1.patch, YARN-8233.001.patch, YARN-8233.002.patch, 
> YARN-8233.003.patch
>
>
> Recently we saw an NPE problem in CapacityScheduler#tryCommit when trying to find
> the attemptId by calling {{c.getAllocatedOrReservedContainer().get...}} from
> an allocate/reserve proposal: allocatedOrReservedContainer was null and an NPE
> was thrown.
> Reference code:
> {code:java}
> // find the application to accept and apply the ResourceCommitRequest
> if (request.anythingAllocatedOrReserved()) {
>   ContainerAllocationProposal c =
>   request.getFirstAllocatedOrReservedContainer();
>   attemptId =
>   c.getAllocatedOrReservedContainer().getSchedulerApplicationAttempt()
>   .getApplicationAttemptId();   //NPE happens here
> } else { ...
> {code}
> The proposal was constructed in
> {{CapacityScheduler#createResourceCommitRequest}}, and
> allocatedOrReservedContainer can be null in the async-scheduling process
> when the node was lost or the application has finished (details in
> {{CapacityScheduler#getSchedulerContainer}}).
> Reference code:
> {code:java}
>   // Allocated something
>   List allocations =
>   csAssignment.getAssignmentInformation().getAllocationDetails();
>   if (!allocations.isEmpty()) {
> RMContainer rmContainer = allocations.get(0).rmContainer;
> allocated = new ContainerAllocationProposal<>(
> getSchedulerContainer(rmContainer, true),   //possibly null
> getSchedulerContainersToRelease(csAssignment),
> 
> getSchedulerContainer(csAssignment.getFulfilledReservedContainer(),
> false), csAssignment.getType(),
> csAssignment.getRequestLocalityType(),
> csAssignment.getSchedulingMode() != null ?
> csAssignment.getSchedulingMode() :
> SchedulingMode.RESPECT_PARTITION_EXCLUSIVITY,
> csAssignment.getResource());
>   }
> {code}
> I think we should add a null check for allocatedOrReservedContainer before creating
> allocate/reserve proposals. Besides, the allocation process has already increased the
> unconfirmed resource of the app when creating an allocate assignment, so if this
> check finds null, we should decrease the unconfirmed resource of the live app.
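
A minimal sketch of the null check proposed above, using stand-in types rather than the
real scheduler classes; it only illustrates "skip the proposal and roll back the
unconfirmed resource when the container is gone", not the actual CapacityScheduler fix:
{code:java}
// Stand-in types only, not the actual CapacityScheduler code.
public final class ProposalNullCheckSketch {

  static final class SchedulerContainer { }

  // Returns null when the node was lost or the application has finished.
  static SchedulerContainer getSchedulerContainer(boolean stillValid) {
    return stillValid ? new SchedulerContainer() : null;
  }

  public static void main(String[] args) {
    long unconfirmedMb = 1024; // bookkeeping added when the assignment was created

    SchedulerContainer allocated = getSchedulerContainer(false);
    if (allocated == null) {
      // Do not build the allocate/reserve proposal; undo the bookkeeping instead.
      unconfirmedMb -= 1024;
      System.out.println("Proposal skipped, unconfirmed resource back to "
          + unconfirmedMb + " MB");
      return;
    }
    System.out.println("Proposal created");
  }
}
{code}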



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8233) NPE in CapacityScheduler#tryCommit when handling allocate/reserve proposal whose allocatedOrReservedContainer is null

2018-11-10 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-8233:

Attachment: YARN-8233.001.branch-2.patch

> NPE in CapacityScheduler#tryCommit when handling allocate/reserve proposal 
> whose allocatedOrReservedContainer is null
> -
>
> Key: YARN-8233
> URL: https://issues.apache.org/jira/browse/YARN-8233
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.2.0
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: YARN-8233.001-branch-3.1-test.patch, 
> YARN-8233.001-test-branch-3.1.patch, YARN-8233.001.branch-2.patch, 
> YARN-8233.001.branch-2.patch, YARN-8233.001.branch-3.0.patch, 
> YARN-8233.001.branch-3.0.patch, YARN-8233.001.branch-3.1.patch, 
> YARN-8233.001.branch-3.1.patch, YARN-8233.001.patch, YARN-8233.002.patch, 
> YARN-8233.003.patch
>
>
> Recently we saw an NPE problem in CapacityScheduler#tryCommit when trying to find
> the attemptId by calling {{c.getAllocatedOrReservedContainer().get...}} from
> an allocate/reserve proposal: allocatedOrReservedContainer was null and an NPE
> was thrown.
> Reference code:
> {code:java}
> // find the application to accept and apply the ResourceCommitRequest
> if (request.anythingAllocatedOrReserved()) {
>   ContainerAllocationProposal c =
>   request.getFirstAllocatedOrReservedContainer();
>   attemptId =
>   c.getAllocatedOrReservedContainer().getSchedulerApplicationAttempt()
>   .getApplicationAttemptId();   //NPE happens here
> } else { ...
> {code}
> The proposal was constructed in
> {{CapacityScheduler#createResourceCommitRequest}}, and
> allocatedOrReservedContainer can be null in the async-scheduling process
> when the node was lost or the application has finished (details in
> {{CapacityScheduler#getSchedulerContainer}}).
> Reference code:
> {code:java}
>   // Allocated something
>   List allocations =
>   csAssignment.getAssignmentInformation().getAllocationDetails();
>   if (!allocations.isEmpty()) {
> RMContainer rmContainer = allocations.get(0).rmContainer;
> allocated = new ContainerAllocationProposal<>(
> getSchedulerContainer(rmContainer, true),   //possibly null
> getSchedulerContainersToRelease(csAssignment),
> 
> getSchedulerContainer(csAssignment.getFulfilledReservedContainer(),
> false), csAssignment.getType(),
> csAssignment.getRequestLocalityType(),
> csAssignment.getSchedulingMode() != null ?
> csAssignment.getSchedulingMode() :
> SchedulingMode.RESPECT_PARTITION_EXCLUSIVITY,
> csAssignment.getResource());
>   }
> {code}
> I think we should add a null check for allocatedOrReservedContainer before creating
> allocate/reserve proposals. Besides, the allocation process has already increased the
> unconfirmed resource of the app when creating an allocate assignment, so if this
> check finds null, we should decrease the unconfirmed resource of the live app.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8233) NPE in CapacityScheduler#tryCommit when handling allocate/reserve proposal whose allocatedOrReservedContainer is null

2018-11-10 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-8233:

Fix Version/s: 3.0.4

Committed this to branch-3.0.

> NPE in CapacityScheduler#tryCommit when handling allocate/reserve proposal 
> whose allocatedOrReservedContainer is null
> -
>
> Key: YARN-8233
> URL: https://issues.apache.org/jira/browse/YARN-8233
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.2.0
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: YARN-8233.001-branch-3.1-test.patch, 
> YARN-8233.001-test-branch-3.1.patch, YARN-8233.001.branch-2.patch, 
> YARN-8233.001.branch-3.0.patch, YARN-8233.001.branch-3.0.patch, 
> YARN-8233.001.branch-3.1.patch, YARN-8233.001.branch-3.1.patch, 
> YARN-8233.001.patch, YARN-8233.002.patch, YARN-8233.003.patch
>
>
> Recently we saw an NPE problem in CapacityScheduler#tryCommit when trying to find
> the attemptId by calling {{c.getAllocatedOrReservedContainer().get...}} from
> an allocate/reserve proposal: allocatedOrReservedContainer was null and an NPE
> was thrown.
> Reference code:
> {code:java}
> // find the application to accept and apply the ResourceCommitRequest
> if (request.anythingAllocatedOrReserved()) {
>   ContainerAllocationProposal c =
>   request.getFirstAllocatedOrReservedContainer();
>   attemptId =
>   c.getAllocatedOrReservedContainer().getSchedulerApplicationAttempt()
>   .getApplicationAttemptId();   //NPE happens here
> } else { ...
> {code}
> The proposal was constructed in
> {{CapacityScheduler#createResourceCommitRequest}}, and
> allocatedOrReservedContainer can be null in the async-scheduling process
> when the node was lost or the application has finished (details in
> {{CapacityScheduler#getSchedulerContainer}}).
> Reference code:
> {code:java}
>   // Allocated something
>   List allocations =
>   csAssignment.getAssignmentInformation().getAllocationDetails();
>   if (!allocations.isEmpty()) {
> RMContainer rmContainer = allocations.get(0).rmContainer;
> allocated = new ContainerAllocationProposal<>(
> getSchedulerContainer(rmContainer, true),   //possibly null
> getSchedulerContainersToRelease(csAssignment),
> 
> getSchedulerContainer(csAssignment.getFulfilledReservedContainer(),
> false), csAssignment.getType(),
> csAssignment.getRequestLocalityType(),
> csAssignment.getSchedulingMode() != null ?
> csAssignment.getSchedulingMode() :
> SchedulingMode.RESPECT_PARTITION_EXCLUSIVITY,
> csAssignment.getResource());
>   }
> {code}
> I think we should add a null check for allocatedOrReservedContainer before creating
> allocate/reserve proposals. Besides, the allocation process has already increased the
> unconfirmed resource of the app when creating an allocate assignment, so if this
> check finds null, we should decrease the unconfirmed resource of the live app.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8233) NPE in CapacityScheduler#tryCommit when handling allocate/reserve proposal whose allocatedOrReservedContainer is null

2018-11-10 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682280#comment-16682280
 ] 

Akira Ajisaka commented on YARN-8233:
-

The test failure in branch-3.0 is not related to the patch. Filed YARN-9006 to 
track this.

> NPE in CapacityScheduler#tryCommit when handling allocate/reserve proposal 
> whose allocatedOrReservedContainer is null
> -
>
> Key: YARN-8233
> URL: https://issues.apache.org/jira/browse/YARN-8233
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.2.0
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Fix For: 3.1.2, 3.3.0, 3.2.1
>
> Attachments: YARN-8233.001-branch-3.1-test.patch, 
> YARN-8233.001-test-branch-3.1.patch, YARN-8233.001.branch-2.patch, 
> YARN-8233.001.branch-3.0.patch, YARN-8233.001.branch-3.0.patch, 
> YARN-8233.001.branch-3.1.patch, YARN-8233.001.branch-3.1.patch, 
> YARN-8233.001.patch, YARN-8233.002.patch, YARN-8233.003.patch
>
>
> Recently we saw an NPE problem in CapacityScheduler#tryCommit when trying to find
> the attemptId by calling {{c.getAllocatedOrReservedContainer().get...}} from
> an allocate/reserve proposal: allocatedOrReservedContainer was null and an NPE
> was thrown.
> Reference code:
> {code:java}
> // find the application to accept and apply the ResourceCommitRequest
> if (request.anythingAllocatedOrReserved()) {
>   ContainerAllocationProposal c =
>   request.getFirstAllocatedOrReservedContainer();
>   attemptId =
>   c.getAllocatedOrReservedContainer().getSchedulerApplicationAttempt()
>   .getApplicationAttemptId();   //NPE happens here
> } else { ...
> {code}
> The proposal was constructed in
> {{CapacityScheduler#createResourceCommitRequest}}, and
> allocatedOrReservedContainer can be null in the async-scheduling process
> when the node was lost or the application has finished (details in
> {{CapacityScheduler#getSchedulerContainer}}).
> Reference code:
> {code:java}
>   // Allocated something
>   List allocations =
>   csAssignment.getAssignmentInformation().getAllocationDetails();
>   if (!allocations.isEmpty()) {
> RMContainer rmContainer = allocations.get(0).rmContainer;
> allocated = new ContainerAllocationProposal<>(
> getSchedulerContainer(rmContainer, true),   //possibly null
> getSchedulerContainersToRelease(csAssignment),
> 
> getSchedulerContainer(csAssignment.getFulfilledReservedContainer(),
> false), csAssignment.getType(),
> csAssignment.getRequestLocalityType(),
> csAssignment.getSchedulingMode() != null ?
> csAssignment.getSchedulingMode() :
> SchedulingMode.RESPECT_PARTITION_EXCLUSIVITY,
> csAssignment.getResource());
>   }
> {code}
> I think we should add a null check for allocatedOrReservedContainer before creating
> allocate/reserve proposals. Besides, the allocation process has already increased the
> unconfirmed resource of the app when creating an allocate assignment, so if this
> check finds null, we should decrease the unconfirmed resource of the live app.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8245) Validate the existence of parent queue for add-queue operation in scheduler-conf REST API

2018-11-10 Thread Tao Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682281#comment-16682281
 ] 

Tao Yang commented on YARN-8245:


Hi, [~jhung], could you help to review this patch? Thanks.

> Validate the existence of parent queue for add-queue operation in 
> scheduler-conf REST API
> -
>
> Key: YARN-8245
> URL: https://issues.apache.org/jira/browse/YARN-8245
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
> Attachments: YARN-8245.001.patch
>
>
> Currently there is no validation of the parent queue's existence for the add-queue
> operation in the scheduler-conf REST API. This may successfully create lots of
> invalid queues without any actual validation and can cause problems later.
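
A small sketch of the validation being proposed, using a plain set of queue paths
instead of the real CapacityScheduler configuration; only the idea of rejecting a
missing parent is illustrated here:
{code:java}
// Illustration only: reject an add-queue request whose parent path does not exist.
import java.util.HashSet;
import java.util.Set;

public final class AddQueueValidationSketch {

  static void addQueue(Set<String> existingQueues, String newQueuePath) {
    int lastDot = newQueuePath.lastIndexOf('.');
    if (lastDot < 0) {
      throw new IllegalArgumentException("Queue path has no parent: " + newQueuePath);
    }
    String parent = newQueuePath.substring(0, lastDot);
    if (!existingQueues.contains(parent)) {
      throw new IllegalArgumentException("Parent queue does not exist: " + parent);
    }
    existingQueues.add(newQueuePath);
  }

  public static void main(String[] args) {
    Set<String> queues = new HashSet<>();
    queues.add("root");
    queues.add("root.default");
    addQueue(queues, "root.default.a"); // accepted
    addQueue(queues, "root.missing.b"); // throws: parent queue does not exist
  }
}
{code}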



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9006) TestNodeLabelContainerAllocation fails in branch-3.0

2018-11-10 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-9006:

Summary: TestNodeLabelContainerAllocation fails in branch-3.0  (was: 
TestNodeLabelContainerAllocation fails on branch-3.0)

> TestNodeLabelContainerAllocation fails in branch-3.0
> 
>
> Key: YARN-9006
> URL: https://issues.apache.org/jira/browse/YARN-9006
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Priority: Major
>
> {noformat}
> [ERROR] Failures: 
> [ERROR]   
> TestNodeLabelContainerAllocation.testPreferenceOfNeedyPrioritiesUnderSameAppTowardsNodePartitions:848->checkPendingResource:619
>  expected:<1024> but was:<0>
> [ERROR]   
> TestNodeLabelContainerAllocation.testPreferenceOfQueuesTowardsNodePartitions:1047->checkPendingResource:619
>  expected:<5120> but was:<0>
> [ERROR]   TestNodeLabelContainerAllocation.testQueueMetricsWithLabels:2024 
> expected:<0> but was:<1024>
> [ERROR]   
> TestNodeLabelContainerAllocation.testQueueMetricsWithLabelsOnDefaultLabelNode:2127
>  expected:<1024> but was:<2048>
> [INFO] 
> [ERROR] Tests run: 21, Failures: 4, Errors: 0, Skipped: 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9006) TestNodeLabelContainerAllocation fails on branch-3.0

2018-11-10 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created YARN-9006:
---

 Summary: TestNodeLabelContainerAllocation fails on branch-3.0
 Key: YARN-9006
 URL: https://issues.apache.org/jira/browse/YARN-9006
 Project: Hadoop YARN
  Issue Type: Bug
  Components: test
Reporter: Akira Ajisaka


{noformat}
[ERROR] Failures: 
[ERROR]   
TestNodeLabelContainerAllocation.testPreferenceOfNeedyPrioritiesUnderSameAppTowardsNodePartitions:848->checkPendingResource:619
 expected:<1024> but was:<0>
[ERROR]   
TestNodeLabelContainerAllocation.testPreferenceOfQueuesTowardsNodePartitions:1047->checkPendingResource:619
 expected:<5120> but was:<0>
[ERROR]   TestNodeLabelContainerAllocation.testQueueMetricsWithLabels:2024 
expected:<0> but was:<1024>
[ERROR]   
TestNodeLabelContainerAllocation.testQueueMetricsWithLabelsOnDefaultLabelNode:2127
 expected:<1024> but was:<2048>
[INFO] 
[ERROR] Tests run: 21, Failures: 4, Errors: 0, Skipped: 0
{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8505) AMLimit and userAMLimit check should be skipped for unmanaged AM

2018-11-10 Thread Tao Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682277#comment-16682277
 ] 

Tao Yang commented on YARN-8505:


There is no jenkins report for v2 patch.  Attached again to trigger it.

> AMLimit and userAMLimit check should be skipped for unmanaged AM
> 
>
> Key: YARN-8505
> URL: https://issues.apache.org/jira/browse/YARN-8505
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.2.0, 2.9.2
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
> Attachments: YARN-8505.001.patch, YARN-8505.002.patch, 
> YARN-8505.003.patch
>
>
> The AMLimit and userAMLimit checks in LeafQueue#activateApplications should be
> skipped for an unmanaged AM, whose resources are not taken from the YARN cluster.
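
A toy sketch of the skip being proposed (stand-in types, not the actual LeafQueue
code): an unmanaged AM bypasses the AM-resource-limit check because YARN launches no
AM container for it:
{code:java}
// Stand-in types only; illustrates skipping the AM limit for unmanaged AMs.
public final class AmLimitSketch {

  static final class App {
    final boolean unmanagedAM;
    final long amResourceMb;
    App(boolean unmanagedAM, long amResourceMb) {
      this.unmanagedAM = unmanagedAM;
      this.amResourceMb = amResourceMb;
    }
  }

  static boolean canActivate(App app, long usedAmMb, long amLimitMb) {
    if (app.unmanagedAM) {
      return true; // no AM container is taken from the cluster, so no AM limit applies
    }
    return usedAmMb + app.amResourceMb <= amLimitMb;
  }

  public static void main(String[] args) {
    System.out.println(canActivate(new App(true, 0), 4096, 4096));     // true
    System.out.println(canActivate(new App(false, 1024), 4096, 4096)); // false
  }
}
{code}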



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8505) AMLimit and userAMLimit check should be skipped for unmanaged AM

2018-11-10 Thread Tao Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-8505:
---
Attachment: YARN-8505.003.patch

> AMLimit and userAMLimit check should be skipped for unmanaged AM
> 
>
> Key: YARN-8505
> URL: https://issues.apache.org/jira/browse/YARN-8505
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.2.0, 2.9.2
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
> Attachments: YARN-8505.001.patch, YARN-8505.002.patch, 
> YARN-8505.003.patch
>
>
> The AMLimit and userAMLimit checks in LeafQueue#activateApplications should be
> skipped for an unmanaged AM, whose resources are not taken from the YARN cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-9005) FairScheduler maybe preempt the AM container

2018-11-10 Thread Wanqiang Ji (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682251#comment-16682251
 ] 

Wanqiang Ji edited comment on YARN-9005 at 11/10/18 8:29 AM:
-

Thanks [~yufeigu] for providing the original design and discussion.

I updated the 002 patch to improve the performance of identifyContainersToPreempt.
Please help to review, thx~


was (Author: jiwq):
Thanks [~yufeigu] provides the original design and discussion.

I update 002 patch to elevate the performance for identifyContainersToPreempt. 
Please help to review, thx~

> FairScheduler maybe preempt the AM container
> 
>
> Key: YARN-9005
> URL: https://issues.apache.org/jira/browse/YARN-9005
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, scheduler preemption
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: YARN-9005.001.patch, YARN-9005.002.patch
>
>
> In the worst case, FS preempts the AM container, because the return value of
> FSPreemptionThread#identifyContainersToPreempt contains the AM container.
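
A toy sketch of the guard implied here; the container type below is a stand-in, not the
actual FSPreemptionThread code, and only shows filtering AM containers out of the
preemption candidates:
{code:java}
// Stand-in type only; never offer the AM container as a preemption candidate.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public final class PreemptionFilterSketch {

  static final class Container {
    final String id;
    final boolean amContainer;
    Container(String id, boolean amContainer) {
      this.id = id;
      this.amContainer = amContainer;
    }
    @Override public String toString() { return id; }
  }

  static List<Container> identifyContainersToPreempt(List<Container> running) {
    List<Container> candidates = new ArrayList<>();
    for (Container c : running) {
      if (c.amContainer) {
        continue; // skip the application master container
      }
      candidates.add(c);
    }
    return candidates;
  }

  public static void main(String[] args) {
    List<Container> running = Arrays.asList(
        new Container("c1", true), new Container("c2", false));
    System.out.println(identifyContainersToPreempt(running)); // [c2]
  }
}
{code}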



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8925) Updating distributed node attributes only when necessary

2018-11-10 Thread Tao Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16682272#comment-16682272
 ] 

Tao Yang commented on YARN-8925:


Attached v5 patch to fix some checkstyle warnings and trigger the QA check.
Hi, [~cheersyang]. There are still several warnings about "More than 7
parameters (found 9). [ParameterNumber]" and "Variable 'xxx' must be private
and have accessor methods. [VisibilityModifier]" for this patch. I think these
are acceptable and there is no need to handle them. Thoughts?

> Updating distributed node attributes only when necessary
> 
>
> Key: YARN-8925
> URL: https://issues.apache.org/jira/browse/YARN-8925
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 3.2.1
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
>  Labels: performance
> Attachments: YARN-8925.001.patch, YARN-8925.002.patch, 
> YARN-8925.003.patch, YARN-8925.004.patch, YARN-8925.005.patch
>
>
> Currently, if distributed node attributes exist, an update of the distributed
> node attributes happens on every heartbeat between NM and RM even when nothing
> has changed. The updating process holds NodeAttributesManagerImpl#writeLock and
> may have some impact in a large cluster. We have found that the nodes UI of a
> large cluster opens slowly, and most of the time it is waiting for the lock in
> NodeAttributesManagerImpl. I think this update should be performed only when
> necessary to improve the performance of the related processes.
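
A tiny sketch of the idea (hypothetical names, not the actual NodeAttributesManagerImpl
API): compare the newly reported attributes with what was last applied and skip the
write-locked update when nothing changed:
{code:java}
// Hypothetical sketch: take the expensive update path only when the reported
// distributed attributes actually differ from the previously applied set.
import java.util.HashSet;
import java.util.Set;

public final class AttributeUpdateSketch {
  private Set<String> lastApplied = new HashSet<>();

  void onHeartbeat(Set<String> reported) {
    if (reported.equals(lastApplied)) {
      return; // unchanged: skip the write lock entirely
    }
    synchronized (this) { // stand-in for the manager's write lock
      lastApplied = new HashSet<>(reported);
      // apply the update under the lock
    }
  }

  public static void main(String[] args) {
    AttributeUpdateSketch manager = new AttributeUpdateSketch();
    Set<String> attrs = new HashSet<>();
    attrs.add("os=linux");
    manager.onHeartbeat(attrs); // applied
    manager.onHeartbeat(attrs); // skipped, no lock taken
    System.out.println("applied attributes: " + manager.lastApplied);
  }
}
{code}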



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8925) Updating distributed node attributes only when necessary

2018-11-10 Thread Tao Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-8925:
---
Attachment: YARN-8925.005.patch

> Updating distributed node attributes only when necessary
> 
>
> Key: YARN-8925
> URL: https://issues.apache.org/jira/browse/YARN-8925
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 3.2.1
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
>  Labels: performance
> Attachments: YARN-8925.001.patch, YARN-8925.002.patch, 
> YARN-8925.003.patch, YARN-8925.004.patch, YARN-8925.005.patch
>
>
> Currently, if distributed node attributes exist, an update of the distributed
> node attributes happens on every heartbeat between NM and RM even when nothing
> has changed. The updating process holds NodeAttributesManagerImpl#writeLock and
> may have some impact in a large cluster. We have found that the nodes UI of a
> large cluster opens slowly, and most of the time it is waiting for the lock in
> NodeAttributesManagerImpl. I think this update should be performed only when
> necessary to improve the performance of the related processes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org