[jira] [Commented] (YARN-4166) Support changing container cpu resource

2017-04-18 Thread Yang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15974130#comment-15974130
 ] 

Yang Wang commented on YARN-4166:
-

We want to use container resize (YARN-1197) in production ASAP, and I already 
have a patch for this JIRA.
[~Naganarasimha] Would you mind taking a look?

> Support changing container cpu resource
> ---
>
> Key: YARN-4166
> URL: https://issues.apache.org/jira/browse/YARN-4166
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, nodemanager, resourcemanager
>Reporter: Jian He
>Assignee: Naganarasimha G R
>
> Memory resizing is now supported; we need to support the same for CPU.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6402) Move 'Long Running Services' to an independent tab at top level for new Yarn UI

2017-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15974121#comment-15974121
 ] 

Hadoop QA commented on YARN-6402:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  0m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6402 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12863958/YARN-6402.0005.patch |
| Optional Tests |  asflicense  |
| uname | Linux 97d7704105f1 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8c81a16 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15669/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Move 'Long Running Services' to an independent tab at top level for new Yarn 
> UI
> ---
>
> Key: YARN-6402
> URL: https://issues.apache.org/jira/browse/YARN-6402
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp
>Reporter: Sunil G
>Assignee: Akhil PB
> Attachments: YARN-6402.0001.patch, YARN-6402.0002.patch, 
> YARN-6402.0003.patch, YARN-6402.0004.patch, YARN-6402.0005.patch
>
>
> Currently 'Long Running Services' is a sub-link on the 'Applications' page. 
> Services could be made a top-level tab.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6402) Move 'Long Running Services' to an independent tab at top level for new Yarn UI

2017-04-18 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-6402:
---
Attachment: YARN-6402.0005.patch

v5 patch

> Move 'Long Running Services' to an independent tab at top level for new Yarn 
> UI
> ---
>
> Key: YARN-6402
> URL: https://issues.apache.org/jira/browse/YARN-6402
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp
>Reporter: Sunil G
>Assignee: Akhil PB
> Attachments: YARN-6402.0001.patch, YARN-6402.0002.patch, 
> YARN-6402.0003.patch, YARN-6402.0004.patch, YARN-6402.0005.patch
>
>
> Currently 'Long Running Services' is a sub-link on the 'Applications' page. 
> Services could be made a top-level tab.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5418) When partial log aggregation is enabled, display the list of aggregated files on the container log page

2017-04-18 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15974102#comment-15974102
 ] 

Xuan Gong commented on YARN-5418:
-

Fixed the test case failure and findbugs issues.

> When partial log aggregation is enabled, display the list of aggregated files 
> on the container log page
> ---
>
> Key: YARN-5418
> URL: https://issues.apache.org/jira/browse/YARN-5418
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
> Attachments: Screen Shot 2017-03-06 at 1.38.04 PM.png, 
> YARN-5418.1.patch, YARN-5418.2.patch, YARN-5418.3.patch
>
>
> The container log page lists all files. However, as soon as a file gets 
> aggregated, it is no longer available on this listing page.
> It would be useful to list aggregated files as well as the current set of 
> files.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5418) When partial log aggregation is enabled, display the list of aggregated files on the container log page

2017-04-18 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5418:

Attachment: YARN-5418.3.patch

> When partial log aggregation is enabled, display the list of aggregated files 
> on the container log page
> ---
>
> Key: YARN-5418
> URL: https://issues.apache.org/jira/browse/YARN-5418
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
> Attachments: Screen Shot 2017-03-06 at 1.38.04 PM.png, 
> YARN-5418.1.patch, YARN-5418.2.patch, YARN-5418.3.patch
>
>
> The container log page lists all files. However, as soon as a file gets 
> aggregated, it is no longer available on this listing page.
> It would be useful to list aggregated files as well as the current set of 
> files.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6492) Generate queue metrics for each partition

2017-04-18 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15974052#comment-15974052
 ] 

Naganarasimha G R commented on YARN-6492:
-

bq. but one thing I was thinking: since the QueueMetrics will be per 
partition, it would also be useful to have a QueueMetrics that aggregates 
across all partitions.
[~jhung] Actually I was planning to implement it as mentioned in this YARN-6195 
[comment | 
https://issues.apache.org/jira/secure/EditComment!default.jspa?id=13043189=15955716].
 Further, there is no point in aggregating across partitions, because a given 
queue->app->container request can be allocated to *any one* of the partitions. 
As of now CS supports partitions as labelled pools, so allocation does not 
happen across partitions; it happens only on the named partition. So IMO 
aggregating across partitions cannot be done.

> Generate queue metrics for each partition
> -
>
> Key: YARN-6492
> URL: https://issues.apache.org/jira/browse/YARN-6492
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Hung
>Assignee: Naganarasimha G R
>
> We are interested in having queue metrics for all partitions. Right now each 
> queue has one QueueMetrics object which captures metrics either in the default 
> partition or across all partitions. (After YARN-6467 it will be the default 
> partition only.)
> But having per-partition metrics would be very useful.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6335) Port slider's groovy unit tests to yarn native services

2017-04-18 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15974021#comment-15974021
 ] 

Jian He edited comment on YARN-6335 at 4/19/17 3:57 AM:


bq. This normalization is specifically for capping the resource request at the 
maximum value allowed by YARN. Slider used to automatically lower the resource 
request when it was too high
I see, the config name is confusing then. Normalization is always done in YARN 
whether or not the client does it. IMO, there is a difference between 
normalization and validation. Validation of invalid requests should always be 
done at job submission, to fail such requests upfront; this is what the API 
GetNewApplicationResponse#getMaximumResourceCapability is meant for. Silently 
letting the app continue with an invalid resource request just confuses the 
user later. An arbitrarily large resource request should never be allowed, just 
like a negative resource.

btw, the original implementation in some places like this is not generic enough 
to take other resource types (e.g. cpus) into account; the 
org.apache.hadoop.yarn.util.resource.Resources API should be used for such 
resource calculations. 


was (Author: jianhe):
bq. This normalization is specifically for capping the resource request at the 
maximum value allowed by YARN. Slider used to automatically lower the resource 
request when it was too high
I see, the config name is confusing then. Normalization is always done in YARN 
whether or not the client does it. IMO, the validation should be done at job 
submission to always fail such requests upfront; this is what the API 
GetNewApplicationResponse#getMaximumResourceCapability is meant for. Silently 
letting the app continue with an invalid resource request just confuses the 
user later. An arbitrarily large resource request should never be allowed, just 
like a negative resource.

btw, the original implementation in some places like this is not generic enough 
to take other resource types (e.g. cpus) into account; the 
org.apache.hadoop.yarn.util.resource.Resources API should be used for such 
resource calculations. 

> Port slider's groovy unit tests to yarn native services
> ---
>
> Key: YARN-6335
> URL: https://issues.apache.org/jira/browse/YARN-6335
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-6335-yarn-native-services.001.patch, 
> YARN-6335-yarn-native-services.002.patch, 
> YARN-6335-yarn-native-services.003.patch, 
> YARN-6335-yarn-native-services.004.patch, 
> YARN-6335-yarn-native-services.005.patch, 
> YARN-6335-yarn-native-services.006.patch, 
> YARN-6335-yarn-native-services.007.patch
>
>
> Slider has a lot of useful unit tests implemented in groovy. We could convert 
> these to Java for YARN native services. The scope of this ticket will 
> include unit / minicluster tests only and will not include Slider's funtests, 
> which require a running cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6335) Port slider's groovy unit tests to yarn native services

2017-04-18 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15974021#comment-15974021
 ] 

Jian He commented on YARN-6335:
---

bq. This normalization is specifically for capping the resource request at the 
maximum value allowed by YARN. Slider used to automatically lower the resource 
request when it was too high
I see, the config name is confusing then. Normalization is always done in YARN 
whether or not the client does it. IMO, the validation should be done at job 
submission to always fail such requests upfront; this is what the API 
GetNewApplicationResponse#getMaximumResourceCapability is meant for. Silently 
letting the app continue with an invalid resource request just confuses the 
user later. An arbitrarily large resource request should never be allowed, just 
like a negative resource.

btw, the original implementation in some places like this is not generic enough 
to take other resource types (e.g. cpus) into account; the 
org.apache.hadoop.yarn.util.resource.Resources API should be used for such 
resource calculations. 
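
A minimal sketch of the submission-time validation described above, using the 
APIs named in the comment (GetNewApplicationResponse and Resources.fitsIn); the 
surrounding setup and the requested sizes are illustrative, not from any patch:

{code}
import org.apache.hadoop.yarn.api.protocolrecords.GetNewApplicationResponse;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.client.api.YarnClientApplication;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.util.resource.Resources;

YarnClient yarnClient = YarnClient.createYarnClient();
yarnClient.init(new YarnConfiguration());
yarnClient.start();

YarnClientApplication app = yarnClient.createApplication();
GetNewApplicationResponse resp = app.getNewApplicationResponse();
Resource max = resp.getMaximumResourceCapability();

// Example ask: 8 GB / 4 vcores. An arbitrarily large ask fails here, upfront,
// instead of being silently normalized later.
Resource requested = Resource.newInstance(8192, 4);
if (!Resources.fitsIn(requested, max)) {
  throw new IllegalArgumentException(
      "Requested " + requested + " exceeds cluster maximum " + max);
}
{code}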

> Port slider's groovy unit tests to yarn native services
> ---
>
> Key: YARN-6335
> URL: https://issues.apache.org/jira/browse/YARN-6335
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-6335-yarn-native-services.001.patch, 
> YARN-6335-yarn-native-services.002.patch, 
> YARN-6335-yarn-native-services.003.patch, 
> YARN-6335-yarn-native-services.004.patch, 
> YARN-6335-yarn-native-services.005.patch, 
> YARN-6335-yarn-native-services.006.patch, 
> YARN-6335-yarn-native-services.007.patch
>
>
> Slider has a lot of useful unit tests implemented in groovy. We could convert 
> these to Java for YARN native services. The scope of this ticket will 
> include unit / minicluster tests only and will not include Slider's funtests, 
> which require a running cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6492) Generate queue metrics for each partition

2017-04-18 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15974011#comment-15974011
 ] 

Jonathan Hung commented on YARN-6492:
-

Thanks [~Naganarasimha]!

Not sure how you implemented this, but one thing I was thinking: since the 
QueueMetrics will be per partition, it would also be useful to have a 
QueueMetrics that aggregates across all partitions. If you weren't planning on 
addressing this in this JIRA, we can handle it in another.

> Generate queue metrics for each partition
> -
>
> Key: YARN-6492
> URL: https://issues.apache.org/jira/browse/YARN-6492
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Hung
>Assignee: Naganarasimha G R
>
> We are interested in having queue metrics for all partitions. Right now each 
> queue has one QueueMetrics object which captures metrics either in the default 
> partition or across all partitions. (After YARN-6467 it will be the default 
> partition only.)
> But having per-partition metrics would be very useful.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6482) TestSLSRunner runs but doesn't execute jobs (.json parsing issue)

2017-04-18 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu reassigned YARN-6482:


Assignee: Yuanbo Liu

> TestSLSRunner runs but doesn't execute jobs (.json parsing issue)
> --
>
> Key: YARN-6482
> URL: https://issues.apache.org/jira/browse/YARN-6482
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Carlo Curino
>Assignee: Yuanbo Liu
>Priority: Minor
>
> The TestSLSRunner runs correctly, bringing up an RM, but the parsing of the 
> rumen trace somehow fails silently, and no nodes or jobs are loaded. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6482) TestSLSRunner runs but doesn't execute jobs (.json parsing issue)

2017-04-18 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973987#comment-15973987
 ] 

Yuanbo Liu commented on YARN-6482:
--

Taking it over; this defect was introduced by YARN-4612. 

> TestSLSRunner runs but doesn't execute jobs (.json parsing issue)
> --
>
> Key: YARN-6482
> URL: https://issues.apache.org/jira/browse/YARN-6482
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Carlo Curino
>Priority: Minor
>
> The TestSLSRunner runs correctly, bringing up an RM, but the parsing of the 
> rumen trace somehow fails silently, and no nodes or jobs are loaded. 
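
A sketch of the fail-loudly check this report suggests is missing (names like 
numNMs/numAMs are hypothetical; this is not the actual fix):

{code}
// Abort instead of running an empty simulation when the trace parsed to
// zero nodes or jobs.
if (numNMs == 0 || numAMs == 0) {
  throw new org.apache.hadoop.yarn.exceptions.YarnException(
      "SLS trace yielded no nodes or jobs; check the input .json format");
}
{code}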



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6493) Print node partition in assignContainer logs

2017-04-18 Thread Jonathan Hung (JIRA)
Jonathan Hung created YARN-6493:
---

 Summary: Print node partition in assignContainer logs
 Key: YARN-6493
 URL: https://issues.apache.org/jira/browse/YARN-6493
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.8.0, 2.7.4, 2.6.6
Reporter: Jonathan Hung
Assignee: Jonathan Hung


It would be useful to include the node's partition when logging a container 
allocation, for tracking purposes.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6492) Generate queue metrics for each partition

2017-04-18 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973964#comment-15973964
 ] 

Naganarasimha G R commented on YARN-6492:
-

Thanks for raising this, [~jhung]. I was planning to raise this issue shortly, 
based on confirmation from [~jlowe]. I will try to upload a patch at the 
earliest.

> Generate queue metrics for each partition
> -
>
> Key: YARN-6492
> URL: https://issues.apache.org/jira/browse/YARN-6492
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Hung
>Assignee: Naganarasimha G R
>
> We are interested in having queue metrics for all partitions. Right now each 
> queue has one QueueMetrics object which captures metrics either in the default 
> partition or across all partitions. (After YARN-6467 it will be the default 
> partition only.)
> But having per-partition metrics would be very useful.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6492) Generate queue metrics for each partition

2017-04-18 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R reassigned YARN-6492:
---

Assignee: Naganarasimha G R

> Generate queue metrics for each partition
> -
>
> Key: YARN-6492
> URL: https://issues.apache.org/jira/browse/YARN-6492
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Hung
>Assignee: Naganarasimha G R
>
> We are interested in having queue metrics for all partitions. Right now each 
> queue has one QueueMetrics object which captures metrics either in the default 
> partition or across all partitions. (After YARN-6467 it will be the default 
> partition only.)
> But having per-partition metrics would be very useful.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6467) CSQueueMetrics needs to update the current metrics for default partition only

2017-04-18 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973934#comment-15973934
 ] 

Jonathan Hung commented on YARN-6467:
-

We are interested in having queue metrics for each partition. I created 
YARN-6492 for this task.

> CSQueueMetrics needs to update the current metrics for default partition only
> -
>
> Key: YARN-6467
> URL: https://issues.apache.org/jira/browse/YARN-6467
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha2
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: YARN-6467.001.patch
>
>
> As a followup to YARN-6195, we need to update the existing metrics for the 
> default partition only.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6492) Generate queue metrics for each partition

2017-04-18 Thread Jonathan Hung (JIRA)
Jonathan Hung created YARN-6492:
---

 Summary: Generate queue metrics for each partition
 Key: YARN-6492
 URL: https://issues.apache.org/jira/browse/YARN-6492
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Jonathan Hung


We are interested in having queue metrics for all partitions. Right now each 
queue has one QueueMetrics object which captures metrics either in the default 
partition or across all partitions. (After YARN-6467 it will be the default 
partition only.)

But having per-partition metrics would be very useful.
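
As a purely illustrative sketch of the per-partition idea (not the actual 
patch, and deliberately avoiding the real QueueMetrics API): one counter per 
(queue, partition) pair, keyed by a partition-qualified name, with the empty 
string as the default partition per YARN's node-label convention.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

class PartitionQueueMetricsSketch {
  private final Map<String, AtomicLong> allocatedMB = new ConcurrentHashMap<>();

  private static String key(String queue, String partition) {
    return queue + "/" + partition;   // e.g. "root.queueA/gpu"
  }

  void incrAllocatedMB(String queue, String partition, long mb) {
    allocatedMB.computeIfAbsent(key(queue, partition),
        k -> new AtomicLong()).addAndGet(mb);
  }

  long getAllocatedMB(String queue, String partition) {
    AtomicLong v = allocatedMB.get(key(queue, partition));
    return v == null ? 0L : v.get();
  }
}
{code}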



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6491) Move totalMB and totalVirtualCores computation from ClusterMetricsInfo to QueueMetrics

2017-04-18 Thread Jonathan Hung (JIRA)
Jonathan Hung created YARN-6491:
---

 Summary: Move totalMB and totalVirtualCores computation from 
ClusterMetricsInfo to QueueMetrics
 Key: YARN-6491
 URL: https://issues.apache.org/jira/browse/YARN-6491
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Jonathan Hung


Right now in ClusterMetricsInfo.java we do this:
{noformat}
if (rs instanceof CapacityScheduler) {
  this.totalMB = metrics.getTotalMB();
  this.totalVirtualCores = metrics.getTotalVirtualCores();
} else {
  this.totalMB = availableMB + allocatedMB;
  this.totalVirtualCores = availableVirtualCores + allocatedVirtualCores;
}
{noformat}

We'd like to have totalMB and totalVirtualCores as QueueMetrics fields. But 
since QueueMetrics is scheduler-agnostic, we can't really just move this. It 
seems the way totalMB and totalVirtualCores are computed across FS and CS 
should be standardized.

Right now CS does not include reservedMB in allocatedMB, while FS does (as far 
as I can tell). At least in <= 2.7, when a container is reserved, 
queueUsage.getUsed is incremented, and that is the value used to determine 
whether a queue can be assigned (AbstractCSQueue#canAssignToThisQueue). So I 
think it makes sense to increment allocatedMB when a container is reserved, and 
not to increment it again when the reserved container is later allocated. This 
reflects the fact that if allocated + reserved > queueLimit, allocation will 
fail, so the allocatedMB metric should also be > queueLimit.

Still not sure if the same is true in >= 2.8. Would appreciate any input on 
this (or any of the mentioned proposals).
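
To make the reservation argument concrete, a tiny worked example with made-up 
numbers (queue limit 100 GB):

{code}
long queueLimitMB = 100 * 1024;
long allocatedMB  = 90 * 1024;   // running containers
long reservedMB   = 20 * 1024;   // one reserved container
// allocated + reserved = 110 GB > queueLimit, so further allocation fails;
// counting reserved into allocatedMB lets the metric reflect that state.
boolean canAssign = (allocatedMB + reservedMB) <= queueLimitMB;  // false
{code}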



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4359) Update LowCost agents logic to take advantage of YARN-4358

2017-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973906#comment-15973906
 ] 

Hadoop QA commented on YARN-4359:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 13 new + 77 unchanged - 10 fixed = 90 total (was 87) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 44 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 878 unchanged - 2 fixed = 878 total (was 880) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 41s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-4359 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12863913/YARN-4359.14.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f85bf72bd798 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / af8e984 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15667/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/15667/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15667/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 

[jira] [Commented] (YARN-6467) CSQueueMetrics needs to update the current metrics for default partition only

2017-04-18 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973905#comment-15973905
 ] 

Naganarasimha G R commented on YARN-6467:
-

bq. We can defer adding the partition dimension to the queue metrics in a 
separate JIRA. I had already assumed that was the case based on this JIRA's 
title.
Thanks [~jlowe]. I too started with the same intention, so that we do not mix 
new features into the existing ones and can safely apply this to lower 
versions, but on second thought, since the current JIRA's modifications are 
small, I just wanted to confirm the same.


> CSQueueMetrics needs to update the current metrics for default partition only
> -
>
> Key: YARN-6467
> URL: https://issues.apache.org/jira/browse/YARN-6467
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha2
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: YARN-6467.001.patch
>
>
> As a followup to YARN-6195, we need to update the existing metrics for the 
> default partition only.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4359) Update LowCost agents logic to take advantage of YARN-4358

2017-04-18 Thread Ishai Menache (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishai Menache updated YARN-4359:

Attachment: YARN-4359.14.patch

> Update LowCost agents logic to take advantage of YARN-4358
> --
>
> Key: YARN-4359
> URL: https://issues.apache.org/jira/browse/YARN-4359
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Carlo Curino
>Assignee: Ishai Menache
>  Labels: oct16-hard
> Attachments: YARN-4359.0.patch, YARN-4359.10.patch, 
> YARN-4359.11.patch, YARN-4359.12.patch, YARN-4359.13.patch, 
> YARN-4359.14.patch, YARN-4359.3.patch, YARN-4359.4.patch, YARN-4359.5.patch, 
> YARN-4359.6.patch, YARN-4359.7.patch, YARN-4359.8.patch, YARN-4359.9.patch
>
>
> Given the improvements of YARN-4358, the LowCost agent should be improved to 
> leverage this, and operate on RLESparseResourceAllocation (ideally leveraging 
> the improvements of YARN-3454 to compute available resources).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4359) Update LowCost agents logic to take advantage of YARN-4358

2017-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973707#comment-15973707
 ] 

Hadoop QA commented on YARN-4359:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
58s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 39s{color} 
| {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 8 unchanged - 0 fixed = 9 total (was 8) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 13 new + 77 unchanged - 10 fixed = 90 total (was 87) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 46 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 878 unchanged - 2 fixed = 878 total (was 880) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 41m 
24s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-4359 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12863893/YARN-4359.13.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e235bc8844f7 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / af8e984 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/15666/artifact/patchprocess/diff-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15666/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/15666/artifact/patchprocess/whitespace-eol.txt
 |

[jira] [Commented] (YARN-6365) slsrun.sh creating random html directories

2017-04-18 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973696#comment-15973696
 ] 

Andrew Wang commented on YARN-6365:
---

LGTM +1, built hadoop-sls and saw that the HTML files are present in the target 
directory.

Yufei, I assume you did manual testing of the realtimetrack.json / 
showSimulationTrace.html steps described in the docs?

> slsrun.sh creating random html directories
> --
>
> Key: YARN-6365
> URL: https://issues.apache.org/jira/browse/YARN-6365
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler-load-simulator
>Affects Versions: 3.0.0-alpha3
>Reporter: Allen Wittenauer
>Assignee: Yufei Gu
>Priority: Blocker
> Attachments: YARN-6365.001.patch
>
>
> YARN-6275 causes slsrun.sh to randomly create or overwrite html directories 
> wherever it is run.  
> {code}
> # copy 'html' directory to current directory to make sure web server can access
> cp -r "${bin}/../html" "$(pwd)"
> {code}
> Instead, the Java code should be changed to take a system property that 
> slsrun can populate at run time.
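
A minimal sketch of the system-property approach described above, assuming a 
hypothetical property name sls.html.dir that slsrun.sh would set via -D at 
launch:

{code}
// Resolve the html directory from a system property instead of relying on
// a copy of 'html' in the current working directory.
String htmlDir = System.getProperty("sls.html.dir",
    new java.io.File("html").getAbsolutePath());
{code}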



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6466) Provide shaded framework jar for containers

2017-04-18 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973649#comment-15973649
 ] 

Sangjin Lee commented on YARN-6466:
---

I think the separate discussion that's happening on HADOOP-11656 is informative.

I would very much like to keep the isolating classloader feature, at least for 
the container runtime. I have the main part of the work in a reviewable state 
on HADOOP-13398. Also, please note that the isolating classloader is an 
existing feature that works and that many folks use; we're basically adding the 
stricter behavior for 3.0. I think it would be a loss if we abandoned it. Let's 
discuss this.

bq. We probably don't need much of a footprint within the container in the 
first place.

I'm not quite sure I understand this. Could you kindly elaborate?

> Provide shaded framework jar for containers
> ---
>
> Key: YARN-6466
> URL: https://issues.apache.org/jira/browse/YARN-6466
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: build, yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Sean Busbey
>
> We should build on the existing shading work to provide a jar with all of the 
> bits needed within a YARN application's container to talk to the resource 
> manager and node manager.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-04-18 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973638#comment-15973638
 ] 

Jason Lowe commented on YARN-2113:
--

Is a deadzone the proper way to fix this?  I'm thinking of a case where the 
user has a particularly large container, larger than the dead zone.  It will 
still flap in this case, correct?  Seems like we should preempt not until we 
fall below the user limit but instead until the _next_ container we would 
preempt would put the user at or below their limit.  The scheduler essentially 
entitles a user to one container beyond the user limit.  If we preempt down to 
a point at or below the user's limit then we've gone one container too far, and 
the scheduler could very well turn around and give the container right back.

Preempting down to one container before we meet or dip below the user limit has 
the advantage that there's not yet another config to set up correctly.  However 
it brings up an interesting scenario where killing the youngest container 
would lower the utilization below the user's limit but killing older, smaller 
containers would not.
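
A sketch of the stopping rule described above, under simplified assumptions: a 
single memory dimension, candidates ordered youngest-first, and hypothetical 
helpers (userUsedMB, userLimitMB, preempt). Illustrative only, not the actual 
preemption code:

{code}
import org.apache.hadoop.yarn.api.records.Container;

long used = userUsedMB;
for (Container c : candidatesYoungestFirst) {
  long afterPreempt = used - c.getResource().getMemorySize();
  if (afterPreempt <= userLimitMB) {
    // Preempting this container would put the user at or below the limit;
    // the scheduler entitles one container beyond the limit, so stop here
    // to avoid the scheduler handing the container right back.
    break;
  }
  preempt(c);
  used = afterPreempt;
}
{code}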

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
> Attachments: 
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
>  YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, 
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, 
> YARN-2113.0007.patch, YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5418) When partial log aggregation is enabled, display the list of aggregated files on the container log page

2017-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973620#comment-15973620
 ] 

Hadoop QA commented on YARN-5418:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m  
6s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  4s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 25 new + 11 unchanged - 8 fixed = 36 total (was 19) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
18s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
45s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common 
generated 23 new + 4558 unchanged - 17 fixed = 4581 total (was 4575) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
29s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 14 new + 231 unchanged - 0 fixed = 245 total (was 231) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 48s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
47s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
33s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
|  |  Should 
org.apache.hadoop.yarn.server.nodemanager.webapp.ContainerLogsPage$ContainersLogsBlock$NMAggregatedLogsBlockRender
 be a _static_ inner class?  At ContainerLogsPage.java:inner class?  At 
ContainerLogsPage.java:[lines 387-401] |
| Failed junit tests | hadoop.yarn.logaggregation.TestAggregatedLogsBlock |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA 

[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-04-18 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973591#comment-15973591
 ] 

Wangda Tan commented on YARN-2113:
--

I think we can tentatively add a per-queue preemption setting for the 
user-limit preemption dead-zone ratio, and refine it later. Thoughts? [~sunilg]

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
> Attachments: 
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
>  YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, 
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, 
> YARN-2113.0007.patch, YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4359) Update LowCost agents logic to take advantage of YARN-4358

2017-04-18 Thread Ishai Menache (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishai Menache updated YARN-4359:

Attachment: YARN-4359.13.patch

> Update LowCost agents logic to take advantage of YARN-4358
> --
>
> Key: YARN-4359
> URL: https://issues.apache.org/jira/browse/YARN-4359
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Carlo Curino
>Assignee: Ishai Menache
>  Labels: oct16-hard
> Attachments: YARN-4359.0.patch, YARN-4359.10.patch, 
> YARN-4359.11.patch, YARN-4359.12.patch, YARN-4359.13.patch, 
> YARN-4359.3.patch, YARN-4359.4.patch, YARN-4359.5.patch, YARN-4359.6.patch, 
> YARN-4359.7.patch, YARN-4359.8.patch, YARN-4359.9.patch
>
>
> Given the improvements of YARN-4358, the LowCost agent should be improved to 
> leverage this, and operate on RLESparseResourceAllocation (ideally leveraging 
> the improvements of YARN-3454 to compute available resources).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-04-18 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973473#comment-15973473
 ] 

Eric Payne commented on YARN-2113:
--

bq. this is a valid corner case to me, ...  I suggest to move it to a separate 
JIRA
Adding a deadzone configuration doesn't sound too difficult, and since I can 
make this happen with regularity, can we put that into this JIRA?

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
> Attachments: 
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
>  YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, 
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, 
> YARN-2113.0007.patch, YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6467) CSQueueMetrics needs to update the current metrics for default partition only

2017-04-18 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973466#comment-15973466
 ] 

Jason Lowe commented on YARN-6467:
--

bq. I thought of segregating partition based queue metrics in a different jira

I'm totally OK with fixing the queue metrics so they only show the default 
partition in this jira, assuming those metrics aren't doing anything sane today 
in light of multiple partitions.  We can defer adding the partition dimension 
to the queue metrics in a separate JIRA.  I had already assumed that was the 
case based on this JIRA's title.


> CSQueueMetrics needs to update the current metrics for default partition only
> -
>
> Key: YARN-6467
> URL: https://issues.apache.org/jira/browse/YARN-6467
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha2
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: YARN-6467.001.patch
>
>
> As a followup to YARN-6195, we need to update the existing metrics for the 
> default partition only.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5418) When partial log aggregation is enabled, display the list of aggregated files on the container log page

2017-04-18 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5418:

Attachment: YARN-5418.2.patch

Rebased the patch.

> When partial log aggregation is enabled, display the list of aggregated files 
> on the container log page
> ---
>
> Key: YARN-5418
> URL: https://issues.apache.org/jira/browse/YARN-5418
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
> Attachments: Screen Shot 2017-03-06 at 1.38.04 PM.png, 
> YARN-5418.1.patch, YARN-5418.2.patch
>
>
> The container log page lists all files. However, as soon as a file gets 
> aggregated, it is no longer available on this listing page.
> It would be useful to list aggregated files as well as the current set of 
> files.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5894) license warning in de.ruedigermoeller:fst:jar:2.24

2017-04-18 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973430#comment-15973430
 ] 

Ray Chiang commented on YARN-5894:
--

[~jeagles], it looks like this was brought in as part of YARN-3448.  Did we get 
some special permission for this license issue?  It seems like this would need 
to be corrected before the final 3.0 release.

> license warning in de.ruedigermoeller:fst:jar:2.24
> --
>
> Key: YARN-5894
> URL: https://issues.apache.org/jira/browse/YARN-5894
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Priority: Blocker
>
> The artifact de.ruedigermoeller:fst:jar:2.24, which ApplicationHistoryService 
> depends on, shows its license as LGPL 2.1 in our license checking.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6467) CSQueueMetrics needs to update the current metrics for default partition only

2017-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973409#comment-15973409
 ] 

Hadoop QA commented on YARN-6467:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 46 unchanged - 0 fixed = 47 total (was 46) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 32s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6467 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12863856/YARN-6467.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e4348823d7c8 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / af8e984 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15664/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15664/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15664/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Commented] (YARN-6363) Extending SLS: Synthetic Load Generator

2017-04-18 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973374#comment-15973374
 ] 

Wangda Tan commented on YARN-6363:
--

Thanks [~curino], I will commit the latest patch tomorrow if there are no 
opposing opinions.

> Extending SLS: Synthetic Load Generator
> ---
>
> Key: YARN-6363
> URL: https://issues.apache.org/jira/browse/YARN-6363
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-6363 overview.pdf, YARN-6363.v0.patch, 
> YARN-6363.v10.patch, YARN-6363.v11.patch, YARN-6363.v12.patch, 
> YARN-6363.v13.patch, YARN-6363.v14.patch, YARN-6363.v15.patch, 
> YARN-6363.v16.patch, YARN-6363.v17.patch, YARN-6363.v1.patch, 
> YARN-6363.v2.patch, YARN-6363.v3.patch, YARN-6363.v4.patch, 
> YARN-6363.v5.patch, YARN-6363.v6.patch, YARN-6363.v7.patch, YARN-6363.v9.patch
>
>
> This JIRA tracks the introduction of a synthetic load generator in the SLS. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6344) Add parameter for rack locality delay in CapacityScheduler

2017-04-18 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973349#comment-15973349
 ] 

Konstantinos Karanasos commented on YARN-6344:
--

[~Huangkx6810], as [~wangda] said, please let us know if it worked on 2.8 (I 
tested it locally when doing the porting).
Then we can discuss 2.7 if it is really required. Also, let us know whether it 
helped with YARN-6289 (and if not, what did you observe?).

> Add parameter for rack locality delay in CapacityScheduler
> --
>
> Key: YARN-6344
> URL: https://issues.apache.org/jira/browse/YARN-6344
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-6344.001.patch, YARN-6344.002.patch, 
> YARN-6344.003.patch, YARN-6344.004.patch, YARN-6344-branch-2.8.patch
>
>
> When relaxing locality from node to rack, the {{node-locality-parameter}} is 
> used: when scheduling opportunities for a scheduler key are more than the 
> value of this parameter, we relax locality and try to assign the container to 
> a node in the corresponding rack.
> On the other hand, when relaxing locality to off-switch (i.e., assign the 
> container anywhere in the cluster), we are using a {{localityWaitFactor}}, 
> which is computed based on the number of outstanding requests for a specific 
> scheduler key, which is divided by the size of the cluster. 
> In case of applications that request containers in big batches (e.g., 
> traditional MR jobs), and for relatively small clusters, the 
> localityWaitFactor does not affect relaxing locality much.
> However, in case of applications that request containers in small batches, 
> this load factor takes a very small value, which leads to assigning 
> off-switch containers too soon. This situation is even more pronounced in big 
> clusters.
> For example, if an application requests only one container per request, the 
> locality will be relaxed after a single missed scheduling opportunity.
> The purpose of this JIRA is to rethink the way we are relaxing locality for 
> off-switch assignments.
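To make the arithmetic concrete, here is a rough sketch of the factor 
described above (the method name and the clamp to 1 are assumptions for 
illustration, not the exact CapacityScheduler code):

{code}
// Simplified model: outstanding requests for a scheduler key divided by
// the number of nodes in the cluster.
public class LocalityWaitFactorSketch {
  static float localityWaitFactor(int outstandingRequests, int clusterNodes) {
    return Math.min(1.0f, (float) outstandingRequests / clusterNodes);
  }

  public static void main(String[] args) {
    // Big-batch MR job: 500 outstanding requests on a 100-node cluster.
    System.out.println(localityWaitFactor(500, 100)); // 1.0 -> waits longest
    // Small-batch app: a single outstanding request on the same cluster.
    System.out.println(localityWaitFactor(1, 100));   // 0.01 -> relaxes to
                                                      // off-switch after a
                                                      // single missed chance
  }
}
{code}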



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6467) CSQueueMetrics needs to update the current metrics for default partition only

2017-04-18 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973345#comment-15973345
 ] 

Naganarasimha G R commented on YARN-6467:
-

Thanks for the patch [~maniraj...@gmail.com],
Currently you have covered: 
* AMResourceLimitMB, AMResourceLimitVCores (also per user) 
* usedAMResourceMB, usedAMResourceVCores (also per user) 

but IMO we still need to capture {{allocatedMB, allocatedVCores, 
availableMB, availableVCores, pendingMB, pendingVCores, reservedMB, 
reservedVCores}}, which are all label based. Once you correct these, I 
presume a lot of other test cases will fail.

[~jlowe], I thought of segregating partition-based queue metrics into a 
separate JIRA, so that this fix can be applied specifically to 2.8.1 and 
2.7.4, while partition-based queue metrics become available in 2.9 and 
3.0-alpha3. Hope this approach is fine; thoughts?
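
A minimal sketch of the kind of default-partition guard being discussed (the 
helper method and interface are hypothetical; the real CSQueueMetrics 
plumbing differs):

{code}
import org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager;

public class DefaultPartitionGuard {
  interface QueueMetricsLike {
    void incrAllocated(long mb, long vcores);
  }

  // Only fold label-based values into the legacy queue metrics when the
  // update is for the default (empty-string) partition, NO_LABEL.
  static void updateAllocated(String partition, long mb, long vcores,
      QueueMetricsLike metrics) {
    if (CommonNodeLabelsManager.NO_LABEL.equals(partition)) {
      metrics.incrAllocated(mb, vcores);
    }
    // Non-default partitions would be covered by separate, label-aware
    // metrics, per the segregation proposed above.
  }
}
{code}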



> CSQueueMetrics needs to update the current metrics for default partition only
> -
>
> Key: YARN-6467
> URL: https://issues.apache.org/jira/browse/YARN-6467
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha2
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: YARN-6467.001.patch
>
>
> As a followup to YARN-6195, we need to update the existing metrics for the 
> default partition only.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6363) Extending SLS: Synthetic Load Generator

2017-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973313#comment-15973313
 ] 

Hadoop QA commented on YARN-6363:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-tools: The patch generated 0 new + 138 
unchanged - 25 fixed = 138 total (was 163) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  2m 
38s{color} | {color:green} The patch generated 0 new + 74 unchanged - 1 fixed = 
74 total (was 75) {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
12s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-rumen in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
48s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6363 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12863862/YARN-6363.v17.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  shellcheck  shelldocs  |
| uname | Linux ba60955c07b5 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / af8e984 |
| Default Java | 1.8.0_121 |
| shellcheck | v0.4.6 |
| 

[jira] [Commented] (YARN-6451) Add RM monitor validating metrics invariants

2017-04-18 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973243#comment-15973243
 ] 

Carlo Curino commented on YARN-6451:


Thanks [~chris.douglas], I might cherry-pick it back to branch-2 later on.

> Add RM monitor validating metrics invariants
> 
>
> Key: YARN-6451
> URL: https://issues.apache.org/jira/browse/YARN-6451
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6451.v0.patch, YARN-6451.v1.patch, 
> YARN-6451.v2.patch, YARN-6451.v3.patch, YARN-6451.v4.patch, YARN-6451.v5.patch
>
>
> For SLS runs, as well as for live test clusters (and maybe prod), it would be 
> useful to have a mechanism to continuously check whether core invariants of 
> the RM/Scheduler are respected (e.g., no priority inversions, fairness mostly 
> respected, certain latencies within expected range, etc..)
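
For flavor, an invariants file for such a checker could contain simple 
boolean expressions over scheduler metrics, evaluated continuously 
(hypothetical entries; the committed invariants.txt defines the actual 
syntax and metric names):

{code}
AppsRunning >= 0
AppsPending >= 0
AggregateContainersAllocated >= AggregateContainersReleased
{code}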



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6451) Add RM monitor validating metrics invariants

2017-04-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973230#comment-15973230
 ] 

Hudson commented on YARN-6451:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11601 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11601/])
YARN-6451. Add RM monitor validating metrics invariants. Contributed by 
(cdouglas: rev af8e9842d2ca566528e09d905b609f1cf160d367)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/invariants/TestMetricsInvariantChecker.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/invariants/MetricsInvariantChecker.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/invariants/InvariantsChecker.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/invariants/package-info.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/invariants.txt
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/invariants/InvariantViolationException.java


> Add RM monitor validating metrics invariants
> 
>
> Key: YARN-6451
> URL: https://issues.apache.org/jira/browse/YARN-6451
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6451.v0.patch, YARN-6451.v1.patch, 
> YARN-6451.v2.patch, YARN-6451.v3.patch, YARN-6451.v4.patch, YARN-6451.v5.patch
>
>
> For SLS runs, as well as for live test clusters (and maybe prod), it would be 
> useful to have a mechanism to continuously check whether core invariants of 
> the RM/Scheduler are respected (e.g., no priority inversions, fairness mostly 
> respected, certain latencies within expected range, etc..)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6363) Extending SLS: Synthetic Load Generator

2017-04-18 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-6363:
---
Attachment: YARN-6363.v17.patch

> Extending SLS: Synthetic Load Generator
> ---
>
> Key: YARN-6363
> URL: https://issues.apache.org/jira/browse/YARN-6363
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-6363 overview.pdf, YARN-6363.v0.patch, 
> YARN-6363.v10.patch, YARN-6363.v11.patch, YARN-6363.v12.patch, 
> YARN-6363.v13.patch, YARN-6363.v14.patch, YARN-6363.v15.patch, 
> YARN-6363.v16.patch, YARN-6363.v17.patch, YARN-6363.v1.patch, 
> YARN-6363.v2.patch, YARN-6363.v3.patch, YARN-6363.v4.patch, 
> YARN-6363.v5.patch, YARN-6363.v6.patch, YARN-6363.v7.patch, YARN-6363.v9.patch
>
>
> This JIRA tracks the introduction of a synthetic load generator in the SLS. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3839) Quit throwing NMNotYetReadyException

2017-04-18 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973202#comment-15973202
 ] 

Manikandan R commented on YARN-3839:


[~jlowe], [~jianhe] Thanks for your suggestions.

Attaching a patch for review. I've made the changes based on our earlier 
conversations - mostly code cleanup and the corresponding test cases. As part 
of this, TestNodeManagerResync#testBlockNewContainerRequestsOnStartAndResync() 
has also been cleaned up. Given this, is it better to write new test cases 
against the new patch to validate the code (since some other exception would 
be thrown while the NM is restarting; for example, an InvalidToken exception 
would be thrown instead of NMNotYetReadyException)? Please review and let me 
know your comments.

> Quit throwing NMNotYetReadyException
> 
>
> Key: YARN-3839
> URL: https://issues.apache.org/jira/browse/YARN-3839
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Karthik Kambatla
>Assignee: Manikandan R
> Attachments: YARN-3839.001.patch
>
>
> Quit throwing NMNotYetReadyException when NM has not yet registered with the 
> RM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3839) Quit throwing NMNotYetReadyException

2017-04-18 Thread Manikandan R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R updated YARN-3839:
---
Attachment: YARN-3839.001.patch

> Quit throwing NMNotYetReadyException
> 
>
> Key: YARN-3839
> URL: https://issues.apache.org/jira/browse/YARN-3839
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Karthik Kambatla
>Assignee: Manikandan R
> Attachments: YARN-3839.001.patch
>
>
> Quit throwing NMNotYetReadyException when NM has not yet registered with the 
> RM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5892) Capacity Scheduler: Support user-specific minimum user limit percent

2017-04-18 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973200#comment-15973200
 ] 

Wangda Tan commented on YARN-5892:
--

Thanks [~eepayne] for your detailed explanations:

bq. No, that's not how it will work with this implementation.
Yeah, you're correct; it works with this implementation.

bq. Instead of a warning, I will cause the parse of the CS config to fail if 
weight is < 0
Sounds good. 

bq. I am afraid that I disagree for reasons stated above. #2 can be addressed 
with a simple check
It was a typo in my previous comment; it should be {{#1}}, and as you 
commented, it is not an issue.



> Capacity Scheduler: Support user-specific minimum user limit percent
> 
>
> Key: YARN-5892
> URL: https://issues.apache.org/jira/browse/YARN-5892
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: Active users highlighted.jpg, YARN-5892.001.patch, 
> YARN-5892.002.patch, YARN-5892.003.patch, YARN-5892.004.patch, 
> YARN-5892.005.patch, YARN-5892.006.patch, YARN-5892.007.patch, 
> YARN-5892.008.patch, YARN-5892.009.patch
>
>
> Currently, in the capacity scheduler, the {{minimum-user-limit-percent}} 
> property is per queue. A cluster admin should be able to set the minimum user 
> limit percent on a per-user basis within the queue.
> This functionality is needed so that when intra-queue preemption is enabled 
> (YARN-4945 / YARN-2113), some users can be deemed as more important than 
> other users, and resources from VIP users won't be as likely to be preempted.
> For example, if the {{getstuffdone}} queue has a MULP of 25 percent, but user 
> {{jane}} is a power user of queue {{getstuffdone}} and needs to be guaranteed 
> 75 percent, the properties for {{getstuffdone}} and {{jane}} would look like 
> this:
> {code}
> <property>
>   <name>yarn.scheduler.capacity.root.getstuffdone.minimum-user-limit-percent</name>
>   <value>25</value>
> </property>
> <property>
>   <name>yarn.scheduler.capacity.root.getstuffdone.jane.minimum-user-limit-percent</name>
>   <value>75</value>
> </property>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5892) Capacity Scheduler: Support user-specific minimum user limit percent

2017-04-18 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973191#comment-15973191
 ] 

Wangda Tan commented on YARN-5892:
--

[~jlowe], thanks for your comments.

bq. I think the weight needs to apply to the user limit factor as well ...
This makes sense to me: weight = 0 means the user cannot use any 
resources at all, which is also consistent with your previous comment:
bq. A practical use of this could be to essentially "pause" a user in a queue

Regarding weights less than 1, I agree with this part: 
bq. I worry that the longer we keep support for it out of the codebase the more 
difficult it can become to introduce it later. 
Also, as mentioned by [~eepayne], I do not think this is a problem either:
bq. When there're several active users with weight < 1.

So I'm OK with setting weight to any value >= 0.



> Capacity Scheduler: Support user-specific minimum user limit percent
> 
>
> Key: YARN-5892
> URL: https://issues.apache.org/jira/browse/YARN-5892
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: Active users highlighted.jpg, YARN-5892.001.patch, 
> YARN-5892.002.patch, YARN-5892.003.patch, YARN-5892.004.patch, 
> YARN-5892.005.patch, YARN-5892.006.patch, YARN-5892.007.patch, 
> YARN-5892.008.patch, YARN-5892.009.patch
>
>
> Currently, in the capacity scheduler, the {{minimum-user-limit-percent}} 
> property is per queue. A cluster admin should be able to set the minimum user 
> limit percent on a per-user basis within the queue.
> This functionality is needed so that when intra-queue preemption is enabled 
> (YARN-4945 / YARN-2113), some users can be deemed as more important than 
> other users, and resources from VIP users won't be as likely to be preempted.
> For example, if the {{getstuffdone}} queue has a MULP of 25 percent, but user 
> {{jane}} is a power user of queue {{getstuffdone}} and needs to be guaranteed 
> 75 percent, the properties for {{getstuffdone}} and {{jane}} would look like 
> this:
> {code}
> <property>
>   <name>yarn.scheduler.capacity.root.getstuffdone.minimum-user-limit-percent</name>
>   <value>25</value>
> </property>
> <property>
>   <name>yarn.scheduler.capacity.root.getstuffdone.jane.minimum-user-limit-percent</name>
>   <value>75</value>
> </property>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5892) Capacity Scheduler: Support user-specific minimum user limit percent

2017-04-18 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973168#comment-15973168
 ] 

Jason Lowe commented on YARN-5892:
--

bq. Also, weight of users applies to hard limit of user (user limit factor) as 
well. This is a gray area to me, since it may cause some issue of resource 
planning (one more factor apply to maximum resource of user).

I think the weight needs to apply to the user limit factor as well.  
Semantically a user with a weight of 2 should be equivalent to spreading that 
user's load across two "normal" users.  That means a user of weight 2 should 
get twice the normal limit factor, since two users who both hit their ULF means 
twice the ULF load was allocated to the queue.  If we don't apply the weight to 
the ULF as well then the math isn't consistent -- the 2x user isn't exactly 
like having two users sharing a load.
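
A quick worked example of that consistency argument (the numbers are 
illustrative only):

{code}
public class WeightedUlfExample {
  public static void main(String[] args) {
    float queueCapacityGB = 100f; // illustrative queue size
    float ulf = 1.0f;             // user-limit-factor
    float weight = 2.0f;          // per-user weight

    // Scaling the ULF by the weight keeps the math consistent: a weight-2
    // user may reach 200 GB, exactly what two weight-1 users each hitting
    // their ULF (100 GB apiece) would consume together.
    System.out.println(weight * ulf * queueCapacityGB); // 200.0
  }
}
{code}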


> Capacity Scheduler: Support user-specific minimum user limit percent
> 
>
> Key: YARN-5892
> URL: https://issues.apache.org/jira/browse/YARN-5892
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: Active users highlighted.jpg, YARN-5892.001.patch, 
> YARN-5892.002.patch, YARN-5892.003.patch, YARN-5892.004.patch, 
> YARN-5892.005.patch, YARN-5892.006.patch, YARN-5892.007.patch, 
> YARN-5892.008.patch, YARN-5892.009.patch
>
>
> Currently, in the capacity scheduler, the {{minimum-user-limit-percent}} 
> property is per queue. A cluster admin should be able to set the minimum user 
> limit percent on a per-user basis within the queue.
> This functionality is needed so that when intra-queue preemption is enabled 
> (YARN-4945 / YARN-2113), some users can be deemed as more important than 
> other users, and resources from VIP users won't be as likely to be preempted.
> For example, if the {{getstuffdone}} queue has a MULP of 25 percent, but user 
> {{jane}} is a power user of queue {{getstuffdone}} and needs to be guaranteed 
> 75 percent, the properties for {{getstuffdone}} and {{jane}} would look like 
> this:
> {code}
> <property>
>   <name>yarn.scheduler.capacity.root.getstuffdone.minimum-user-limit-percent</name>
>   <value>25</value>
> </property>
> <property>
>   <name>yarn.scheduler.capacity.root.getstuffdone.jane.minimum-user-limit-percent</name>
>   <value>75</value>
> </property>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6467) CSQueueMetrics needs to update the current metrics for default partition only

2017-04-18 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973155#comment-15973155
 ] 

Manikandan R commented on YARN-6467:


[~Naganarasimha],

Attaching a draft patch for review. I expected test cases like 
TestLeafQueue#testAppAttemptMetrics to fail, but they ran without any errors. 
Please validate.

> CSQueueMetrics needs to update the current metrics for default partition only
> -
>
> Key: YARN-6467
> URL: https://issues.apache.org/jira/browse/YARN-6467
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha2
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: YARN-6467.001.patch
>
>
> As a followup to YARN-6195, we need to update the existing metrics for the 
> default partition only.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6467) CSQueueMetrics needs to update the current metrics for default partition only

2017-04-18 Thread Manikandan R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R updated YARN-6467:
---
Attachment: YARN-6467.001.patch

> CSQueueMetrics needs to update the current metrics for default partition only
> -
>
> Key: YARN-6467
> URL: https://issues.apache.org/jira/browse/YARN-6467
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha2
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: YARN-6467.001.patch
>
>
> As a followup to YARN-6195, we need to update the existing metrics for the 
> default partition only.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6335) Port slider's groovy unit tests to yarn native services

2017-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973153#comment-15973153
 ] 

Hadoop QA commented on YARN-6335:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 110 new or modified 
test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
55s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
15s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
35s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} yarn-native-services passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications generated 0 new + 30 
unchanged - 6 fixed = 30 total (was 36) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications: The patch generated 
39 new + 1198 unchanged - 154 fixed = 1237 total (was 1352) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 29s{color} 
| {color:red} hadoop-yarn-slider-core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
15s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
slider.server.appmaster.timelineservice.TestServiceTimelinePublisher |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6335 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12863849/YARN-6335-yarn-native-services.007.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  cc  |
| uname | Linux 33d65d83dd00 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| 

[jira] [Updated] (YARN-6451) Add RM monitor validating metrics invariants

2017-04-18 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated YARN-6451:

Issue Type: New Feature  (was: Bug)

> Add RM monitor validating metrics invariants
> 
>
> Key: YARN-6451
> URL: https://issues.apache.org/jira/browse/YARN-6451
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-6451.v0.patch, YARN-6451.v1.patch, 
> YARN-6451.v2.patch, YARN-6451.v3.patch, YARN-6451.v4.patch, YARN-6451.v5.patch
>
>
> For SLS runs, as well as for live test clusters (and maybe prod), it would be 
> useful to have a mechanism to continuously check whether core invariants of 
> the RM/Scheduler are respected (e.g., no priority inversions, fairness mostly 
> respected, certain latencies within expected range, etc..)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6451) Add RM monitor validating metrics invariants

2017-04-18 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated YARN-6451:

Summary: Add RM monitor validating metrics invariants  (was: Create a 
monitor to check whether we maintain RM (scheduling) invariants)

> Add RM monitor validating metrics invariants
> 
>
> Key: YARN-6451
> URL: https://issues.apache.org/jira/browse/YARN-6451
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-6451.v0.patch, YARN-6451.v1.patch, 
> YARN-6451.v2.patch, YARN-6451.v3.patch, YARN-6451.v4.patch, YARN-6451.v5.patch
>
>
> For SLS runs, as well as for live test clusters (and maybe prod), it would be 
> useful to have a mechanism to continuously check whether core invariants of 
> the RM/Scheduler are respected (e.g., no priority inversions, fairness mostly 
> respected, certain latencies within expected range, etc..)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6335) Port slider's groovy unit tests to yarn native services

2017-04-18 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-6335:
-
Attachment: YARN-6335-yarn-native-services.007.patch

Patch 007 should fix the javac and some checkstyle issues. I think the unit 
test failure is due to the recent rebase and is not introduced by this patch.

> Port slider's groovy unit tests to yarn native services
> ---
>
> Key: YARN-6335
> URL: https://issues.apache.org/jira/browse/YARN-6335
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-6335-yarn-native-services.001.patch, 
> YARN-6335-yarn-native-services.002.patch, 
> YARN-6335-yarn-native-services.003.patch, 
> YARN-6335-yarn-native-services.004.patch, 
> YARN-6335-yarn-native-services.005.patch, 
> YARN-6335-yarn-native-services.006.patch, 
> YARN-6335-yarn-native-services.007.patch
>
>
> Slider has a lot of useful unit tests implemented in Groovy. We could convert 
> these to Java for YARN native services. The scope of this ticket will 
> include unit / minicluster tests only and will not include Slider's funtests, 
> which require a running cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6344) Add parameter for rack locality delay in CapacityScheduler

2017-04-18 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973084#comment-15973084
 ] 

Wangda Tan commented on YARN-6344:
--

[~Huangkx6810], if you could help verify that it works in 2.8, I can help with 
review and merge to branch-2.8. For 2.7 it might be trickier, since the code 
has diverged quite a bit between 2.8 and 2.7, so I think more effort is 
required to backport to 2.7.x.

> Add parameter for rack locality delay in CapacityScheduler
> --
>
> Key: YARN-6344
> URL: https://issues.apache.org/jira/browse/YARN-6344
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-6344.001.patch, YARN-6344.002.patch, 
> YARN-6344.003.patch, YARN-6344.004.patch, YARN-6344-branch-2.8.patch
>
>
> When relaxing locality from node to rack, the {{node-locality-parameter}} is 
> used: when scheduling opportunities for a scheduler key are more than the 
> value of this parameter, we relax locality and try to assign the container to 
> a node in the corresponding rack.
> On the other hand, when relaxing locality to off-switch (i.e., assign the 
> container anywhere in the cluster), we are using a {{localityWaitFactor}}, 
> which is computed based on the number of outstanding requests for a specific 
> scheduler key, which is divided by the size of the cluster. 
> In case of applications that request containers in big batches (e.g., 
> traditional MR jobs), and for relatively small clusters, the 
> localityWaitFactor does not affect relaxing locality much.
> However, in case of applications that request containers in small batches, 
> this load factor takes a very small value, which leads to assigning 
> off-switch containers too soon. This situation is even more pronounced in big 
> clusters.
> For example, if an application requests only one container per request, the 
> locality will be relaxed after a single missed scheduling opportunity.
> The purpose of this JIRA is to rethink the way we are relaxing locality for 
> off-switch assignments.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4266) Allow whitelisted users to disable user re-mapping/squashing when launching docker containers

2017-04-18 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972986#comment-15972986
 ] 

Eric Badger commented on YARN-4266:
---

Hey [~tangzhankun], [~luhuichun], wondering if there's any update on the 
status of this. Do you have any sort of target date? Thanks!

> Allow whitelisted users to disable user re-mapping/squashing when launching 
> docker containers
> -
>
> Key: YARN-4266
> URL: https://issues.apache.org/jira/browse/YARN-4266
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Zhankun Tang
> Attachments: YARN-4266.001.patch, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping.pdf, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v2.pdf, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v3.pdf, 
> YARN-4266-branch-2.8.001.patch
>
>
> Docker provides a mechanism (the --user switch) that enables us to specify 
> the user the container processes should run as. We use this mechanism today 
> when launching docker containers. In non-secure mode, we run the docker 
> container based on 
> `yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user` and in 
> secure mode, as the submitting user. However, this mechanism breaks down with 
> a large number of 'pre-created' images which don't necessarily have the users 
> available within the image. Examples of such images include shared images 
> that need to be used by multiple users. We need a way in which we can allow a 
> pre-defined set of users to run containers based on existing images, without 
> using the --user switch. There are some implications of disabling this user 
> squashing that we'll need to work through: log aggregation, artifact 
> deletion, etc.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6335) Port slider's groovy unit tests to yarn native services

2017-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972938#comment-15972938
 ] 

Hadoop QA commented on YARN-6335:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 110 new or modified 
test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
58s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
35s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} yarn-native-services passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 37s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications 
generated 4 new + 30 unchanged - 6 fixed = 34 total (was 36) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 32s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications: The patch generated 
43 new + 1198 unchanged - 154 fixed = 1241 total (was 1352) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 29s{color} 
| {color:red} hadoop-yarn-slider-core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
16s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
slider.server.appmaster.timelineservice.TestServiceTimelinePublisher |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6335 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12863827/YARN-6335-yarn-native-services.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  cc  |
| uname | Linux ac7d690fd7ad 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | 

[jira] [Updated] (YARN-6335) Port slider's groovy unit tests to yarn native services

2017-04-18 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-6335:
-
Attachment: YARN-6335-yarn-native-services.006.patch

I tried to address Jian's comments in this patch, as well as some findbugs and 
checkstyle issues. I also found another deprecated client action to be removed.

> Port slider's groovy unit tests to yarn native services
> ---
>
> Key: YARN-6335
> URL: https://issues.apache.org/jira/browse/YARN-6335
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-6335-yarn-native-services.001.patch, 
> YARN-6335-yarn-native-services.002.patch, 
> YARN-6335-yarn-native-services.003.patch, 
> YARN-6335-yarn-native-services.004.patch, 
> YARN-6335-yarn-native-services.005.patch, 
> YARN-6335-yarn-native-services.006.patch
>
>
> Slider has a lot of useful unit tests implemented in Groovy. We could convert 
> these to Java for YARN native services. The scope of this ticket will 
> include unit / minicluster tests only and will not include Slider's funtests, 
> which require a running cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6335) Port slider's groovy unit tests to yarn native services

2017-04-18 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972851#comment-15972851
 ] 

Billie Rinaldi commented on YARN-6335:
--

bq. Looks like the "yarn.resource.normalization.enabled" will only control 
whether slider AM will normalize the request at client side or not. But, 
regardless of this setting, the normalization will always be done in YARN 
scheduler to round up the resource size.

This normalization is specifically for capping the resource request at the 
maximum value allowed by YARN. Slider used to automatically lower the resource 
request when it was too high, but some users ran into issues because their app 
could not run with lower resources. They preferred the app to fail due to 
requesting resources that were too high, which is why this parameter was 
introduced.
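
A minimal sketch of the capping behavior described above, assuming 
hypothetical names (this is not Slider's actual code):

{code}
public class NormalizationSketch {
  static long effectiveRequestMB(long requestedMB, long yarnMaxMB,
      boolean normalizationEnabled) {
    if (normalizationEnabled) {
      return Math.min(requestedMB, yarnMaxMB); // silently cap at the YARN max
    }
    return requestedMB; // sent as-is; YARN rejects an oversized ask, so the
                        // app fails fast instead of running under-resourced
  }

  public static void main(String[] args) {
    System.out.println(effectiveRequestMB(16384, 8192, true));  // 8192
    System.out.println(effectiveRequestMB(16384, 8192, false)); // 16384
  }
}
{code}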

bq. What's the difference between these two paths: the path defined in 
deleteZookeeperNode and the path defined by registryPathForInstance ?

The first one is the ZK node created for the app's use, separate from the 
registry nodes that are used by Slider. In Slider, the DEFAULT_ZK_PATH variable 
was set for the app here: 
https://github.com/apache/incubator-slider/blob/develop/slider-core/src/main/java/org/apache/slider/client/SliderClient.java#L1751-L1775
and here: 
https://github.com/apache/incubator-slider/blob/develop/slider-core/src/main/java/org/apache/slider/core/build/InstanceBuilder.java#L491
but it looks like this code has been removed in YARN native services. We should 
reintroduce this functionality.

bq. Even if we check the existence of the app dir at the last line of the 
method, if the create happens to be done right after this check and before 
the method returns, it is still the same problem. To the user this just looks 
as if the create immediately happens after the destroy. It is a micro 
optimization, but the semantics are still not deterministic.

Okay, I will remove the last check. I am more concerned about the case where 
something went wrong on the Slider side, rather than a create/destroy race 
condition on the user side. It is very frustrating as a user for destroy to succeed 
and the HDFS directory still to exist, preventing creation of the app again. 
But since we are checking the return value of fs.delete, this shouldn't happen.
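
A minimal sketch of the delete-and-verify step being described (the method 
and variable names are assumptions):

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DestroyAppDirSketch {
  // Destroy should only report success when the app dir is really gone;
  // FileSystem#delete returns false if nothing was deleted.
  static void destroyAppDir(FileSystem fs, Path appDir) throws IOException {
    if (fs.exists(appDir) && !fs.delete(appDir, true /* recursive */)) {
      throw new IOException("Failed to delete app dir " + appDir);
    }
    // Deliberately no existence re-check afterwards: a concurrent create
    // would make that check non-deterministic, per the discussion above.
  }
}
{code}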

> Port slider's groovy unit tests to yarn native services
> ---
>
> Key: YARN-6335
> URL: https://issues.apache.org/jira/browse/YARN-6335
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-6335-yarn-native-services.001.patch, 
> YARN-6335-yarn-native-services.002.patch, 
> YARN-6335-yarn-native-services.003.patch, 
> YARN-6335-yarn-native-services.004.patch, 
> YARN-6335-yarn-native-services.005.patch
>
>
> Slider has a lot of useful unit tests implemented in Groovy. We could convert 
> these to Java for YARN native services. The scope of this ticket will 
> include unit / minicluster tests only and will not include Slider's funtests, 
> which require a running cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5892) Capacity Scheduler: Support user-specific minimum user limit percent

2017-04-18 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972821#comment-15972821
 ] 

Jason Lowe commented on YARN-5892:
--

I'm +1 for weight == 0.  As long as it doesn't break the code (e.g.: division 
by zero, etc.) and does something semantically consistent with weights then I 
don't see why we should disallow it.  
A practical use of this could be to essentially "pause" a user in a queue -- it 
won't reject the user's app submissions like changing the queue ACLs would, but 
the user will get very little to no resources until the weight becomes non-zero.

I'm also +1 for allowing the weight to be less than 1. It looks like it works 
with the patch now, and I worry that the longer we keep support for it out of 
the codebase, the more difficult it will become to introduce later. People will 
see in the existing code that it cannot be less than 1 and end up assuming 
(explicitly or implicitly) that it never will be.
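
A tiny illustration of both points (hypothetical names, not the actual 
CapacityScheduler code): weight 0 effectively pauses a user, a fractional 
weight scales the limit down, and no division is involved in this direction:
{code}
public class WeightSemanticsSketch {
  // Scaling the per-user limit by the weight: 0 "pauses" the user without
  // any divide-by-zero hazard.
  static double effectiveLimit(double baseUserLimit, double weight) {
    if (weight < 0) {
      throw new IllegalArgumentException("weight must be >= 0: " + weight);
    }
    return baseUserLimit * weight;
  }

  public static void main(String[] args) {
    double base = 1000.0; // some base user limit, say in MB
    System.out.println(effectiveLimit(base, 0.0)); // 0.0    -> user "paused"
    System.out.println(effectiveLimit(base, 0.5)); // 500.0  -> weight < 1
    System.out.println(effectiveLimit(base, 2.0)); // 2000.0 -> boosted user
  }
}
{code}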


> Capacity Scheduler: Support user-specific minimum user limit percent
> 
>
> Key: YARN-5892
> URL: https://issues.apache.org/jira/browse/YARN-5892
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: Active users highlighted.jpg, YARN-5892.001.patch, 
> YARN-5892.002.patch, YARN-5892.003.patch, YARN-5892.004.patch, 
> YARN-5892.005.patch, YARN-5892.006.patch, YARN-5892.007.patch, 
> YARN-5892.008.patch, YARN-5892.009.patch
>
>
> Currently, in the capacity scheduler, the {{minimum-user-limit-percent}} 
> property is per queue. A cluster admin should be able to set the minimum user 
> limit percent on a per-user basis within the queue.
> This functionality is needed so that when intra-queue preemption is enabled 
> (YARN-4945 / YARN-2113), some users can be deemed as more important than 
> other users, and resources from VIP users won't be as likely to be preempted.
> For example, if the {{getstuffdone}} queue has a MULP of 25 percent, but user 
> {{jane}} is a power user of queue {{getstuffdone}} and needs to be guaranteed 
> 75 percent, the properties for {{getstuffdone}} and {{jane}} would look like 
> this:
> {code}
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.minimum-user-limit-percent</name>
>     <value>25</value>
>   </property>
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.jane.minimum-user-limit-percent</name>
>     <value>75</value>
>   </property>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5892) Capacity Scheduler: Support user-specific minimum user limit percent

2017-04-18 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972782#comment-15972782
 ] 

Eric Payne commented on YARN-5892:
--

[~leftnoteasy], thank you very much for your in-depth review and comments.
{quote}
1) When there're several active users with \[combined sum of\] weights < 1. ... 
However in this implementation ... a1 can get all queue's resource (because 
#active-user-applied-weights = 1/0.3) while a2 got starved.
{quote}
No, that's not how it will work with this implementation.

[~sunilg] had a similar question 
[above|https://issues.apache.org/jira/browse/YARN-5892?focusedCommentId=15966197=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15966197].
 Having a combined sum of weights < 1 works because {{userLimitResource}} (the 
return value of {{computeUserLimit}}) is only ever used by 
{{getComputedResourceLimitFor\[Active|All\]Users}}, which multiplies the value 
of {{userLimitResource}} by the appropriate user weight(s). This results in 
the correct value of {{userLimit}} for each specific user. If the sum of the 
active users' weights is < 1, then it is true that {{userLimitResource}} is 
larger than the actual user limit, and sometimes even larger than the actual 
amount of resources used. However, this algorithm calculates {{userLimit}} 
correctly and consistently when 
{{getComputedResourceLimitFor\[Active|All\]Users}} multiplies it by each user's 
weight.
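
As a rough sketch of that relationship (illustrative names only, not the 
actual scheduler code), even when the active users' weights sum to 0.4, each 
user's effective limit comes out right:
{code}
import java.util.Map;

public class UserLimitConsistencySketch {
  // Mirrors the role of getComputedResourceLimitFor[Active|All]Users: the
  // base userLimitResource is scaled by each user's weight.
  static double limitFor(double userLimitResource, double weight) {
    return userLimitResource * weight;
  }

  public static void main(String[] args) {
    Map<String, Double> activeWeights = Map.of("a1", 0.3, "a2", 0.1);
    // Suppose computeUserLimit() returns a base value larger than the
    // queue's actual per-user share because the weights sum to < 1:
    double userLimitResource = 1000.0;
    activeWeights.forEach((user, w) ->
        System.out.println(user + " -> " + limitFor(userLimitResource, w)));
    // a1 -> 300.0, a2 -> 100.0: a1 does not swallow the whole queue.
  }
}
{code}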

bq. 2) I would like to prevent setting user's weight to <= 0.
Instead of a warning, I will make parsing of the CS config fail if the weight 
is < 0. I would like [~jlowe]'s and [~nroberts]'s feedback on whether or not 
{{weight == 0}} is reasonable and consistent.
{quote}
Generally speaking, setting user weight < 1 is a reasonable requirement; 
however, I don't think we're ready for that. It looks like there are a bunch of 
things we need to do to make #2 and the related preemption logic work properly.
{quote}
I am afraid that I disagree, for the reasons stated above. #2 can be addressed 
with a simple check that treats the failure the same as other parsing issues. 
The one concern that remains in my mind is to ensure that this algorithm 
calculates {{allUserLimit}} correctly for preemption. I have not yet combined 
and tested this patch with the one for YARN-2113. I will do so and post my 
findings.
{quote}
Beyond that, I suggest making #active-users-times-weight updatable in O(1) 
for every change to the active users set or to any active user's weight.
{quote}
Yes, good point. Although #active-users-times-weight and #users-times-weight 
are only calculated in {{computeUserLimit}}, and {{computeUserLimit}} is only 
called when a significant event happens, we could eliminate the need to 
recalculate these for events like container allocation and container release. I 
will modify the patch to do this.
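
For example, a sketch of maintaining the sum incrementally (hypothetical 
names), so each activation, deactivation, or weight change costs O(1) instead 
of a full recalculation:
{code}
import java.util.HashMap;
import java.util.Map;

public class ActiveWeightSumSketch {
  private final Map<String, Double> activeWeights = new HashMap<>();
  private double sum = 0.0; // running #active-users-times-weight

  void userActivated(String user, double weight) {
    Double prev = activeWeights.put(user, weight);
    sum += weight - (prev == null ? 0.0 : prev);
  }

  void userDeactivated(String user) {
    Double prev = activeWeights.remove(user);
    if (prev != null) {
      sum -= prev;
    }
  }

  void weightUpdated(String user, double newWeight) {
    if (activeWeights.containsKey(user)) {
      userActivated(user, newWeight); // same O(1) adjustment
    }
  }

  double activeUsersTimesWeight() {
    return sum; // O(1) read for computeUserLimit
  }
}
{code}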
{quote}
Also, the weight of users applies to the hard limit of the user (user limit 
factor) as well. This is a gray area to me, since it may cause some issues for 
resource planning (one more factor applied to the maximum resource of a user). 
Would like to hear thoughts from Jason Lowe/Sunil G as well.
{quote}
I look forward to [~jlowe]'s and [~sunilg]'s comments.


> Capacity Scheduler: Support user-specific minimum user limit percent
> 
>
> Key: YARN-5892
> URL: https://issues.apache.org/jira/browse/YARN-5892
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: Active users highlighted.jpg, YARN-5892.001.patch, 
> YARN-5892.002.patch, YARN-5892.003.patch, YARN-5892.004.patch, 
> YARN-5892.005.patch, YARN-5892.006.patch, YARN-5892.007.patch, 
> YARN-5892.008.patch, YARN-5892.009.patch
>
>
> Currently, in the capacity scheduler, the {{minimum-user-limit-percent}} 
> property is per queue. A cluster admin should be able to set the minimum user 
> limit percent on a per-user basis within the queue.
> This functionality is needed so that when intra-queue preemption is enabled 
> (YARN-4945 / YARN-2113), some users can be deemed as more important than 
> other users, and resources from VIP users won't be as likely to be preempted.
> For example, if the {{getstuffdone}} queue has a MULP of 25 percent, but user 
> {{jane}} is a power user of queue {{getstuffdone}} and needs to be guaranteed 
> 75 percent, the properties for {{getstuffdone}} and {{jane}} would look like 
> this:
> {code}
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.minimum-user-limit-percent</name>
>     <value>25</value>
>   </property>
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.jane.minimum-user-limit-percent</name>
>     <value>75</value>
>   </property>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (YARN-4166) Support changing container cpu resource

2017-04-18 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972444#comment-15972444
 ] 

Naganarasimha G R commented on YARN-4166:
-

[~fly_in_gis], earlier in an offline discussion, Ming Ma was supposed to take 
it over, and since there were also more modifications happening in NM resource 
handling, it had stalled.
I think I can start on it now; will try to post an update on this shortly.

> Support changing container cpu resource
> ---
>
> Key: YARN-4166
> URL: https://issues.apache.org/jira/browse/YARN-4166
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, nodemanager, resourcemanager
>Reporter: Jian He
>Assignee: Naganarasimha G R
>
> Memory resizing is now supported, we need to support the same for cpu.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4166) Support changing container cpu resource

2017-04-18 Thread Yang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972334#comment-15972334
 ] 

Yang Wang commented on YARN-4166:
-

Hi [~Naganarasimha],
Are you still working on this? Could you share your progress, please?

> Support changing container cpu resource
> ---
>
> Key: YARN-4166
> URL: https://issues.apache.org/jira/browse/YARN-4166
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, nodemanager, resourcemanager
>Reporter: Jian He
>Assignee: Naganarasimha G R
>
> Memory resizing is now supported, we need to support the same for cpu.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6335) Port slider's groovy unit tests to yarn native services

2017-04-18 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972248#comment-15972248
 ] 

Jian He commented on YARN-6335:
---

bq. This is pulling in changes made in SLIDER-1201, so that I would not have to 
make significant changes to TestRoleHistoryOutstandingRequestTracker (which 
tests the resource normalization feature).
One point of confusion about the semantics: it looks like 
"yarn.resource.normalization.enabled" will only control whether the Slider AM 
normalizes the request on the client side. But regardless of this setting, the 
normalization will always be done in the YARN scheduler to round up the 
resource size. So this config does not seem to solve the use case mentioned in 
SLIDER-1201? Basically, because the scheduler is always doing normalization, 
this config is not useful.
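
For reference, the scheduler-side round-up being described works roughly like 
this (a sketch with illustrative constants, not the actual normalization code):
{code}
public class SchedulerNormalizeSketch {
  // The scheduler rounds every request up to a multiple of the minimum
  // allocation, capped at the maximum, regardless of client-side settings.
  static long normalizeUp(long requestedMb, long minAllocMb, long maxAllocMb) {
    long rounded = ((requestedMb + minAllocMb - 1) / minAllocMb) * minAllocMb;
    return Math.min(Math.max(rounded, minAllocMb), maxAllocMb);
  }

  public static void main(String[] args) {
    System.out.println(normalizeUp(1100, 1024, 8192)); // 2048: rounded up
    System.out.println(normalizeUp(500, 1024, 8192));  // 1024: at least min
    System.out.println(normalizeUp(9000, 1024, 8192)); // 8192: capped at max
  }
}
{code}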

- What's the difference between these two paths: the path defined in 
deleteZookeeperNode and the path defined by registryPathForInstance? The paths 
for these two look very similar:
{code}
if (!deleteZookeeperNode(appName)) {
  String message =
      "Failed to cleanup application " + appName + " in zookeeper";
  log.warn(message);
  throw new YarnException(message);
}

//TODO clean registry?
String registryPath = SliderRegistryUtils.registryPathForInstance(
    appName);
{code}
bq. Maybe we should check for the existence of the app directory instead of 
checking for a live app?
Even if we check the existence of the app dir at the last line of the method, 
if the create happens to be done right after this check and before the method 
returns, it is still the same problem. To the user this just looks as if the 
create immediately happens after the destroy. It is a micro-optimization, but 
the semantics are still not deterministic.

> Port slider's groovy unit tests to yarn native services
> ---
>
> Key: YARN-6335
> URL: https://issues.apache.org/jira/browse/YARN-6335
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-6335-yarn-native-services.001.patch, 
> YARN-6335-yarn-native-services.002.patch, 
> YARN-6335-yarn-native-services.003.patch, 
> YARN-6335-yarn-native-services.004.patch, 
> YARN-6335-yarn-native-services.005.patch
>
>
> Slider has a lot of useful unit tests implemented in Groovy. We could convert 
> these to Java for YARN native services. The scope of this ticket will 
> include unit / minicluster tests only and will not include Slider's funtests, 
> which require a running cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6141) ppc64le on Linux doesn't trigger __linux get_executable codepath

2017-04-18 Thread Ayappan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972235#comment-15972235
 ] 

Ayappan commented on YARN-6141:
---

What is the inhibitor here?

As mentioned earlier, __linux__ is the form recommended by the GCC community: 
https://gcc.gnu.org/onlinedocs/cpp/System-specific-Predefined-Macros.html#System-specific-Predefined-Macros
Going forward, this is the standard.
This patch won't break any platform; it just complies with the standard.



> ppc64le on Linux doesn't trigger __linux get_executable codepath
> 
>
> Key: YARN-6141
> URL: https://issues.apache.org/jira/browse/YARN-6141
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
> Environment: $ uname -a
> Linux f8eef0f055cf 3.16.0-30-generic #40~14.04.1-Ubuntu SMP Thu Jan 15 
> 17:42:36 UTC 2015 ppc64le ppc64le ppc64le GNU/Linux
>Reporter: Sonia Garudi
>  Labels: ppc64le
> Attachments: YARN-6141.patch
>
>
> On ppc64le architecture, the build fails in the 'Hadoop YARN NodeManager' 
> project with the below error:
> Cannot safely determine executable path with a relative HADOOP_CONF_DIR on 
> this operating system.
> [WARNING]  #error Cannot safely determine executable path with a relative 
> HADOOP_CONF_DIR on this operating system.
> [WARNING]   ^
> [WARNING] make[2]: *** 
> [CMakeFiles/container.dir/main/native/container-executor/impl/get_executable.c.o]
>  Error 1
> [WARNING] make[2]: *** Waiting for unfinished jobs
> [WARNING] make[1]: *** [CMakeFiles/container.dir/all] Error 2
> [WARNING] make: *** [all] Error 2
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> Cmake version used :
> $ /usr/bin/cmake --version
> cmake version 2.8.12.2



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4359) Update LowCost agents logic to take advantage of YARN-4358

2017-04-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972175#comment-15972175
 ] 

Hadoop QA commented on YARN-4359:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 39s{color} 
| {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 8 unchanged - 0 fixed = 9 total (was 8) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 13 new + 76 unchanged - 10 fixed = 89 total (was 86) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 46 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 878 unchanged - 2 fixed = 878 total (was 880) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 40m  
5s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-4359 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12863739/YARN-4359.12.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2f7e3b390f84 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8dfcd95 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/15660/artifact/patchprocess/diff-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15660/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/15660/artifact/patchprocess/whitespace-eol.txt
 |