[jira] [Updated] (YARN-7669) [API] Introduce interfaces for placement constraint processing

2017-12-19 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7669:
--
Attachment: YARN-7669-YARN-6592.003.patch

Updating the patch based on [~kkaranasos]'s suggestions.

bq. RejectionReason: explain what each is expected to do. BTW if all our 
constraints are soft, why would we have a could_not_place
I am thinking that should be a property of the Algorithm, not the system. We 
can have an Algorithm that always assumes softness and thus will never reject a 
request. But it should be possible to plug in a strict implementation of the 
Algorithm that rejects a SchedulingRequest if it cannot find a Node that exactly 
matches.

I renamed the SchedulingProposalCollector to AlgorithmOutputCollector - it 
seemed to make more sense.

I also removed the SchedulingRequestHandler and SchedulingResponseHandler. Let's 
add them back as and when we need them.

[~cheersyang], Thanks for the review.

bq. line 55: #getNodes returns a list of node locations whose size equals the 
number of allocations in the scheduling request; why not propose more nodes 
than asked for, in case some of them get rejected by the scheduler?
The PlacedSchedulingRequest is a SchedulingRequest with which the Algorithm was 
able to associate a node. For each satisfied numAllocation, the Algorithm 
should add a Node to the list. We are essentially splitting the scheduling into 
2 phases: a placement phase (where we decide which node) and an allocation 
phase (where we try to actually allocate the request to the Node). The 
PlacedSchedulingRequest is the output of phase 1 - which means that at that 
point we have already considered all the possible nodes.
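
To make the phase-1 output concrete, here is a minimal sketch of the shape of 
such a placed request (the field names and the use of plain node-name strings 
are illustrative assumptions, not the exact patch API):

{code}
// Illustrative sketch only; the class in the patch may differ.
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.yarn.api.records.SchedulingRequest;

public class PlacedSchedulingRequest {
  private final SchedulingRequest request;
  // One node name per satisfied allocation, appended by the Algorithm;
  // on success, size() equals the request's numAllocations.
  private final List<String> nodes = new ArrayList<>();

  public PlacedSchedulingRequest(SchedulingRequest request) {
    this.request = request;
  }

  public SchedulingRequest getSchedulingRequest() {
    return request;
  }

  public List<String> getNodes() {
    return nodes;
  }
}
{code}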

bq. There is no set or add method for placed/rejected requests in this class.
True, I just return the whole collection, and the client is free to play with 
it (I did not want to complicate the first patch).

bq. It looks like a SchedulingRequest can only be either accepted or rejected; 
if a request asks for 100 containers and only 1 of them cannot be allocated, 
will it simply be rejected?
Nope. If the framework was able to allocate 99, then the response will contain 
a single rejected SchedulingRequest with numAllocations = 1. If you look at 
v005 and v006 of the YARN-7612 patch, you can see the full workflow along with 
test cases. Let me know if it makes sense.
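
A toy illustration of that split (numbers only, not the framework code):

{code}
// Toy illustration: a request for 100 containers of which 99 are
// allocated yields one rejected SchedulingRequest carrying the
// single unsatisfied allocation.
public class SplitExample {
  public static void main(String[] args) {
    int requested = 100;  // numAllocations asked for
    int allocated = 99;   // satisfied by the scheduler
    int rejected = requested - allocated;
    if (rejected > 0) {
      // The response would contain one rejected SchedulingRequest
      // with numAllocations = rejected (here, 1).
      System.out.println("Rejected numAllocations = " + rejected);
    }
  }
}
{code}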

bq. RejectionReason: What's the purpose of this? 
I've updated with some documentation - hopefully that will help clarify some 
things. Please also look at v005 and v006 of the YARN-7612 patch to see how it 
is being used.

> [API] Introduce interfaces for placement constraint processing
> --
>
> Key: YARN-7669
> URL: https://issues.apache.org/jira/browse/YARN-7669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7669-YARN-6592.001.patch, 
> YARN-7669-YARN-6592.002.patch, YARN-7669-YARN-6592.003.patch
>
>
> As per discussions in YARN-7612. This JIRA will introduce the generic 
> interfaces which will be implemented in YARN-7612






[jira] [Created] (YARN-7675) The new UI won't load for pre 2.8 Hadoop versions because queueCapacitiesByPartition is missing from the scheduler API

2017-12-19 Thread JIRA
Gergely Novák created YARN-7675:
---

 Summary: The new UI won't load for pre 2.8 Hadoop versions because 
queueCapacitiesByPartition is missing from the scheduler API
 Key: YARN-7675
 URL: https://issues.apache.org/jira/browse/YARN-7675
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-ui-v2
Reporter: Gergely Novák


If we connect the new YARN UI to any Hadoop version older than 2.8, it won't 
load. The console shows this trace:
{noformat}
TypeError: Cannot read property 'queueCapacitiesByPartition' of undefined
at Class.normalizeSingleResponse (yarn-ui.js:13903)
at Class.superWrapper [as normalizeSingleResponse] (vendor.js:31811)
at Class.handleQueue (yarn-ui.js:13928)
at Class.normalizeArrayResponse (yarn-ui.js:13952)
at Class.normalizeQueryResponse (vendor.js:101566)
at Class.normalizeResponse (vendor.js:101468)
at ember$data$lib$system$store$serializer$response$$normalizeResponseHelper 
(vendor.js:95345)
at vendor.js:95672
at Backburner.run (vendor.js:10426)
{noformat}






[jira] [Commented] (YARN-7032) [ATSv2] NPE while starting hbase co-processor when HBase authorization is enabled.

2017-12-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297945#comment-16297945
 ] 

Hudson commented on YARN-7032:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13408 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13408/])
YARN-7032. [ATSv2] NPE while starting hbase co-processor when HBase (sunilg: 
rev d62932c3b2fcacc81dc1f5048cdeb60fb0d38504)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/flow/FlowRunCoprocessor.java


> [ATSv2] NPE while starting hbase co-processor when HBase authorization is 
> enabled.
> --
>
> Key: YARN-7032
> URL: https://issues.apache.org/jira/browse/YARN-7032
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Fix For: 3.1.0, 2.10.0, 3.0.1
>
> Attachments: YARN-7032.01.patch, 
> hbase-yarn-regionserver-ctr-e136-1513029738776-1405-01-02.hwx.site.log
>
>
> It is seen randomly that the hbase co-processor fails to start with an NPE. 
> But on starting the RegionServer again, the RS starts successfully. 
> {noformat}
> 2017-08-17 05:53:13,535 ERROR 
> [RpcServer.FifoWFPBQ.priority.handler=18,queue=0,port=16020] 
> coprocessor.CoprocessorHost: The coprocessor 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor 
> threw java.lang.NullPointerException
> java.lang.NullPointerException
> at org.apache.hadoop.hbase.Tag.fromList(Tag.java:187)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor.prePut(FlowRunCoprocessor.java:102)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:885)
> {noformat}






[jira] [Commented] (YARN-6592) Rich placement constraints in YARN

2017-12-19 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297925#comment-16297925
 ] 

Weiwei Yang commented on YARN-6592:
---

Thanks [~kkaranasos], does this umbrella depend on YARN-3409 (that one seems 
to be the umbrella for adding node attributes)? I am asking because I did not 
find any child tasks under this umbrella to manage node attributes and to 
process constraints with respect to those attributes.

One more thing: besides the simple operators {{IN}} and {{NOT_IN}}, I think 
some more should be supported, such as {{GT}} (greater than), {{GE}} (greater 
than or equal to), {{LT}} (less than), and {{LE}} (less than or equal to). For 
example,

{code}
{target: node-attribute:diskNum GT 5, scope host}
{code}

which would allocate to nodes whose diskNum > 5. This is very useful for 
long-running services.
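
As a hedged sketch (the enum and method below are illustrative, not an existing 
YARN API), such comparison operators could be evaluated against a numeric node 
attribute like this:

{code}
// Illustrative only: evaluating the proposed comparison operators
// against a numeric node attribute such as diskNum.
public class AttributeOps {
  enum AttributeOp { IN, NOT_IN, GT, GE, LT, LE }

  static boolean satisfies(long nodeValue, AttributeOp op, long target) {
    switch (op) {
      case GT: return nodeValue > target;
      case GE: return nodeValue >= target;
      case LT: return nodeValue < target;
      case LE: return nodeValue <= target;
      default:
        throw new IllegalArgumentException("set-based operator: " + op);
    }
  }

  public static void main(String[] args) {
    // A node with diskNum = 8 satisfies "diskNum GT 5".
    System.out.println(satisfies(8, AttributeOp.GT, 5)); // true
  }
}
{code}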

> Rich placement constraints in YARN
> --
>
> Key: YARN-6592
> URL: https://issues.apache.org/jira/browse/YARN-6592
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Konstantinos Karanasos
> Attachments: YARN-6592-Rich-Placement-Constraints-Design-V1.pdf
>
>
> This JIRA consolidates the efforts of YARN-5468 and YARN-4902.
> It adds support for rich placement constraints to YARN, such as affinity and 
> anti-affinity between allocations within the same or across applications.






[jira] [Updated] (YARN-7670) Modifications to the ResourceScheduler to support SchedulingRequests

2017-12-19 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7670:
--
Fix Version/s: 3.1.0

> Modifications to the ResourceScheduler to support SchedulingRequests
> 
>
> Key: YARN-7670
> URL: https://issues.apache.org/jira/browse/YARN-7670
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Fix For: 3.1.0
>
> Attachments: YARN-7670-YARN-6592.001.patch, 
> YARN-7670-YARN-6592.002.patch, YARN-7670-YARN-6592.003.patch
>
>
> As per discussions in YARN-7612. This JIRA tracks the changes to the 
> ResourceScheduler interface and implementation in CapacityScheduler to 
> support SchedulingRequests






[jira] [Commented] (YARN-7669) [API] Introduce interfaces for placement constraint processing

2017-12-19 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297904#comment-16297904
 ] 

Weiwei Yang commented on YARN-7669:
---

Hi [~asuresh]

I've read the design doc of this umbrella and some committed patches. I have 
some comments; could you please take a look?

*PlacedSchedulingRequest*

# line 55: #getNodes returns a list of node locations whose size equals the 
number of allocations in the scheduling request; why not propose more nodes 
than asked for, in case some of them get rejected by the scheduler?

*PlacementAlgorithmOutput*

# There is no set or add method for placed/rejected requests in this class.
# It looks like a SchedulingRequest can only be either accepted or rejected; 
if a request asks for 100 containers and only 1 of them cannot be allocated, 
will it simply be rejected? 

*RejectionReason*

# What's the purpose of this? Will you handle rejected requests differently 
according to the reason they were rejected, or is it just for prompting a 
message to the client? If it is the former, I am not sure it is good to 
differentiate them; can we just use one common logic to handle rejected 
requests, like rescheduling? If it is the latter, an enum type might not be 
informative; it might be better to have a more detailed message, because a 
request might be rejected for all sorts of reasons.

Thanks

> [API] Introduce interfaces for placement constraint processing
> --
>
> Key: YARN-7669
> URL: https://issues.apache.org/jira/browse/YARN-7669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7669-YARN-6592.001.patch, 
> YARN-7669-YARN-6592.002.patch
>
>
> As per discussions in YARN-7612. This JIRA will introduce the generic 
> interfaces which will be implemented in YARN-7612






[jira] [Commented] (YARN-7669) [API] Introduce interfaces for placement constraint processing

2017-12-19 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297855#comment-16297855
 ] 

Arun Suresh commented on YARN-7669:
---

Thanks for the comments [~kkaranasos]. I will update the patch shortly, 
addressing most of your comments. For the remaining:

bq. PlacementAlgorithmOutput: shouldn't the rejectedRequests actually be a list 
of RejectedSchedulingRequests?
So the RejectedSchedulingRequest is a simple wrapper over the SchedulingRequest 
with a Reason, and it is the job of the Processor to assign that reason. If it 
is the algorithm that rejected the request, the Processor assigns the reason 
appropriately; if it is the scheduler#commit that could not allocate the placed 
request, the Processor would know and will be able to assign the correct 
Reason.
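
For illustration, a minimal sketch of such a wrapper (assuming the 
SchedulingRequest record from this branch and the RejectionReason enum from 
this patch; the actual class may differ):

{code}
// Illustrative sketch only; the wrapper in the patch may differ.
import org.apache.hadoop.yarn.api.records.SchedulingRequest;

public class RejectedSchedulingRequest {
  private final RejectionReason reason;     // assigned by the Processor
  private final SchedulingRequest request;  // the rejected request

  public RejectedSchedulingRequest(RejectionReason reason,
      SchedulingRequest request) {
    this.reason = reason;
    this.request = request;
  }

  public RejectionReason getReason() {
    return reason;
  }

  public SchedulingRequest getRequest() {
    return request;
  }
}
{code}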

bq. I don't see the API for the ConstraintPlacementAlgorithm. I see an init, I 
would expect sth like a "place" etc.
So the initial version had it. But if you look at some of the earlier comments 
in YARN-7612, I think we had discussed this. YARN-7612 will have a 
subclass/implementation of this that will have a place method. It is possible 
that the per-SchedulingRequest version of the Algorithm won't require a place() 
method.

W.r.t. the SchedulingProposalCollector, SchedulingRequestHandler, and 
SchedulingResponseHandler: let me see if I can add more in the docs. But as 
with the previous comment, they were not there in the initial versions; Wangda 
had requested they be added so as to make the interfaces friendlier to the 
per-SchedulingRequest placement implementation.
I can maybe drop the SchedulingRequestHandler and SchedulingResponseHandler for 
the time being.

> [API] Introduce interfaces for placement constraint processing
> --
>
> Key: YARN-7669
> URL: https://issues.apache.org/jira/browse/YARN-7669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7669-YARN-6592.001.patch, 
> YARN-7669-YARN-6592.002.patch
>
>
> As per discussions in YARN-7612. This JIRA will introduce the generic 
> interfaces which will be implemented in YARN-7612






[jira] [Commented] (YARN-7670) Modifications to the ResourceScheduler to support SchedulingRequests

2017-12-19 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297845#comment-16297845
 ] 

Konstantinos Karanasos commented on YARN-7670:
--

+1, thanks [~asuresh]. I think the test failure is unrelated. 
There were some tests also -- I guess you will add those as part of YARN-7612?

> Modifications to the ResourceScheduler to support SchedulingRequests
> 
>
> Key: YARN-7670
> URL: https://issues.apache.org/jira/browse/YARN-7670
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7670-YARN-6592.001.patch, 
> YARN-7670-YARN-6592.002.patch, YARN-7670-YARN-6592.003.patch
>
>
> As per discussions in YARN-7612. This JIRA tracks the changes to the 
> ResourceScheduler interface and implementation in CapacityScheduler to 
> support SchedulingRequests






[jira] [Comment Edited] (YARN-7670) Modifications to the ResourceScheduler to support SchedulingRequests

2017-12-19 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297845#comment-16297845
 ] 

Konstantinos Karanasos edited comment on YARN-7670 at 12/20/17 3:38 AM:


+1, thanks [~asuresh]. I think the test failure is unrelated. 
There were some tests also -- I guess you will add those as part of YARN-7612?


was (Author: kkaranasos):
+1, thank [~asuresh]. I think test failure is unrelated. 
There were some tests also -- I guess you will add those as part of YARN-7612?

> Modifications to the ResourceScheduler to support SchedulingRequests
> 
>
> Key: YARN-7670
> URL: https://issues.apache.org/jira/browse/YARN-7670
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7670-YARN-6592.001.patch, 
> YARN-7670-YARN-6592.002.patch, YARN-7670-YARN-6592.003.patch
>
>
> As per discussions in YARN-7612. This JIRA tracks the changes to the 
> ResourceScheduler interface and implementation in CapacityScheduler to 
> support SchedulingRequests






[jira] [Commented] (YARN-6592) Rich placement constraints in YARN

2017-12-19 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297841#comment-16297841
 ] 

Konstantinos Karanasos commented on YARN-6592:
--

Hi [~cheersyang]. Here is a high level description:
* Node labels are currently part of the capacity scheduler only and are coupled 
with the cluster resources. And, unless something has changed, you can have 
only one node label per node. For instance, let's say that you have blue and 
red nodes. Then, when you are defining your queues in the capacity scheduler, 
you can specify what percentage of the queue can go on red nodes and which 
percentage on blue nodes. Similarly, you can specify what percentage of the 
overall red nodes can be used by a specific queue. Overall, the important bit 
is that node labels are coupled with cluster capacity.
* Node attributes are, like you say, key-value pairs, and as such, they are 
strictly more expressive than node labels. But apart from that, they are not 
related to the cluster's capacity. A request can simply say that it wants to be 
placed on a node with a specific Java version or a specific generation of GPU.
* The current umbrella JIRA (YARN-6592) goes a step further by also assigning 
attributes to containers and not just nodes. So, you can say that you want your 
HBase master to be on a different node from your HBase region servers. It also 
supports more involved constraints. If you check YARN-6593, you will see that 
in the API we introduced, you are allowed to specify node attributes in 
placement constraints; see the pseudo example below. This way we want to unify 
the node-attribute and the container-label constraints.
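
For illustration, in the pseudo syntax used earlier in this thread (the exact 
constraint syntax is still being worked out in YARN-6593, so treat this as a 
sketch), a single application could mix both kinds of targets:

{code}
{target: node-attribute:javaVersion IN [1.8], scope host}
{target: allocation-tag:hbase-master NOT_IN, scope host}
{code}

The first constraint targets a node attribute (place on hosts running Java 
1.8); the second expresses anti-affinity to containers tagged hbase-master on 
the same host.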

Hope this clarifies the situation a bit.

> Rich placement constraints in YARN
> --
>
> Key: YARN-6592
> URL: https://issues.apache.org/jira/browse/YARN-6592
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Konstantinos Karanasos
> Attachments: YARN-6592-Rich-Placement-Constraints-Design-V1.pdf
>
>
> This JIRA consolidates the efforts of YARN-5468 and YARN-4902.
> It adds support for rich placement constraints to YARN, such as affinity and 
> anti-affinity between allocations within the same or across applications.






[jira] [Commented] (YARN-7670) Modifications to the ResourceScheduler to support SchedulingRequests

2017-12-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297837#comment-16297837
 ] 

genericqa commented on YARN-7670:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
46s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 21s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7670 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902955/YARN-7670-YARN-6592.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 556f8396ab39 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-6592 / bf2a8cc |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/18991/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18991/testReport/ |
| Max. process+thread count | 799 (vs. ulimit of 5000) |
| modules | C: 

[jira] [Commented] (YARN-7565) Yarn service pre-maturely releases the container after AM restart

2017-12-19 Thread Chandni Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297775#comment-16297775
 ] 

Chandni Singh commented on YARN-7565:
-

[~eyang] During recovery, the service record is fetched from the Registry. 
In ComponentInstance, record.description is assigned the compInstanceName:
{code}
 // Write service record into registry
  private  void updateServiceRecord(
  YarnRegistryViewForProviders yarnRegistry, ContainerStatus status) {
ServiceRecord record = new ServiceRecord();
String containerId = status.getContainerId().toString();
record.set(YARN_ID, containerId);
record.description = getCompInstanceName();
record.set(YARN_PERSISTENCE, PersistencePolicies.CONTAINER);
record.set(YARN_IP, status.getIPs().get(0));
record.set(YARN_HOSTNAME, status.getHost());
record.set(YARN_COMPONENT, component.getName());
try {
  yarnRegistry
  .putComponent(RegistryPathUtils.encodeYarnID(containerId), record);
} catch (IOException e) {
  LOG.error(
  "Failed to update service record in registry: " + containerId + "");
}
  }
{code}

> Yarn service pre-maturely releases the container after AM restart 
> --
>
> Key: YARN-7565
> URL: https://issues.apache.org/jira/browse/YARN-7565
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: 3.1.0
>
> Attachments: YARN-7565.001.patch, YARN-7565.002.patch, 
> YARN-7565.003.patch, YARN-7565.004.patch, YARN-7565.005.patch, 
> YARN-7565.addendum.001.patch
>
>
> With YARN-6168, recovered containers can be reported to AM in response to the 
> AM heartbeat. 
> Currently, the Service Master will release the containers, that are not 
> reported in the AM registration response, immediately.
> Instead, the master can wait for a configured amount of time for the 
> containers to be recovered by RM. These containers are sent to AM in the 
> heartbeat response. Once a container is not reported in the configured 
> interval, it can be released by the master.






[jira] [Commented] (YARN-6592) Rich placement constraints in YARN

2017-12-19 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297761#comment-16297761
 ] 

Weiwei Yang commented on YARN-6592:
---

Hi [~kkaranasos], [~asuresh], [~leftnoteasy]

What's the relationship of this one to YARN-3409? I am a bit confused by the 
existing node labels versus the new node attributes. It looks to me like node 
labels can easily be represented by attributes; they are just boolean KV 
attributes. Attributes seem to be more flexible. So will this one depend on 
YARN-3409? 

> Rich placement constraints in YARN
> --
>
> Key: YARN-6592
> URL: https://issues.apache.org/jira/browse/YARN-6592
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Konstantinos Karanasos
> Attachments: YARN-6592-Rich-Placement-Constraints-Design-V1.pdf
>
>
> This JIRA consolidates the efforts of YARN-5468 and YARN-4902.
> It adds support for rich placement constraints to YARN, such as affinity and 
> anti-affinity between allocations within the same or across applications.






[jira] [Comment Edited] (YARN-7670) Modifications to the ResourceScheduler to support SchedulingRequests

2017-12-19 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297754#comment-16297754
 ] 

Arun Suresh edited comment on YARN-7670 at 12/20/17 1:27 AM:
-

Updating the patch with checkstyle fixes and the extra warning added.

[~kkaranasos], I did think of adding another {{tryCommit}} method, but decided 
not to, since, firstly, I believe it is probably better to make it explicit, so 
the caller is aware. And secondly, given that the capacity scheduler is 
separated into 2 parts (request queuing and the actual commit/scheduling), 
maybe we should not think of it as the "default" way, and should keep the 
interfaces as explicit as possible.


was (Author: asuresh):
Updating patch with checkstyle fixes.

[~kkaranasos], I did think of adding another {{tryCommit}} method, but decided 
not to - since I believe firstly that it is porbably better to make it explicit 
- so the caller is aware. And secondly, given the capacity scheduler is 
separated into 2 parts (Request Queuing and Actual commit/scheduling) maybe we 
should not think of it as the "default" way and keep the interfaces explicit as 
much as possible.

> Modifications to the ResourceScheduler to support SchedulingRequests
> 
>
> Key: YARN-7670
> URL: https://issues.apache.org/jira/browse/YARN-7670
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7670-YARN-6592.001.patch, 
> YARN-7670-YARN-6592.002.patch, YARN-7670-YARN-6592.003.patch
>
>
> As per discussions in YARN-7612. This JIRA tracks the changes to the 
> ResourceScheduler interface and implementation in CapacityScheduler to 
> support SchedulingRequests






[jira] [Updated] (YARN-7670) Modifications to the ResourceScheduler to support SchedulingRequests

2017-12-19 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7670:
--
Attachment: YARN-7670-YARN-6592.003.patch

Updating the patch with checkstyle fixes.

[~kkaranasos], I did think of adding another {{tryCommit}} method, but decided 
not to, since, firstly, I believe it is probably better to make it explicit, so 
the caller is aware. And secondly, given that the capacity scheduler is 
separated into 2 parts (request queuing and the actual commit/scheduling), 
maybe we should not think of it as the "default" way, and should keep the 
interfaces as explicit as possible.

> Modifications to the ResourceScheduler to support SchedulingRequests
> 
>
> Key: YARN-7670
> URL: https://issues.apache.org/jira/browse/YARN-7670
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7670-YARN-6592.001.patch, 
> YARN-7670-YARN-6592.002.patch, YARN-7670-YARN-6592.003.patch
>
>
> As per discussions in YARN-7612. This JIRA tracks the changes to the 
> ResourceScheduler interface and implementation in CapacityScheduler to 
> support SchedulingRequests






[jira] [Commented] (YARN-7565) Yarn service pre-maturely releases the container after AM restart

2017-12-19 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297734#comment-16297734
 ] 

Eric Yang commented on YARN-7565:
-

[~csingh] This is happening in my development cluster.  My service spec file 
does not have a description field.  Description is an optional field, and I 
don't understand how the description can be useful in building up the 
compInstance hash map.

> Yarn service pre-maturely releases the container after AM restart 
> --
>
> Key: YARN-7565
> URL: https://issues.apache.org/jira/browse/YARN-7565
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: 3.1.0
>
> Attachments: YARN-7565.001.patch, YARN-7565.002.patch, 
> YARN-7565.003.patch, YARN-7565.004.patch, YARN-7565.005.patch, 
> YARN-7565.addendum.001.patch
>
>
> With YARN-6168, recovered containers can be reported to AM in response to the 
> AM heartbeat. 
> Currently, the Service Master will release the containers, that are not 
> reported in the AM registration response, immediately.
> Instead, the master can wait for a configured amount of time for the 
> containers to be recovered by RM. These containers are sent to AM in the 
> heartbeat response. Once a container is not reported in the configured 
> interval, it can be released by the master.






[jira] [Commented] (YARN-7466) ResourceRequest has a different default for allocationRequestId than Container

2017-12-19 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297729#comment-16297729
 ] 

Eric Yang commented on YARN-7466:
-

[~djp] I cherry-picked the addendum patch to branch-3.0.  Thanks for the heads up.

> ResourceRequest has a different default for allocationRequestId than Container
> --
>
> Key: YARN-7466
> URL: https://issues.apache.org/jira/browse/YARN-7466
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: 3.1.0, 3.0.1
>
> Attachments: YARN-7466.001.patch, YARN-7466.addendum.001.patch
>
>
> The default value of allocationRequestId is inconsistent.
> It is  -1 in {{ContainerProto}} but 0 in {{ResourceRequestProto}}






[jira] [Updated] (YARN-7466) ResourceRequest has a different default for allocationRequestId than Container

2017-12-19 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7466:

Fix Version/s: 3.0.1

> ResourceRequest has a different default for allocationRequestId than Container
> --
>
> Key: YARN-7466
> URL: https://issues.apache.org/jira/browse/YARN-7466
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: 3.1.0, 3.0.1
>
> Attachments: YARN-7466.001.patch, YARN-7466.addendum.001.patch
>
>
> The default value of allocationRequestId is inconsistent.
> It is  -1 in {{ContainerProto}} but 0 in {{ResourceRequestProto}}






[jira] [Commented] (YARN-7565) Yarn service pre-maturely releases the container after AM restart

2017-12-19 Thread Chandni Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297724#comment-16297724
 ] 

Chandni Singh commented on YARN-7565:
-

[~eyang] 
compInstance has been resolved from record.description (line 337 in 
ServiceScheduler) since before this change.  
Do you get this NPE from a test? 

> Yarn service pre-maturely releases the container after AM restart 
> --
>
> Key: YARN-7565
> URL: https://issues.apache.org/jira/browse/YARN-7565
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: 3.1.0
>
> Attachments: YARN-7565.001.patch, YARN-7565.002.patch, 
> YARN-7565.003.patch, YARN-7565.004.patch, YARN-7565.005.patch, 
> YARN-7565.addendum.001.patch
>
>
> With YARN-6168, recovered containers can be reported to AM in response to the 
> AM heartbeat. 
> Currently, the Service Master will release the containers, that are not 
> reported in the AM registration response, immediately.
> Instead, the master can wait for a configured amount of time for the 
> containers to be recovered by RM. These containers are sent to AM in the 
> heartbeat response. Once a container is not reported in the configured 
> interval, it can be released by the master.






[jira] [Commented] (YARN-7565) Yarn service pre-maturely releases the container after AM restart

2017-12-19 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297713#comment-16297713
 ] 

Eric Yang commented on YARN-7565:
-

I get a null pointer exception like this:

{code}
2017-12-20 00:07:47,079 [main] INFO  service.AbstractService - Service aaa 
failed in state STARTED; cause: java.lang.NullPointerException
java.lang.NullPointerException
at 
java.util.concurrent.ConcurrentHashMap.putVal(ConcurrentHashMap.java:1011)
at 
java.util.concurrent.ConcurrentHashMap.put(ConcurrentHashMap.java:1006)
at 
org.apache.hadoop.yarn.service.ServiceScheduler.lambda$recoverComponents$0(ServiceScheduler.java:360)
at java.util.HashMap.forEach(HashMap.java:1288)
at 
org.apache.hadoop.yarn.service.ServiceScheduler.recoverComponents(ServiceScheduler.java:352)
at 
org.apache.hadoop.yarn.service.ServiceScheduler.serviceStart(ServiceScheduler.java:292)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at 
org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
at 
org.apache.hadoop.yarn.service.ServiceMaster.lambda$serviceStart$0(ServiceMaster.java:251)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
at 
org.apache.hadoop.yarn.service.ServiceMaster.serviceStart(ServiceMaster.java:249)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at 
org.apache.hadoop.yarn.service.ServiceMaster.main(ServiceMaster.java:320)
{code}

It looks like there is a problem with the code logic where compInstance is 
resolved based on record.description.  Shouldn't compInstance be based on the 
container name?
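
If the root cause is indeed a null record.description being used as a 
ConcurrentHashMap key, a hedged sketch of a defensive check (the helper and its 
call site are assumptions inferred from the stack trace, not the actual 
ServiceScheduler code) could be:

{code}
import org.apache.hadoop.registry.client.types.ServiceRecord;

// Hypothetical guard: skip registry records whose description is
// missing, instead of letting ConcurrentHashMap.put(null, ...) throw
// a NullPointerException during recoverComponents.
final class RecordChecks {
  static boolean hasInstanceName(ServiceRecord record) {
    return record.description != null && !record.description.isEmpty();
  }
}
{code}

recoverComponents could then log and skip any record failing this check rather 
than failing the whole service start.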

> Yarn service pre-maturely releases the container after AM restart 
> --
>
> Key: YARN-7565
> URL: https://issues.apache.org/jira/browse/YARN-7565
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: 3.1.0
>
> Attachments: YARN-7565.001.patch, YARN-7565.002.patch, 
> YARN-7565.003.patch, YARN-7565.004.patch, YARN-7565.005.patch, 
> YARN-7565.addendum.001.patch
>
>
> With YARN-6168, recovered containers can be reported to AM in response to the 
> AM heartbeat. 
> Currently, the Service Master will release the containers, that are not 
> reported in the AM registration response, immediately.
> Instead, the master can wait for a configured amount of time for the 
> containers to be recovered by RM. These containers are sent to AM in the 
> heartbeat response. Once a container is not reported in the configured 
> interval, it can be released by the master.






[jira] [Commented] (YARN-7616) App status does not return state STABLE for a running and stable service

2017-12-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297705#comment-16297705
 ] 

Hudson commented on YARN-7616:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13407 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13407/])
YARN-7616. Map YARN application status to Service Status more (eyang: rev 
41b581012a83a17db785343362c718363e13e8f5)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/component/Component.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestYarnNativeServices.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/component/instance/ComponentInstance.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ServiceScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ServiceMaster.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java


> App status does not return state STABLE for a running and stable service
> 
>
> Key: YARN-7616
> URL: https://issues.apache.org/jira/browse/YARN-7616
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Attachments: YARN-7616.001.patch, YARN-7616.002.patch, 
> YARN-7616.003.patch
>
>
> state currently returns null for a running and stable service. Looks like the 
> code does not return ServiceState.STABLE under any circumstance. Will need to 
> wire this in.






[jira] [Commented] (YARN-7616) App status does not return state STABLE for a running and stable service

2017-12-19 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297691#comment-16297691
 ] 

Gour Saha commented on YARN-7616:
-

Thanks a lot [~eyang]

> App status does not return state STABLE for a running and stable service
> 
>
> Key: YARN-7616
> URL: https://issues.apache.org/jira/browse/YARN-7616
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Attachments: YARN-7616.001.patch, YARN-7616.002.patch, 
> YARN-7616.003.patch
>
>
> state currently returns null for a running and stable service. Looks like the 
> code does not return ServiceState.STABLE under any circumstance. Will need to 
> wire this in.






[jira] [Commented] (YARN-7616) App status does not return state STABLE for a running and stable service

2017-12-19 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297677#comment-16297677
 ] 

Eric Yang commented on YARN-7616:
-

+1. The status of the service is reflected more accurately with this patch.

> App status does not return state STABLE for a running and stable service
> 
>
> Key: YARN-7616
> URL: https://issues.apache.org/jira/browse/YARN-7616
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Attachments: YARN-7616.001.patch, YARN-7616.002.patch, 
> YARN-7616.003.patch
>
>
> state currently returns null for a running and stable service. Looks like the 
> code does not return ServiceState.STABLE under any circumstance. Will need to 
> wire this in.






[jira] [Commented] (YARN-7466) ResourceRequest has a different default for allocationRequestId than Container

2017-12-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297668#comment-16297668
 ] 

Hudson commented on YARN-7466:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13406 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13406/])
YARN-7466.  addendum patch for failing unit test.  (Contributed by (eyang: rev 
94a2ac6b719913aa698b66bf40b7ebbe6fa606da)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockAM.java


> ResourceRequest has a different default for allocationRequestId than Container
> --
>
> Key: YARN-7466
> URL: https://issues.apache.org/jira/browse/YARN-7466
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: 3.1.0
>
> Attachments: YARN-7466.001.patch, YARN-7466.addendum.001.patch
>
>
> The default value of allocationRequestId is inconsistent.
> It is  -1 in {{ContainerProto}} but 0 in {{ResourceRequestProto}}






[jira] [Commented] (YARN-7622) Allow fair-scheduler configuration on HDFS

2017-12-19 Thread Wilfred Spiegelenburg (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297666#comment-16297666
 ] 

Wilfred Spiegelenburg commented on YARN-7622:
-

Thanks for all the work and for working through the review. 
+1 (non-binding)

> Allow fair-scheduler configuration on HDFS
> --
>
> Key: YARN-7622
> URL: https://issues.apache.org/jira/browse/YARN-7622
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, resourcemanager
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Minor
> Attachments: YARN-7622.001.patch, YARN-7622.002.patch, 
> YARN-7622.003.patch, YARN-7622.004.patch, YARN-7622.005.patch
>
>
> The FairScheduler requires the allocation file to be hosted on the local 
> filesystem on the RM node(s). Allowing HDFS to store the allocation file will 
> provide improved redundancy, more options for scheduler updates, and RM 
> failover consistency in HA.






[jira] [Commented] (YARN-7466) ResourceRequest has a different default for allocationRequestId than Container

2017-12-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297663#comment-16297663
 ] 

genericqa commented on YARN-7466:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m  5s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7466 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902937/YARN-7466.addendum.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 81cd0a61356c 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 989c751 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/18989/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18989/testReport/ |
| Max. process+thread count | 879 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Commented] (YARN-7466) ResourceRequest has a different default for allocationRequestId than Container

2017-12-19 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297652#comment-16297652
 ] 

Junping Du commented on YARN-7466:
--

Hi [~jianhe] and [~eyang], this problem seems to apply to branch-3.0 as well. 
Shall we commit the patches to branch-3.0 so they get released in 3.0.1?

> ResourceRequest has a different default for allocationRequestId than Container
> --
>
> Key: YARN-7466
> URL: https://issues.apache.org/jira/browse/YARN-7466
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: 3.1.0
>
> Attachments: YARN-7466.001.patch, YARN-7466.addendum.001.patch
>
>
> The default value of allocationRequestId is inconsistent.
> It is  -1 in {{ContainerProto}} but 0 in {{ResourceRequestProto}}






[jira] [Resolved] (YARN-7507) TestNodeLabelContainerAllocation failing in trunk

2017-12-19 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang resolved YARN-7507.
-
Resolution: Duplicate

Fixed by addendum patch for YARN-7466.

> TestNodeLabelContainerAllocation failing in trunk
> -
>
> Key: YARN-7507
> URL: https://issues.apache.org/jira/browse/YARN-7507
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>
>   
> https://builds.apache.org/job/PreCommit-YARN-Build/18498/testReport/
> {code}  
> TestNodeLabelContainerAllocation.testPreferenceOfNeedyPrioritiesUnderSameAppTowardsNodePartitions:786->checkPendingResource:557
>  expected:<1024> but was:<0>
>   
> TestNodeLabelContainerAllocation.testPreferenceOfQueuesTowardsNodePartitions:985->checkPendingResource:557
>  expected:<5120> but was:<0>
>   TestNodeLabelContainerAllocation.testQueueMetricsWithLabels:1962 
> expected:<0> but was:<1024>
>   
> TestNodeLabelContainerAllocation.testQueueMetricsWithLabelsOnDefaultLabelNode:2065
>  expected:<1024> but was:<2048>
> {code}  






[jira] [Resolved] (YARN-7559) TestNodeLabelContainerAllocation failing intermittently

2017-12-19 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang resolved YARN-7559.
-
Resolution: Duplicate

Fixed by addendum patch for YARN-7466.

> TestNodeLabelContainerAllocation failing intermittently
> ---
>
> Key: YARN-7559
> URL: https://issues.apache.org/jira/browse/YARN-7559
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>







[jira] [Commented] (YARN-7670) Modifications to the ResourceScheduler to support SchedulingRequests

2017-12-19 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297647#comment-16297647
 ] 

Arun Suresh commented on YARN-7670:
---

Thanks for the review [~kkaranasos]. Sure, I'll add a WARN if numAllocations > 1.

> Modifications to the ResourceScheduler to support SchedulingRequests
> 
>
> Key: YARN-7670
> URL: https://issues.apache.org/jira/browse/YARN-7670
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7670-YARN-6592.001.patch, 
> YARN-7670-YARN-6592.002.patch
>
>
> As per discussions in YARN-7612. This JIRA tracks the changes to the 
> ResourceScheduler interface and implementation in CapacityScheduler to 
> support SchedulingRequests






[jira] [Commented] (YARN-7673) ClassNotFoundException: org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using hadoop-client-minicluster

2017-12-19 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297637#comment-16297637
 ] 

Junping Du commented on YARN-7673:
--

I think we may have missed adding yarn-server-api as a dependency of 
hadoop-client-minicluster. [~busbey] and [~bharatviswa], I think we should add 
it. Thoughts?

> ClassNotFoundException: 
> org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using 
> hadoop-client-minicluster
> --
>
> Key: YARN-7673
> URL: https://issues.apache.org/jira/browse/YARN-7673
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jeff Zhang
>
> I'd like to use hadoop-client-minicluster for a Hadoop downstream project, but 
> I encounter the following exception when starting the Hadoop minicluster. I 
> checked hadoop-client-minicluster, and it indeed does not contain this class. 
> Is this something that was missed when packaging the published jar?
> {code}
> java.lang.NoClassDefFoundError: 
> org/apache/hadoop/yarn/server/api/DistributedSchedulingAMProtocol
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.createResourceManager(MiniYARNCluster.java:851)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:285)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7616) App status does not return state STABLE for a running and stable service

2017-12-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297566#comment-16297566
 ] 

genericqa commented on YARN-7616:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core:
 The patch generated 0 new + 83 unchanged - 2 fixed = 83 total (was 85) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
44s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7616 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902940/YARN-7616.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7f40fc303e78 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 989c751 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18990/testReport/ |
| Max. process+thread count | 608 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 |
| Console output | 

[jira] [Comment Edited] (YARN-7616) App status does not return state STABLE for a running and stable service

2017-12-19 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297520#comment-16297520
 ] 

Gour Saha edited comment on YARN-7616 at 12/19/17 10:15 PM:


Uploading the 003 patch with the checkstyle issue reported in 002 fixed.


was (Author: gsaha):
Fixed the checkstyle issue

> App status does not return state STABLE for a running and stable service
> 
>
> Key: YARN-7616
> URL: https://issues.apache.org/jira/browse/YARN-7616
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Attachments: YARN-7616.001.patch, YARN-7616.002.patch, 
> YARN-7616.003.patch
>
>
> state currently returns null for a running and stable service. Looks like the 
> code does not return ServiceState.STABLE under any circumstance. Will need to 
> wire this in.
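
For illustration only, a rough sketch of what "wiring this in" could look like. Only {{ServiceState}} and the {{Service}}/{{Component}} record types are from the YARN service API; the readiness rule below is an assumption invented for this sketch, not the patch's actual logic.

{code}
// Hypothetical sketch: derive the reported state from component readiness.
// The "every requested container is present" rule is an assumption.
import org.apache.hadoop.yarn.service.api.records.Component;
import org.apache.hadoop.yarn.service.api.records.Service;
import org.apache.hadoop.yarn.service.api.records.ServiceState;

public class ServiceStateSketch {
  static ServiceState deriveState(Service service) {
    for (Component c : service.getComponents()) {
      long desired = c.getNumberOfContainers() == null
          ? 0 : c.getNumberOfContainers();
      long running = c.getContainers() == null
          ? 0 : c.getContainers().size();
      if (running < desired) {
        return ServiceState.STARTED;  // still converging, not stable yet
      }
    }
    return ServiceState.STABLE;       // all components fully satisfied
  }
}
{code}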



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7616) App status does not return state STABLE for a running and stable service

2017-12-19 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-7616:

Attachment: YARN-7616.003.patch

Fixed the checkstyle issue

> App status does not return state STABLE for a running and stable service
> 
>
> Key: YARN-7616
> URL: https://issues.apache.org/jira/browse/YARN-7616
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Attachments: YARN-7616.001.patch, YARN-7616.002.patch, 
> YARN-7616.003.patch
>
>
> state currently returns null for a running and stable service. Looks like the 
> code does not return ServiceState.STABLE under any circumstance. Will need to 
> wire this in.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7543) FileNotFoundException due to a broken link when creating a yarn service and missing max cpu limit check

2017-12-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297511#comment-16297511
 ] 

Hudson commented on YARN-7543:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13405 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13405/])
YARN-7543.  Add check for max cpu limit and missing file for YARN (eyang: rev 
989c75109a619deeaee7461864e7cb3c289c9421)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/utils/ServiceUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/utils/ServiceApiUtil.java


> FileNotFoundException due to a broken link when creating a yarn service and 
> missing max cpu limit check
> ---
>
> Key: YARN-7543
> URL: https://issues.apache.org/jira/browse/YARN-7543
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Jian He
> Fix For: yarn-native-services
>
> Attachments: YARN-7543.01.patch
>
>
> The hadoop lib dir had a broken link to a ojdb jar which was not really 
> required for a YARN service creation. The app submission failed with the 
> below FNFE. Ideally it should be handled and app should be successfully 
> submitted and let the app fail if it really needed the jar of the broken link 
> -
> {code}
> [root@ctr-e134-1499953498516-324910-01-02 ~]# yarn app -launch 
> gour-sleeper sleeper
> WARNING: YARN_LOG_DIR has been replaced by HADOOP_LOG_DIR. Using value of 
> YARN_LOG_DIR.
> WARNING: YARN_LOGFILE has been replaced by HADOOP_LOGFILE. Using value of 
> YARN_LOGFILE.
> WARNING: YARN_PID_DIR has been replaced by HADOOP_PID_DIR. Using value of 
> YARN_PID_DIR.
> WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
> 17/11/21 03:21:58 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 17/11/21 03:21:59 INFO client.RMProxy: Connecting to ResourceManager at 
> ctr-e134-1499953498516-324910-01-03.example.com/172.27.47.1:8050
> 17/11/21 03:22:00 WARN shortcircuit.DomainSocketFactory: The short-circuit 
> local reads feature cannot be used because libhadoop cannot be loaded.
> 17/11/21 03:22:00 INFO client.RMProxy: Connecting to ResourceManager at 
> ctr-e134-1499953498516-324910-01-03.example.com/172.27.47.1:8050
> 17/11/21 03:22:00 INFO client.ServiceClient: Loading service definition from 
> local FS: 
> /usr/hdp/3.0.0.0-493/hadoop-yarn/yarn-service-examples/sleeper/sleeper.json
> 17/11/21 03:22:01 INFO client.ServiceClient: Persisted service gour-sleeper 
> at 
> hdfs://ctr-e134-1499953498516-324910-01-03.example.com:8020/user/hdfs/.yarn/services/gour-sleeper/gour-sleeper.json
> 17/11/21 03:22:01 INFO conf.Configuration: resource-types.xml not found
> 17/11/21 03:22:01 WARN client.ServiceClient: AM log4j property file doesn't 
> exist: /usr/hdp/3.0.0.0-493/hadoop/conf/yarnservice-log4j.properties
> 17/11/21 03:22:01 INFO client.ServiceClient: Uploading all dependency jars to 
> HDFS. For faster submission of apps, pre-upload dependency jars to HDFS using 
> command: yarn app -enableFastLaunch
> Exception in thread "main" java.io.FileNotFoundException: File 
> /usr/hdp/3.0.0.0-493/hadoop/lib/ojdbc6.jar does not exist
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454)
>   at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:365)
>   at 
> org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:2399)
>   at 
> org.apache.hadoop.yarn.service.utils.CoreFileSystem.submitFile(CoreFileSystem.java:434)
>   at 
> org.apache.hadoop.yarn.service.utils.ServiceUtils.putAllJars(ServiceUtils.java:409)
>   at 
> org.apache.hadoop.yarn.service.provider.ProviderUtils.addAllDependencyJars(ProviderUtils.java:138)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.addJarResource(ServiceClient.java:695)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.submitApp(ServiceClient.java:553)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionCreate(ServiceClient.java:212)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionLaunch(ServiceClient.java:197)
>   at 
> 

[jira] [Updated] (YARN-7466) ResourceRequest has a different default for allocationRequestId than Container

2017-12-19 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-7466:

Attachment: YARN-7466.addendum.001.patch

Fixed the failing TestNodeLabelContainerAllocation RM tests in the addendum 
patch.

> ResourceRequest has a different default for allocationRequestId than Container
> --
>
> Key: YARN-7466
> URL: https://issues.apache.org/jira/browse/YARN-7466
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: 3.1.0
>
> Attachments: YARN-7466.001.patch, YARN-7466.addendum.001.patch
>
>
> The default value of allocationRequestId is inconsistent.
> It is -1 in {{ContainerProto}} but 0 in {{ResourceRequestProto}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-7466) ResourceRequest has a different default for allocationRequestId than Container

2017-12-19 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh reopened YARN-7466:
-

> ResourceRequest has a different default for allocationRequestId than Container
> --
>
> Key: YARN-7466
> URL: https://issues.apache.org/jira/browse/YARN-7466
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: 3.1.0
>
> Attachments: YARN-7466.001.patch
>
>
> The default value of allocationRequestId is inconsistent.
> It is -1 in {{ContainerProto}} but 0 in {{ResourceRequestProto}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7346) Fix compilation errors against hbase2 alpha release

2017-12-19 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297469#comment-16297469
 ] 

Ted Yu commented on YARN-7346:
--

HBASE-19112 has been integrated.

Please check whether a rebase is needed to use the new HBase API.

> Fix compilation errors against hbase2 alpha release
> ---
>
> Key: YARN-7346
> URL: https://issues.apache.org/jira/browse/YARN-7346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Vrushali C
> Attachments: YARN-7346.00.patch, YARN-7346.prelim1.patch, 
> YARN-7346.prelim2.patch, YARN-7581.prelim.patch
>
>
> When compiling hadoop-yarn-server-timelineservice-hbase against 2.0.0-alpha3, 
> I got the following errors:
> https://pastebin.com/Ms4jYEVB
> This issue is to fix the compilation errors.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5366) Improve handling of the Docker container life cycle

2017-12-19 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297452#comment-16297452
 ] 

Eric Yang commented on YARN-5366:
-

The current implementation seems to work like this:

#  Generate application data files in a local directory.
#  Write a launch_container.sh script in the local directory.
#  The launch_container.sh script contains instructions for mounting all local 
resources into the Docker container.
#  Launch {{docker run}} with the bootstrap script.

Container deletion service:

# Remove the local directory and the Docker container instance.

The current implementation depends heavily on resources in the local directory. 
There is additional delay in generating per-container resources, and a 
container becomes unusable if its launch_container.sh is removed. If container 
debugging is enabled and a container is in the stopped state, there is no 
guarantee that we can restart it with {{docker start}} to look inside. It would 
be better to pass environment variables to the {{docker run}} command than to 
run the bash script after the Docker instance is constructed. This ensures that 
changes to launch_container.sh have no influence on restarting the Docker 
instance, and it strengthens the ability to debug without worrying about 
loopholes that prevent debugging.
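
A minimal sketch of the {{docker run}}-with-environment idea, using only the standard JDK and the stock docker CLI; the image name and environment variables are placeholders, and this is not the NodeManager's actual launch path.

{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch: pass per-container settings as -e flags on docker run, instead of
// baking them into a launch_container.sh that must survive on local disk.
public class DockerRunSketch {
  public static Process launch(String image, Map<String, String> env)
      throws IOException {
    List<String> cmd = new ArrayList<>();
    cmd.add("docker");
    cmd.add("run");
    for (Map.Entry<String, String> e : env.entrySet()) {
      cmd.add("-e");
      cmd.add(e.getKey() + "=" + e.getValue());  // e.g. APP_ID=app_0001
    }
    cmd.add(image);                              // placeholder image name
    return new ProcessBuilder(cmd).inheritIO().start();
  }
}
{code}

With this, {{docker start}} on a stopped container sees the same environment it was created with, independent of any files left in the local directory.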


> Improve handling of the Docker container life cycle
> ---
>
> Key: YARN-5366
> URL: https://issues.apache.org/jira/browse/YARN-5366
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>  Labels: oct16-medium
> Attachments: YARN-5366.001.patch, YARN-5366.002.patch, 
> YARN-5366.003.patch, YARN-5366.004.patch, YARN-5366.005.patch, 
> YARN-5366.006.patch, YARN-5366.007.patch, YARN-5366.008.patch
>
>
> There are several paths that need to be improved with regard to the Docker 
> container lifecycle when running Docker containers on YARN.
> 1) Provide the ability to keep a container on the NodeManager for a set 
> period of time for debugging purposes.
> 2) Support sending signals to the process in the container to allow for 
> triggering stack traces, heap dumps, etc.
> 3) Support for Docker's live restore, which means moving away from the use of 
> {{docker wait}}. (YARN-5818)
> 4) Improve the resiliency of liveliness checks (kill -0) by adding retries.
> 5) Improve the resiliency of container removal by adding retries.
> 6) Only attempt to stop, kill, and remove containers if the current container 
> state allows for it.
> 7) Better handling of short lived containers when the container is stopped 
> before the PID can be retrieved. (YARN-6305)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7669) [API] Introduce interfaces for placement constraint processing

2017-12-19 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297395#comment-16297395
 ] 

Konstantinos Karanasos commented on YARN-7669:
--

Some more comments on this part of the code (in the order they appear in the 
patch, not in order of importance):
* ApplicationMasterServiceUtils: 
** A few changes are not needed (they are just checkstyle differences).
** "Add rejected Scheduling *Requests*"
* AllocateResponse: "Add a list of *rejected* scheduling requests to the 
allocate response."
* RejectedSchedulingRequest: you can explain in the javadoc at the beginning of 
the class what rejection means, e.g., whether it happens during 
scheduling/placement, whether it is only because of constraint violations, etc.
* RejectionReason: explain what each value is expected to mean. BTW, if all our 
constraints are soft, why would we have a could_not_place?
* PlacedSchedulingRequest: explain what the placementAttempt is; also, the word 
"instead" is duplicated in a javadoc.
* PlacementAlgorithmOutput: shouldn't the rejectedRequests actually be a list 
of RejectedSchedulingRequests?
* I don't see the API for the ConstraintPlacementAlgorithm. I see an init, but 
I would expect something like a "place" method (a rough sketch of such an 
interface follows below).
* I am not sure I understand the purpose of the SchedulingProposalCollector. 
The comments don't help either.
* Do we need the SchedulingRequestHandler? Same for the 
SchedulingResponseHandler. They look like overkill. Maybe at least keep them as 
inner classes?
* SchedulingResponse: say in the javadoc what it will do and where it will be 
used.
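
As an illustration of the kind of API being asked for, here is a rough sketch; every name in it is a guess based on the comments above, not the actual contents of the YARN-7669 patch.

{code}
// Rough sketch only: all type and method names are assumptions. The local
// placeholder interfaces stand in for whatever records the patch defines.
import java.util.Collection;

interface SchedulingRequestLike { /* resource sizing, constraints, ... */ }
interface PlacementOutputCollectorLike { /* collects placed/rejected */ }

public interface ConstraintPlacementAlgorithm {

  // One-time setup with whatever cluster/scheduler state the algorithm needs.
  void init();

  // Attach a node to each satisfiable numAllocation of the given requests,
  // and report the remainder as rejected via the collector.
  void place(Collection<SchedulingRequestLike> requests,
      PlacementOutputCollectorLike collector);
}
{code}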

> [API] Introduce interfaces for placement constraint processing
> --
>
> Key: YARN-7669
> URL: https://issues.apache.org/jira/browse/YARN-7669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7669-YARN-6592.001.patch, 
> YARN-7669-YARN-6592.002.patch
>
>
> As per discussions in YARN-7612, this JIRA will introduce the generic 
> interfaces that will be implemented in YARN-7612.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7670) Modifications to the ResourceScheduler to support SchedulingRequests

2017-12-19 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297377#comment-16297377
 ] 

Konstantinos Karanasos commented on YARN-7670:
--

Also, one more thing:
bq. I have mentioned in the javadoc for the attemptAllocationOnNode method that 
the numContainers will be ignored and the Scheduler will only try to allocate a 
single container on the requested node.
I know, but isn't it better to at least log a warning, so that people know 
they are using it wrong? Or do you assume it is okay to have multiple 
containers in the request while we place only one?
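
A minimal sketch of the warning being discussed, assuming the accessor names from this thread ({{SchedulingRequest#getResourceSizing}}, {{ResourceSizing#getNumAllocations}}); the surrounding method is hypothetical.

{code}
// Hypothetical guard at the top of an attemptAllocationOnNode-style method.
// LOG is assumed to be an org.slf4j.Logger on the enclosing class.
int numAllocations =
    schedulingRequest.getResourceSizing().getNumAllocations();
if (numAllocations > 1) {
  LOG.warn("SchedulingRequest has numAllocations={}; only a single"
      + " container will be allocated on the requested node.",
      numAllocations);
}
{code}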

> Modifications to the ResourceScheduler to support SchedulingRequests
> 
>
> Key: YARN-7670
> URL: https://issues.apache.org/jira/browse/YARN-7670
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7670-YARN-6592.001.patch, 
> YARN-7670-YARN-6592.002.patch
>
>
> As per discussions in YARN-7612, this JIRA tracks the changes to the 
> ResourceScheduler interface and implementation in CapacityScheduler to 
> support SchedulingRequests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7616) App status does not return state STABLE for a running and stable service

2017-12-19 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-7616:

Fix Version/s: (was: yarn-native-services)

> App status does not return state STABLE for a running and stable service
> 
>
> Key: YARN-7616
> URL: https://issues.apache.org/jira/browse/YARN-7616
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Attachments: YARN-7616.001.patch, YARN-7616.002.patch
>
>
> state currently returns null for a running and stable service. Looks like the 
> code does not return ServiceState.STABLE under any circumstance. Will need to 
> wire this in.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7670) Modifications to the ResourceScheduler to support SchedulingRequests

2017-12-19 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297353#comment-16297353
 ] 

Konstantinos Karanasos commented on YARN-7670:
--

Thanks, [~asuresh].
I prefer the boolean you added to {{tryCommit}} in this version of the patch.
I would keep a version of {{tryCommit}} without the boolean (calling the new 
one with {{true}}) to make clear what the default way of using it should be.
+1 once you do that and fix the checkstyle/unit test issues.
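
A minimal sketch of the suggested overload pattern; the exact parameter list and return type of {{tryCommit}} here are assumptions based on this discussion, not the actual CapacityScheduler code.

{code}
// Sketch only: keep the old signature as the default entry point so callers
// that don't care about the flag keep the original behavior.
public void tryCommit(Resource cluster, ResourceCommitRequest request) {
  // Default: update the request's pending resource accounting.
  tryCommit(cluster, request, true);
}

public void tryCommit(Resource cluster, ResourceCommitRequest request,
    boolean updatePending) {
  // Existing commit logic would live here; updatePending controls whether
  // pending resources are decremented on a successful commit.
}
{code}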

> Modifications to the ResourceScheduler to support SchedulingRequests
> 
>
> Key: YARN-7670
> URL: https://issues.apache.org/jira/browse/YARN-7670
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7670-YARN-6592.001.patch, 
> YARN-7670-YARN-6592.002.patch
>
>
> As per discussions in YARN-7612, this JIRA tracks the changes to the 
> ResourceScheduler interface and implementation in CapacityScheduler to 
> support SchedulingRequests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7512) Support service upgrade via YARN Service API and CLI

2017-12-19 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297322#comment-16297322
 ] 

Gour Saha commented on YARN-7512:
-

Needs the version field as per YARN-7523 (Introduce description and version 
field in Service record).

> Support service upgrade via YARN Service API and CLI
> 
>
> Key: YARN-7512
> URL: https://issues.apache.org/jira/browse/YARN-7512
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Chandni Singh
> Fix For: yarn-native-services
>
>
> YARN Service API and CLI needs to support service (and containers) upgrade in 
> line with what Slider supported in SLIDER-787 
> (http://slider.incubator.apache.org/docs/slider_specs/application_pkg_upgrade.html)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7670) Modifications to the ResourceScheduler to support SchedulingRequests

2017-12-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297278#comment-16297278
 ] 

genericqa commented on YARN-7670:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
19s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 6 new + 130 unchanged - 0 fixed = 136 total (was 130) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 46s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 54s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7670 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902895/YARN-7670-YARN-6592.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux eeacdd54c389 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-6592 / bf2a8cc |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18988/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 

[jira] [Commented] (YARN-7669) [API] Introduce interfaces for placement constraint processing

2017-12-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297248#comment-16297248
 ] 

genericqa commented on YARN-7669:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
12s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
29s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
13s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
15s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-6592 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 59s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 15 new + 279 unchanged - 1 fixed = 294 total (was 280) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
2s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 28s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7669 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902890/YARN-7669-YARN-6592.002.patch
 |
| Optional Tests |  

[jira] [Commented] (YARN-7672) hadoop-sls can not simulate huge scale of YARN

2017-12-19 Thread Wei Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297186#comment-16297186
 ] 

Wei Chen commented on YARN-7672:


It is an interesting point. So you mean you will set up like two daemons,
one for RM(acts for resource allocation) and the other for NMs(each thread
represents for a NM ?).  I am also working on a large scale cluster
simulation project now. I am interested in your configuration for your
simulation(machine capacity, machine load) and your bottleneck (like if the
performance is bottlenecked by CPU or the lock-contention or the other
resources)???


Wei Chen




> hadoop-sls can not simulate huge scale of YARN
> --
>
> Key: YARN-7672
> URL: https://issues.apache.org/jira/browse/YARN-7672
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: zhangshilong
>Assignee: zhangshilong
> Attachments: YARN-7672.patch
>
>
> Our YARN cluster has scaled to nearly 10 thousand nodes. We need to do 
> scheduler pressure testing.
> Using SLS, we start 2000+ threads to simulate NMs and AMs, but the CPU load 
> gets very high, up to 100+. I thought that would affect the performance 
> evaluation of the scheduler.
> So I thought of separating the scheduler from the simulator:
> I start a real RM. Then SLS registers nodes to the RM and submits apps to 
> the RM using the RM RPC.
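
For context, a self-contained sketch (plain JDK; no SLS internals assumed) of the thread-per-NM simulation pattern described above, which shows why thousands of periodic heartbeat tasks on one host drive the CPU load up. The numbers are illustrative only.

{code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Toy model of SLS-style load: one periodic heartbeat task per simulated NM.
// 2000+ such tasks on a single host compete for CPU, which is the contention
// the report above describes.
public class HeartbeatLoadSketch {
  public static void main(String[] args) {
    int numNodes = 2000;  // simulated NodeManagers
    ScheduledExecutorService pool =
        Executors.newScheduledThreadPool(numNodes);
    for (int i = 0; i < numNodes; i++) {
      final int nodeId = i;
      // Every simulated NM "heartbeats" once per second.
      pool.scheduleAtFixedRate(
          () -> System.out.println("heartbeat from simulated NM " + nodeId),
          0, 1, TimeUnit.SECONDS);
    }
  }
}
{code}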



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7662) [Atsv2] Define new set of configurations for reader and collectors to bind.

2017-12-19 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297146#comment-16297146
 ] 

Rohith Sharma K S commented on YARN-7662:
-

We can put it there as well since it does not break compatibility!

> [Atsv2] Define new set of configurations for reader and collectors to bind.
> ---
>
> Key: YARN-7662
> URL: https://issues.apache.org/jira/browse/YARN-7662
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Fix For: 3.1.0, 2.10.0, 3.0.1
>
> Attachments: YARN-7662.01.patch, YARN-7662.01.patch, 
> YARN-7662.02.patch, YARN-7662.03.patch, YARN-7662.04.patch, YARN-7662.05.patch
>
>
> While starting the Timeline Reader in secure mode, login happens using 
> timeline.service.address even though timeline.service.bindhost is configured 
> as 0.0.0.0. This requires the exact principal name matching that address to 
> be present in the keytab. 
> It is always better to log in using getLocalHost, which gives the machine 
> hostname configured in /etc/hosts, as NodeManager does in serviceStart. 
> Also, timeline.service.address is not required in non-secure mode, so it is 
> better to stay consistent between secure and non-secure modes.
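
For context, a minimal sketch of the login pattern the description argues for. {{InetAddress.getLocalHost()}} and {{SecurityUtil.login(conf, keytabKey, principalKey, hostname)}} are real Hadoop APIs, and the configuration keys are the standard timeline-service ones; the surrounding wiring is an assumption, not the committed patch.

{code}
import java.io.IOException;
import java.net.InetAddress;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.SecurityUtil;

// Sketch: log in with the machine's own hostname (from /etc/hosts) rather
// than the host parsed out of yarn.timeline-service.address, so the keytab
// only needs a principal for the local machine.
public class TimelineLoginSketch {
  static void login(Configuration conf) throws IOException {
    String hostname = InetAddress.getLocalHost().getCanonicalHostName();
    SecurityUtil.login(conf,
        "yarn.timeline-service.keytab",      // keytab file config key
        "yarn.timeline-service.principal",   // principal config key
        hostname);
  }
}
{code}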



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7662) [Atsv2] Define new set of configurations for reader and collectors to bind.

2017-12-19 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297130#comment-16297130
 ] 

Varun Saxena commented on YARN-7662:


Minor changes were made to the description of the reader bind-host 
configuration in yarn-default.xml before committing. I also removed the class 
member variable webAppURLWithoutScheme from NodeTimelineCollectorManager, 
since it was used inside only one method, and converted it into a local 
variable within that method.


> [Atsv2] Define new set of configurations for reader and collectors to bind.
> ---
>
> Key: YARN-7662
> URL: https://issues.apache.org/jira/browse/YARN-7662
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-7662.01.patch, YARN-7662.01.patch, 
> YARN-7662.02.patch, YARN-7662.03.patch, YARN-7662.04.patch, YARN-7662.05.patch
>
>
> While starting the Timeline Reader in secure mode, login happens using 
> timeline.service.address even though timeline.service.bindhost is configured 
> as 0.0.0.0. This requires the exact principal name matching that address to 
> be present in the keytab. 
> It is always better to log in using getLocalHost, which gives the machine 
> hostname configured in /etc/hosts, as NodeManager does in serviceStart. 
> Also, timeline.service.address is not required in non-secure mode, so it is 
> better to stay consistent between secure and non-secure modes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7662) [Atsv2] Define new set of configurations for reader and collectors to bind.

2017-12-19 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297132#comment-16297132
 ] 

Varun Saxena commented on YARN-7662:


Committed to trunk, branch-3.0 and branch-2. Does this need to go into 
branch-2.9?

> [Atsv2] Define new set of configurations for reader and collectors to bind.
> ---
>
> Key: YARN-7662
> URL: https://issues.apache.org/jira/browse/YARN-7662
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-7662.01.patch, YARN-7662.01.patch, 
> YARN-7662.02.patch, YARN-7662.03.patch, YARN-7662.04.patch, YARN-7662.05.patch
>
>
> While starting the Timeline Reader in secure mode, login happens using 
> timeline.service.address even though timeline.service.bindhost is configured 
> as 0.0.0.0. This requires the exact principal name matching that address to 
> be present in the keytab. 
> It is always better to log in using getLocalHost, which gives the machine 
> hostname configured in /etc/hosts, as NodeManager does in serviceStart. 
> Also, timeline.service.address is not required in non-secure mode, so it is 
> better to stay consistent between secure and non-secure modes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7662) [Atsv2] Define new set of configurations for reader and collectors to bind.

2017-12-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297118#comment-16297118
 ] 

Hudson commented on YARN-7662:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13404 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13404/])
YARN-7662. [ATSv2] Define new set of configurations for reader and 
(varunsaxena: rev c0aeb666a4d43aac196569d9ec6768d62139d2b9)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/AbstractTimelineReaderHBaseTestBase.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderServer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWebServices.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/TimelineReaderServer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/security/TestTimelineAuthFilterForV2.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/NodeTimelineCollectorManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/util/WebAppUtils.java


> [Atsv2] Define new set of configurations for reader and collectors to bind.
> ---
>
> Key: YARN-7662
> URL: https://issues.apache.org/jira/browse/YARN-7662
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-7662.01.patch, YARN-7662.01.patch, 
> YARN-7662.02.patch, YARN-7662.03.patch, YARN-7662.04.patch, YARN-7662.05.patch
>
>
> While starting the Timeline Reader in secure mode, login happens using 
> timeline.service.address even though timeline.service.bindhost is configured 
> as 0.0.0.0. This requires the exact principal name matching that address to 
> be present in the keytab. 
> It is always better to log in using getLocalHost, which gives the machine 
> hostname configured in /etc/hosts, as NodeManager does in serviceStart. 
> Also, timeline.service.address is not required in non-secure mode, so it is 
> better to stay consistent between secure and non-secure modes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7670) Modifications to the ResourceScheduler to support SchedulingRequests

2017-12-19 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7670:
--
Attachment: YARN-7670-YARN-6592.002.patch

Updating patch based on [~kkaranasos]'s suggestions.

bq. Can we unify the two createResourceCommitRequest in the CapacityScheduler 
that seem to duplicate a lot of code?
I tried this, but it turns out to be far messier, since we would have to 
create a CSAssignment, which does not yet have support for SchedulingRequests. 
Once that support is incorporated, we can probably merge both.

bq. In the createResourceCommitRequest, since it assumes we request a single 
container in the SchedulingRequest, shouldn't we add a check for that?
I have mentioned in the javadoc for the {{attemptAllocationOnNode}} method that 
the numContainers will be ignored and the Scheduler will only try to allocate a 
single container on the requested node.

> Modifications to the ResourceScheduler to support SchedulingRequests
> 
>
> Key: YARN-7670
> URL: https://issues.apache.org/jira/browse/YARN-7670
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7670-YARN-6592.001.patch, 
> YARN-7670-YARN-6592.002.patch
>
>
> As per discussions in YARN-7612, this JIRA tracks the changes to the 
> ResourceScheduler interface and implementation in CapacityScheduler to 
> support SchedulingRequests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7622) Allow fair-scheduler configuration on HDFS

2017-12-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297038#comment-16297038
 ] 

genericqa commented on YARN-7622:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 39 unchanged - 2 fixed = 39 total (was 41) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 32s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 59s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector 
|
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7622 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902876/YARN-7622.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a7538bc20231 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e040c97 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/18986/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 

[jira] [Created] (YARN-7674) Update Timeline Reader web app address in UI2

2017-12-19 Thread Rohith Sharma K S (JIRA)
Rohith Sharma K S created YARN-7674:
---

 Summary: Update Timeline Reader web app address in UI2
 Key: YARN-7674
 URL: https://issues.apache.org/jira/browse/YARN-7674
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Rohith Sharma K S
Assignee: Sunil G


YARN-7662 introduces a new set of configurations. UI2 needs to be updated 
accordingly.
cc: [~sunilg]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7669) [API] Introduce interfaces for placement constraint processing

2017-12-19 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297030#comment-16297030
 ] 

Arun Suresh edited comment on YARN-7669 at 12/19/17 4:21 PM:
-

Updating patch based on [~kkaranasos]'s suggestions.

bq. In the ApplicationMasterServiceUtils, I would put the 
setRejectedSchedulingRequests inside the first if clause, assuming most 
responses will not have a rejection.
Actually, the method itself is called only if there are rejections, which 
means that inside the method we can be sure there is a rejection. Also, let's 
keep it consistent with the remaining methods.


was (Author: asuresh):
Updating patch based on [~kkaranasos]'s suggestions.

bq. In the ApplicationMasterServiceUtils, I would put the 
setRejectedSchedulingRequests inside the first if clause, assuming most 
responses will not have a rejection.
Actually, if the method itself is called, only if there are rejections - which 
means inside the method, we can be sure there is a rejection. Also, lets keep 
it consistent with the remaining methods.

> [API] Introduce interfaces for placement constraint processing
> --
>
> Key: YARN-7669
> URL: https://issues.apache.org/jira/browse/YARN-7669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7669-YARN-6592.001.patch, 
> YARN-7669-YARN-6592.002.patch
>
>
> As per discussions in YARN-7612, this JIRA will introduce the generic 
> interfaces which will be implemented in YARN-7612.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7669) [API] Introduce interfaces for placement constraint processing

2017-12-19 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297030#comment-16297030
 ] 

Arun Suresh edited comment on YARN-7669 at 12/19/17 4:21 PM:
-

Updating patch based on [~kkaranasos]'s suggestions.

bq. In the ApplicationMasterServiceUtils, I would put the 
setRejectedSchedulingRequests inside the first if clause, assuming most 
responses will not have a rejection.
Actually, if the method itself is called, only if there are rejections - which 
means inside the method, we can be sure there is a rejection. Also, lets keep 
it consistent with the remaining methods.


was (Author: asuresh):
Updating patch based on [~kkaranasos]'s suggestion

> [API] Introduce interfaces for placement constraint processing
> --
>
> Key: YARN-7669
> URL: https://issues.apache.org/jira/browse/YARN-7669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7669-YARN-6592.001.patch, 
> YARN-7669-YARN-6592.002.patch
>
>
> As per discussions in YARN-7612, this JIRA will introduce the generic 
> interfaces which will be implemented in YARN-7612.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7669) [API] Introduce interfaces for placement constraint processing

2017-12-19 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7669:
--
Attachment: YARN-7669-YARN-6592.002.patch

Updating patch based on [~kkaranasos]'s suggestion

> [API] Introduce interfaces for placement constraint processing
> --
>
> Key: YARN-7669
> URL: https://issues.apache.org/jira/browse/YARN-7669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7669-YARN-6592.001.patch, 
> YARN-7669-YARN-6592.002.patch
>
>
> As per discussions in YARN-7612, this JIRA will introduce the generic 
> interfaces which will be implemented in YARN-7612.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7662) [Atsv2] Define new set of configurations for reader and collectors to bind.

2017-12-19 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16297008#comment-16297008
 ] 

Varun Saxena commented on YARN-7662:


Will commit it shortly

> [Atsv2] Define new set of configurations for reader and collectors to bind.
> ---
>
> Key: YARN-7662
> URL: https://issues.apache.org/jira/browse/YARN-7662
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-7662.01.patch, YARN-7662.01.patch, 
> YARN-7662.02.patch, YARN-7662.03.patch, YARN-7662.04.patch, YARN-7662.05.patch
>
>
> While starting the Timeline Reader in secure mode, login happens using 
> timeline.service.address even though timeline.service.bindhost is configured 
> with 0.0.0.0. This requires the exact principal name that matches the address 
> to be present in the keytabs. 
> It is better to log in using getLocalHost, which gives the machine hostname 
> configured in /etc/hosts, as the NodeManager does in serviceStart. 
> Also, timeline.service.address is not required in non-secure mode, so it is 
> better to keep behavior consistent between secure and non-secure modes.
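A minimal sketch of the login approach argued for above, assuming the standard 
{{SecurityUtil}} API (the exact call site in the patch may differ):

{code}
// Sketch: log in with the machine's hostname (resolved via /etc/hosts)
// instead of the configured timeline.service.address.
String hostname = InetAddress.getLocalHost().getHostName();
SecurityUtil.login(conf, YarnConfiguration.TIMELINE_SERVICE_KEYTAB,
    YarnConfiguration.TIMELINE_SERVICE_PRINCIPAL, hostname);
{code}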



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7612) Add Placement Processor Framework

2017-12-19 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296995#comment-16296995
 ] 

Arun Suresh commented on YARN-7612:
---

[~kkaranasos],
bq. At the moment we have resourcemanager/placement, 
resourcemanager/scheduler/placement, and resourcemanager/scheduler/constraint.
Had discussed this offline with wangda:
* resourcemanager/placement is for application-to-queue placement.
* resourcemanager/scheduler/placement holds the scheduler classes that support 
the above.
* resourcemanager/scheduler/constraint is for constraint-based placement (see 
the layout sketch below).
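
For reference, a quick sketch of the resulting package layout (roles as 
described above):

{noformat}
resourcemanager/
  placement/              <- application-to-queue placement
  scheduler/
    placement/            <- scheduler classes supporting the above
    constraint/           <- placement-constraint processing
{noformat}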

> Add Placement Processor Framework
> -
>
> Key: YARN-7612
> URL: https://issues.apache.org/jira/browse/YARN-7612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-7612-YARN-6592.001.patch, 
> YARN-7612-YARN-6592.002.patch, YARN-7612-YARN-6592.003.patch, 
> YARN-7612-YARN-6592.004.patch, YARN-7612-YARN-6592.005.patch, 
> YARN-7612-YARN-6592.006.patch, YARN-7612-YARN-6592.007.patch, 
> YARN-7612-v2.wip.patch, YARN-7612.wip.patch
>
>
> This introduces a Placement Processor and a planning algorithm framework to 
> handle placement constraints and scheduling requests from an app and place 
> them on nodes.
> The actual planning algorithm(s) will be handled in YARN-7613.
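As a rough illustration of the pluggable shape such a framework implies (the 
interface and method names below are illustrative, not the committed API):

{code}
// Illustrative only: a placement algorithm consumes SchedulingRequests and
// records placed/rejected proposals; concrete algorithms land in YARN-7613.
public interface PlacementAlgorithm {
  void place(Collection<SchedulingRequest> requests,
      AlgorithmOutput outputCollector);
}
{code}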



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7620) Allow node partition filters on Queues page of new YARN UI

2017-12-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296940#comment-16296940
 ] 

Hudson commented on YARN-7620:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13403 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13403/])
YARN-7620. Allow node partition filters on Queues page of new YARN UI. (sunilg: 
rev fe5b057c8144d01ef9fdfb2639a2cba97ead8144)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/yarn-queue-partition-capacity-labels.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.scss
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/queue-navigator.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/yarn-queues.scss
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/integration/components/yarn-queue-partition-capacity-labels-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-queues.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-queue/capacity-queue.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/constants.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queues.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue-partition-capacity-labels.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-queue/capacity-queue.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/queue-navigator.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/capacity-queue.hbs


> Allow node partition filters on Queues page of new YARN UI
> --
>
> Key: YARN-7620
> URL: https://issues.apache.org/jira/browse/YARN-7620
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Vasudevan Skm
>Assignee: Vasudevan Skm
> Fix For: 3.1.0
>
> Attachments: YARN-7620.001.patch, YARN-7620.002.patch, 
> YARN-7620.003.patch, YARN-7620.004.patch
>
>
> Allow users to filter their queues based on node labels



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7620) Allow node partition filters on Queues page of new YARN UI

2017-12-19 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7620:
--
Summary: Allow node partition filters on Queues page of new YARN UI  (was: 
Allow partition filters on Queues page)

> Allow node partition filters on Queues page of new YARN UI
> --
>
> Key: YARN-7620
> URL: https://issues.apache.org/jira/browse/YARN-7620
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Vasudevan Skm
>Assignee: Vasudevan Skm
> Fix For: 3.1.0
>
> Attachments: YARN-7620.001.patch, YARN-7620.002.patch, 
> YARN-7620.003.patch, YARN-7620.004.patch
>
>
> Allow users to filter their queues based on node labels



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7032) [ATSv2] NPE while starting hbase co-processor when HBase authorization is enabled.

2017-12-19 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296918#comment-16296918
 ] 

Sunil G commented on YARN-7032:
---

Jenkins came back clean. +1
Committing shortly if no objections.

> [ATSv2] NPE while starting hbase co-processor when HBase authorization is 
> enabled.
> --
>
> Key: YARN-7032
> URL: https://issues.apache.org/jira/browse/YARN-7032
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-7032.01.patch, 
> hbase-yarn-regionserver-ctr-e136-1513029738776-1405-01-02.hwx.site.log
>
>
> It is seen randomly that the HBase co-processor fails to start with an NPE. 
> On restarting the RegionServer, though, the RS comes up successfully. 
> {noformat}
> 2017-08-17 05:53:13,535 ERROR 
> [RpcServer.FifoWFPBQ.priority.handler=18,queue=0,port=16020] 
> coprocessor.CoprocessorHost: The coprocessor 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor 
> threw java.lang.NullPointerException
> java.lang.NullPointerException
> at org.apache.hadoop.hbase.Tag.fromList(Tag.java:187)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor.prePut(FlowRunCoprocessor.java:102)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:885)
> {noformat}
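
Since HBase's {{Tag.fromList}} dereferences the list it is given, a null guard 
at the {{prePut}} call site would avoid this particular NPE. A minimal hedged 
sketch (not necessarily what the attached patch does; {{getTagsForPut}} is a 
hypothetical helper):

{code}
// Sketch only: guard the tag list before handing it to HBase's Tag.fromList,
// which throws an NPE when given a null list.
List<Tag> tags = getTagsForPut(put);  // hypothetical helper
if (tags != null && !tags.isEmpty()) {
  byte[] tagBytes = Tag.fromList(tags);
  // ... attach tagBytes to the cells being written ...
}
{code}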



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7032) [ATSv2] NPE while starting hbase co-processor when HBase authorization is enabled.

2017-12-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296912#comment-16296912
 ] 

genericqa commented on YARN-7032:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7032 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902793/YARN-7032.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8b37ffa8ca64 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e040c97 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18985/testReport/ |
| Max. process+thread count | 303 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18985/console |
| Powered by | Apache Yetus 

[jira] [Updated] (YARN-7622) Allow fair-scheduler configuration on HDFS

2017-12-19 Thread Greg Phillips (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Phillips updated YARN-7622:

Attachment: YARN-7622.005.patch

> Allow fair-scheduler configuration on HDFS
> --
>
> Key: YARN-7622
> URL: https://issues.apache.org/jira/browse/YARN-7622
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, resourcemanager
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Minor
> Attachments: YARN-7622.001.patch, YARN-7622.002.patch, 
> YARN-7622.003.patch, YARN-7622.004.patch, YARN-7622.005.patch
>
>
> The FairScheduler requires the allocation file to be hosted on the local 
> filesystem on the RM node(s). Allowing HDFS to store the allocation file will 
> provide improved redundancy, more options for scheduler updates, and RM 
> failover consistency in HA.
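
For illustration, once non-local filesystems are supported, the existing 
allocation-file property could point at an HDFS URI instead of a local path (a 
hedged sketch; the property constant is the existing one, the URI is made up):

{code}
// Sketch: reference the fair-scheduler allocation file via HDFS
// ("yarn.scheduler.fair.allocation.file") once this change lands.
Configuration conf = new YarnConfiguration();
conf.set(FairSchedulerConfiguration.ALLOCATION_FILE,
    "hdfs://nn1.example.com:8020/yarn/fair-scheduler.xml");
{code}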



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7622) Allow fair-scheduler configuration on HDFS

2017-12-19 Thread Greg Phillips (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Phillips updated YARN-7622:

Attachment: (was: YARN-7622.005.patch)

> Allow fair-scheduler configuration on HDFS
> --
>
> Key: YARN-7622
> URL: https://issues.apache.org/jira/browse/YARN-7622
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, resourcemanager
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Minor
> Attachments: YARN-7622.001.patch, YARN-7622.002.patch, 
> YARN-7622.003.patch, YARN-7622.004.patch, YARN-7622.005.patch
>
>
> The FairScheduler requires the allocation file to be hosted on the local 
> filesystem on the RM node(s). Allowing HDFS to store the allocation file will 
> provide improved redundancy, more options for scheduler updates, and RM 
> failover consistency in HA.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7620) Allow partition filters on Queues page

2017-12-19 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296872#comment-16296872
 ] 

Sunil G commented on YARN-7620:
---

Committing shortly.

> Allow partition filters on Queues page
> --
>
> Key: YARN-7620
> URL: https://issues.apache.org/jira/browse/YARN-7620
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Vasudevan Skm
>Assignee: Vasudevan Skm
> Attachments: YARN-7620.001.patch, YARN-7620.002.patch, 
> YARN-7620.003.patch, YARN-7620.004.patch
>
>
> Allow users to filter their queues based on node labels



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7622) Allow fair-scheduler configuration on HDFS

2017-12-19 Thread Greg Phillips (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Phillips updated YARN-7622:

Attachment: YARN-7622.005.patch

> Allow fair-scheduler configuration on HDFS
> --
>
> Key: YARN-7622
> URL: https://issues.apache.org/jira/browse/YARN-7622
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, resourcemanager
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Minor
> Attachments: YARN-7622.001.patch, YARN-7622.002.patch, 
> YARN-7622.003.patch, YARN-7622.004.patch, YARN-7622.005.patch
>
>
> The FairScheduler requires the allocation file to be hosted on the local 
> filesystem on the RM node(s). Allowing HDFS to store the allocation file will 
> provide improved redundancy, more options for scheduler updates, and RM 
> failover consistency in HA.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7032) [ATSv2] NPE while starting hbase co-processor when HBase authorization is enabled.

2017-12-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296807#comment-16296807
 ] 

genericqa commented on YARN-7032:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  5m 
39s{color} | {color:red} Docker failed to build yetus/hadoop:5b98639. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7032 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902793/YARN-7032.01.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18984/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [ATSv2] NPE while starting hbase co-processor when HBase authorization is 
> enabled.
> --
>
> Key: YARN-7032
> URL: https://issues.apache.org/jira/browse/YARN-7032
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-7032.01.patch, 
> hbase-yarn-regionserver-ctr-e136-1513029738776-1405-01-02.hwx.site.log
>
>
> It is seen randomly that the HBase co-processor fails to start with an NPE. 
> On restarting the RegionServer, though, the RS comes up successfully. 
> {noformat}
> 2017-08-17 05:53:13,535 ERROR 
> [RpcServer.FifoWFPBQ.priority.handler=18,queue=0,port=16020] 
> coprocessor.CoprocessorHost: The coprocessor 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor 
> threw java.lang.NullPointerException
> java.lang.NullPointerException
> at org.apache.hadoop.hbase.Tag.fromList(Tag.java:187)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor.prePut(FlowRunCoprocessor.java:102)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:885)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7032) [ATSv2] NPE while starting hbase co-processor when HBase authorization is enabled.

2017-12-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296794#comment-16296794
 ] 

genericqa commented on YARN-7032:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  6m 
24s{color} | {color:red} Docker failed to build yetus/hadoop:5b98639. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7032 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902793/YARN-7032.01.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18983/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [ATSv2] NPE while starting hbase co-processor when HBase authorization is 
> enabled.
> --
>
> Key: YARN-7032
> URL: https://issues.apache.org/jira/browse/YARN-7032
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-7032.01.patch, 
> hbase-yarn-regionserver-ctr-e136-1513029738776-1405-01-02.hwx.site.log
>
>
> It is seen randomly that the HBase co-processor fails to start with an NPE. 
> On restarting the RegionServer, though, the RS comes up successfully. 
> {noformat}
> 2017-08-17 05:53:13,535 ERROR 
> [RpcServer.FifoWFPBQ.priority.handler=18,queue=0,port=16020] 
> coprocessor.CoprocessorHost: The coprocessor 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor 
> threw java.lang.NullPointerException
> java.lang.NullPointerException
> at org.apache.hadoop.hbase.Tag.fromList(Tag.java:187)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor.prePut(FlowRunCoprocessor.java:102)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:885)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5366) Improve handling of the Docker container life cycle

2017-12-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296782#comment-16296782
 ] 

genericqa commented on YARN-5366:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 14m 
35s{color} | {color:red} Docker failed to build yetus/hadoop:5b98639. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-5366 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902858/YARN-5366.008.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18982/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve handling of the Docker container life cycle
> ---
>
> Key: YARN-5366
> URL: https://issues.apache.org/jira/browse/YARN-5366
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>  Labels: oct16-medium
> Attachments: YARN-5366.001.patch, YARN-5366.002.patch, 
> YARN-5366.003.patch, YARN-5366.004.patch, YARN-5366.005.patch, 
> YARN-5366.006.patch, YARN-5366.007.patch, YARN-5366.008.patch
>
>
> There are several paths that need to be improved with regard to the Docker 
> container lifecycle when running Docker containers on YARN.
> 1) Provide the ability to keep a container on the NodeManager for a set 
> period of time for debugging purposes.
> 2) Support sending signals to the process in the container to allow for 
> triggering stack traces, heap dumps, etc.
> 3) Support for Docker's live restore, which means moving away from the use of 
> {{docker wait}}. (YARN-5818)
> 4) Improve the resiliency of liveliness checks (kill -0) by adding retries 
> (see the sketch after this list).
> 5) Improve the resiliency of container removal by adding retries.
> 6) Only attempt to stop, kill, and remove containers if the current container 
> state allows for it.
> 7) Better handling of short lived containers when the container is stopped 
> before the PID can be retrieved. (YARN-6305)
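
Regarding item 4 above, a minimal hedged sketch of what a retried liveliness 
check could look like (retry count and backoff are illustrative, not taken 
from the patch):

{code}
// Illustrative sketch only: retry the "kill -0 <pid>" liveliness probe a few
// times before concluding the container process is gone.
boolean isProcessAlive(String pid) throws InterruptedException {
  for (int attempt = 0; attempt < 3; attempt++) {
    try {
      Process p = new ProcessBuilder("kill", "-0", pid).start();
      if (p.waitFor() == 0) {
        return true;  // signal 0 delivered: the process exists
      }
    } catch (java.io.IOException e) {
      // treat exec failure as transient and retry below
    }
    Thread.sleep(100);  // brief pause between attempts
  }
  return false;
}
{code}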



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5366) Improve handling of the Docker container life cycle

2017-12-19 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-5366:
--
Attachment: YARN-5366.008.patch

Fixing checkstyle issues. The findbugs warning is unrelated.

> Improve handling of the Docker container life cycle
> ---
>
> Key: YARN-5366
> URL: https://issues.apache.org/jira/browse/YARN-5366
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>  Labels: oct16-medium
> Attachments: YARN-5366.001.patch, YARN-5366.002.patch, 
> YARN-5366.003.patch, YARN-5366.004.patch, YARN-5366.005.patch, 
> YARN-5366.006.patch, YARN-5366.007.patch, YARN-5366.008.patch
>
>
> There are several paths that need to be improved with regard to the Docker 
> container lifecycle when running Docker containers on YARN.
> 1) Provide the ability to keep a container on the NodeManager for a set 
> period of time for debugging purposes.
> 2) Support sending signals to the process in the container to allow for 
> triggering stack traces, heap dumps, etc.
> 3) Support for Docker's live restore, which means moving away from the use of 
> {{docker wait}}. (YARN-5818)
> 4) Improve the resiliency of liveliness checks (kill -0) by adding retries.
> 5) Improve the resiliency of container removal by adding retries.
> 6) Only attempt to stop, kill, and remove containers if the current container 
> state allows for it.
> 7) Better handling of short lived containers when the container is stopped 
> before the PID can be retrieved. (YARN-6305)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7032) [ATSv2] NPE while starting hbase co-processor when HBase authorization is enabled.

2017-12-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296602#comment-16296602
 ] 

genericqa commented on YARN-7032:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 23m  
3s{color} | {color:red} root in trunk failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-yarn-server-timelineservice-hbase in trunk 
failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
38s{color} | {color:red} hadoop-yarn-server-timelineservice-hbase in trunk 
failed. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
30s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
37s{color} | {color:red} hadoop-yarn-server-timelineservice-hbase in trunk 
failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-yarn-server-timelineservice-hbase in trunk 
failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-yarn-server-timelineservice-hbase in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-yarn-server-timelineservice-hbase in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 32s{color} 
| {color:red} hadoop-yarn-server-timelineservice-hbase in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-yarn-server-timelineservice-hbase in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 60 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
2s{color} | {color:red} The patch 768 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m  
8s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-yarn-server-timelineservice-hbase in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-yarn-server-timelineservice-hbase in the patch 
failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 32s{color} 
| {color:red} hadoop-yarn-server-timelineservice-hbase in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7032 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902793/YARN-7032.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8de0333426fe 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk 

[jira] [Updated] (YARN-7673) ClassNotFoundException: org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using hadoop-client-minicluster

2017-12-19 Thread Jeff Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Zhang updated YARN-7673:
-
Affects Version/s: 3.0.0

> ClassNotFoundException: 
> org.apache.hadoop.yarn.server.api.DistributedSchedulingAMProtocol when using 
> hadoop-client-minicluster
> --
>
> Key: YARN-7673
> URL: https://issues.apache.org/jira/browse/YARN-7673
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jeff Zhang
>
> I'd like to use hadoop-client-minicluster for a Hadoop downstream project, but 
> I encounter the following exception when starting the Hadoop minicluster. I 
> checked hadoop-client-minicluster, and it indeed does not contain this class. 
> Is this something that was missed when packaging the published jar?
> {code}
> java.lang.NoClassDefFoundError: 
> org/apache/hadoop/yarn/server/api/DistributedSchedulingAMProtocol
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.createResourceManager(MiniYARNCluster.java:851)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:285)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7032) [ATSv2] NPE while starting hbase co-processor when HBase authorization is enabled.

2017-12-19 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296477#comment-16296477
 ] 

genericqa commented on YARN-7032:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7032 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12902793/YARN-7032.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 47f82d34804a 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e040c97 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18979/testReport/ |
| Max. process+thread count | 301 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18979/console |
| Powered by | Apache Yetus 

[jira] [Commented] (YARN-7672) hadoop-sls can not simulate huge scale of YARN

2017-12-19 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296427#comment-16296427
 ] 

Wei Yan commented on YARN-7672:
---

Interesting to know you guys are running a 10K-node cluster. Do you have more 
info on why the CPU is at 100%? If it is because of simulating the NMs/AMs, 
talking to a real RM wouldn't solve the problem, right?

> hadoop-sls can not simulate huge scale of YARN
> --
>
> Key: YARN-7672
> URL: https://issues.apache.org/jira/browse/YARN-7672
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: zhangshilong
>Assignee: zhangshilong
> Attachments: YARN-7672.patch
>
>
> Our YARN cluster has scaled to nearly 10 thousand nodes. We need to do 
> scheduler pressure testing.
> Using SLS, we start 2000+ threads to simulate NMs and AMs, but the CPU load 
> climbs very high, to 100+. I thought that would affect the performance 
> evaluation of the scheduler. 
> So I thought to separate the scheduler from the simulator:
> I start a real RM; then SLS registers nodes with the RM and submits apps to 
> the RM using RM RPC.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7032) [ATSv2] NPE while starting hbase co-processor when HBase authorization is enabled.

2017-12-19 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16296425#comment-16296425
 ] 

Rohith Sharma K S commented on YARN-7032:
-

cc: [~sunilg]

> [ATSv2] NPE while starting hbase co-processor when HBase authorization is 
> enabled.
> --
>
> Key: YARN-7032
> URL: https://issues.apache.org/jira/browse/YARN-7032
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-7032.01.patch, 
> hbase-yarn-regionserver-ctr-e136-1513029738776-1405-01-02.hwx.site.log
>
>
> It is seen randomly that the HBase co-processor fails to start with an NPE. 
> On restarting the RegionServer, though, the RS comes up successfully. 
> {noformat}
> 2017-08-17 05:53:13,535 ERROR 
> [RpcServer.FifoWFPBQ.priority.handler=18,queue=0,port=16020] 
> coprocessor.CoprocessorHost: The coprocessor 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor 
> threw java.lang.NullPointerException
> java.lang.NullPointerException
> at org.apache.hadoop.hbase.Tag.fromList(Tag.java:187)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor.prePut(FlowRunCoprocessor.java:102)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:885)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7032) [ATSv2] NPE while starting hbase co-processor when HBase authorization is enabled.

2017-12-19 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-7032:

Attachment: YARN-7032.01.patch

> [ATSv2] NPE while starting hbase co-processor when HBase authorization is 
> enabled.
> --
>
> Key: YARN-7032
> URL: https://issues.apache.org/jira/browse/YARN-7032
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-7032.01.patch, 
> hbase-yarn-regionserver-ctr-e136-1513029738776-1405-01-02.hwx.site.log
>
>
> It is seen randomly that the HBase co-processor fails to start with an NPE. 
> On restarting the RegionServer, though, the RS comes up successfully. 
> {noformat}
> 2017-08-17 05:53:13,535 ERROR 
> [RpcServer.FifoWFPBQ.priority.handler=18,queue=0,port=16020] 
> coprocessor.CoprocessorHost: The coprocessor 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor 
> threw java.lang.NullPointerException
> java.lang.NullPointerException
> at org.apache.hadoop.hbase.Tag.fromList(Tag.java:187)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.flow.FlowRunCoprocessor.prePut(FlowRunCoprocessor.java:102)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:885)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org